Vincenzo Gallo
committed on
Commit · 6759906
Parent(s):
c072eb2
Add CodeReality-1T Evaluation Subset (19GB)
Browse files
This view is limited to 50 files because it contains too many changes.
See raw diff
- EVAL_SUBSET.md +102 -0
- Notebook/CodeReality_Full_Dataset_Exploration.ipynb +3 -0
- README.md +250 -0
- analysis/dataset_index.json +3 -0
- analysis/language_stats.json +3 -0
- analysis/metrics.json +3 -0
- benchmarks/README.md +184 -0
- benchmarks/benchmark_summary.csv +3 -0
- benchmarks/bug_detection_benchmark.py +278 -0
- benchmarks/code_completion_benchmark.py +357 -0
- benchmarks/cross_language_translation_benchmark.py +331 -0
- benchmarks/license_detection_benchmark.py +197 -0
- data/README.md +56 -0
- data/enhanced_enhanced_enhanced_sustained_batch_000066_20250902_220910.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_000101_20250829_181259.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_000138_20250829_063128.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_000750_20250829_004837.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_001012_20250830_153945.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_001122_20250905_155200.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_001200_20250908_113430.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_001231_20250909_120242.jsonl +3 -0
- data/enhanced_enhanced_enhanced_sustained_batch_001636_20250911_144513.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000012_20250829_082023.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000022_20250913_172734.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000023_20250830_052929.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000026_20250910_170950.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000037_20250901_115620.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000037_20250905_185022.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000042_20250909_195036.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000052_20250901_000828.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000055_20250906_072036.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000065_20250911_175251.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000068_20250913_181533.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000071_20250905_193927.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000075_20250905_194728.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000080_20250906_140532.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000086_20250903_223328.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000092_20250904_224315.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000100_20250902_010630.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000106_20250911_182607.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000107_20250831_125844.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000112_20250907_203743.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000116_20250831_130816.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000133_20250902_013852.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000139_20250829_183519.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000144_20250827_222718.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000145_20250905_213029.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000148_20250909_211703.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000175_20250913_200555.jsonl +3 -0
- data/enhanced_enhanced_sustained_batch_000192_20250830_192234.jsonl +3 -0
EVAL_SUBSET.md
ADDED
@@ -0,0 +1,102 @@
# CodeReality-1T Evaluation Subset

## Location
The completed evaluation subset (19 GB) is located at:
```
/mnt/z/CodeReality_Final/codereality-1t/eval/subset/data/
```

## Subset Statistics

### Overall Metrics
- **Files**: 323 JSONL files
- **Size**: 19.0 GB
- **Repositories**: 2,049 (estimated)
- **Creation**: Research value scoring with diversity sampling

### Research Characteristics
| Characteristic | Count | Percentage | Details |
|----------------|-------|------------|---------|
| Multi-repo files (5+ repos) | 323 | 100% | Files containing 5+ repositories each |
| Files with commit history | 2,049 | 100% | Complete git history available |
| Cross-language content | 1,495 | 73% | Repositories with multiple programming languages |
| Build system configurations | 798 | 39% | Makefile, package.json, build.gradle, pom.xml |
| Issue tracking data | 1,209 | 59% | GitHub/GitLab issues and discussions |
| Bug-fix commits | 1,845 | 90% | Commits identified as bug fixes |
| Test coverage | 1,332 | 65% | Repositories with test files |
| Documentation | 1,843 | 90% | README and documentation files |

### Repository Size Distribution
| Size Range | Repositories | Average Size |
|------------|--------------|--------------|
| Small (< 10 MB) | ~800 | 3.2 MB |
| Medium (10-100 MB) | ~1,100 | 42.1 MB |
| Large (> 100 MB) | ~149 | 187.3 MB |

### Language Diversity (Estimated)
- **JavaScript/TypeScript**: ~35% of content
- **Python**: ~20% of content
- **Java/C/C++**: ~25% of content
- **Mixed/Other**: ~20% of content

## Structure
```
eval_subset/
├── data/                  # 323 curated JSONL files (19.0 GB)
├── docs/                  # Usage examples and documentation
├── eval_metadata.json     # Complete subset metadata
└── USAGE_EXAMPLES.md      # Demonstration code examples
```

## Usage
The subset is ready for:
- Code completion benchmarks (Pass@k evaluation)
- License detection training/testing
- Cross-language analysis
- Bug detection studies
- Repository classification

## Access
To use the evaluation subset:

```python
import json
import os

eval_dir = "/mnt/z/CodeReality_Final/codereality-1t/eval/subset"

# Load the subset metadata
with open(os.path.join(eval_dir, 'eval_metadata.json'), 'r') as f:
    metadata = json.load(f)

print(f"Subset: {metadata['eval_subset_info']['name']}")
print(f"Files: {metadata['subset_statistics']['total_files']}")
print(f"Size: {metadata['subset_statistics']['total_size_gb']} GB")

# Load sample data: print the first repository from each of the first 5 files
data_dir = os.path.join(eval_dir, 'data')
for filename in os.listdir(data_dir)[:5]:
    file_path = os.path.join(data_dir, filename)
    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo_data = json.loads(line)
            print(f"Repository: {repo_data.get('name', 'Unknown')}")
            break  # just the first repo from each file
```

## Benchmarks
Demonstration benchmarks are available in `../benchmarks/`:
- `license_detection_benchmark.py`
- `code_completion_benchmark.py`

See `../benchmarks/README.md` for detailed usage instructions.

## Metadata
Complete metadata in `eval_metadata.json` includes:
- Selection methodology
- Research characteristics
- Size distribution
- File manifest with checksums (see the verification sketch below)
- Recommended evaluation tasks
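
Since the manifest ships per-file checksums, integrity can be verified before running benchmarks. A minimal sketch, assuming the manifest stores entries under a `files` key with `path` and `sha256` fields (these field names are illustrative assumptions, not a documented schema):

```python
import hashlib
import json
import os

def verify_subset(eval_dir: str) -> None:
    """Compare on-disk SHA256 hashes against the manifest (assumed schema)."""
    with open(os.path.join(eval_dir, 'eval_metadata.json'), 'r') as f:
        metadata = json.load(f)

    # NOTE: 'files', 'path', and 'sha256' are assumed key names for illustration.
    for entry in metadata.get('files', []):
        file_path = os.path.join(eval_dir, entry['path'])
        digest = hashlib.sha256()
        with open(file_path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):  # read in 1 MB chunks
                digest.update(chunk)
        status = "OK" if digest.hexdigest() == entry['sha256'] else "MISMATCH"
        print(f"{status}  {entry['path']}")

verify_subset("/mnt/z/CodeReality_Final/codereality-1t/eval/subset")
```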

This subset represents a curated, research-ready portion of the complete CodeReality-1T dataset, optimized for standardized evaluation and benchmarking.
Notebook/CodeReality_Full_Dataset_Exploration.ipynb
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bbb16b798b3792c34e14b413f3ac9148d8ae85b3612231bebaf95fe8a5e73bbb
size 10348
README.md
ADDED
@@ -0,0 +1,250 @@
# CodeReality-1T: Large-Scale Deliberately Noisy Code Dataset






## ⚠️ Important Limitations

> **⚠️ Not Enterprise-Ready**: This dataset is deliberately noisy and designed for research only. It contains mixed/unknown licenses, possible secrets, potential security vulnerabilities, duplicate code, and experimental repositories. **It requires substantial preprocessing for production use.**
>
> **Use at your own risk**: this is a research dataset for robustness testing and for developing data curation methods.

## Overview

**CodeReality-1T** is a large-scale, deliberately noisy code repository dataset designed for robust AI research. It contains **397,475 repositories** across **21 programming languages** in **3.05 TB** of uncompressed data, specifically curated to test robustness, data curation methods, and real-world code understanding.

### Key Features
- ✅ **Complete Coverage**: 100% analysis of all 397,475 repositories (no sampling)
- ✅ **BigCode Compliant**: Meets all community standards for transparency and reproducibility
- ✅ **Deliberately Noisy**: Includes duplicates, incomplete code, and experimental projects
- ✅ **Rich Metadata**: Enhanced Blueprint metadata with cross-domain classification
- ✅ **Professional Grade**: 63.7-hour comprehensive analysis with open source tools

## Quick Start

### Dataset Structure
```
codereality-1t/
├── data/                      # Main dataset location reference
│   ├── README.md              # Data access instructions
│   └── manifest.json          # Integrity verification
├── analysis/                  # Analysis results
│   ├── dataset_index.json     # Complete file index (29MB)
│   └── metrics.json           # Comprehensive analysis results
├── docs/                      # Documentation
│   ├── DATASET_CARD.md        # Comprehensive dataset card
│   └── LICENSE.md             # Licensing information
└── eval/                      # Evaluation resources
    └── subset/                # Curated 19GB research subset
```

### Loading the Dataset

```python
import json

# Load the dataset index
with open('analysis/dataset_index.json', 'r') as f:
    index = json.load(f)

print(f"Files: {len(index['files'])}")
print(f"Repositories: {sum(f['repository_count'] for f in index['files'])}")

# Access data files (the index stores each archive's location under 'path'):
# print the first repository from each of the first 5 files
for file_info in index['files'][:5]:
    file_path = file_info['path']
    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo_data = json.loads(line)
            print(f"Repository: {repo_data.get('name', 'Unknown')}")
            break  # just the first repo from each file
```

## Dataset Statistics

### Scale
- **Total Repositories**: 397,475
- **Total Files**: 52,692 JSONL archives
- **Total Size**: 3.05 TB uncompressed
- **Languages Detected**: 21
- **Analysis Coverage**: 100% (no sampling)

### Language Distribution (Top 10)
| Language | Repositories | Percentage |
|----------|--------------|------------|
| Unknown | 389,941 | 98.1% |
| Python | 4,738 | 1.2% |
| Shell | 4,505 | 1.1% |
| C | 3,969 | 1.0% |
| C++ | 3,339 | 0.8% |
| HTML | 2,487 | 0.6% |
| JavaScript | 2,394 | 0.6% |
| Go | 2,110 | 0.5% |
| Java | 2,026 | 0.5% |
| CSS | 1,655 | 0.4% |
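
The table above can be recomputed from the analysis artifacts. A minimal sketch, assuming `analysis/language_stats.json` is a flat mapping from language name to repository count (the exact schema is not documented here, so treat the key layout as an assumption):

```python
import json

# ASSUMPTION: language_stats.json is a flat {language: repo_count} mapping.
with open('analysis/language_stats.json', 'r') as f:
    stats = json.load(f)

total = sum(stats.values())
for lang, count in sorted(stats.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{lang:12s} {count:8,d} {100 * count / total:5.1f}%")
```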

### Duplicate Analysis
**Exact Duplicates**: 0% exact SHA256 duplicates detected across file-level content
**Semantic Duplicates**: ~18% estimated semantic duplicates and forks, preserved by design
**Research Value**: Duplicates intentionally maintained for real-world code distribution studies
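
The exact-duplicate figure refers to content hashing. A minimal sketch of this kind of file-level check, assuming each repository record carries a `files` list with `content` strings (the same layout the benchmark scripts below assume):

```python
import hashlib
import json

def count_exact_duplicates(jsonl_path: str):
    """Count file-level exact duplicates in one JSONL archive via SHA256."""
    seen, duplicates, total = set(), 0, 0
    with open(jsonl_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo = json.loads(line)
            for file_obj in repo.get('files', []):
                content = file_obj.get('content', '') if isinstance(file_obj, dict) else ''
                if not content:
                    continue
                total += 1
                digest = hashlib.sha256(content.encode('utf-8')).hexdigest()
                if digest in seen:
                    duplicates += 1
                else:
                    seen.add(digest)
    return duplicates, total
```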

### License Analysis
**License Detection**: 0% detection rate (a design decision for noisy-dataset research)
**Unknown Licenses**: 96.4% of repositories marked as "Unknown" by design
**Research Purpose**: Preserved to test license detection systems and curation methods

### Security Analysis
⚠️ **Security Warning**: The dataset contains potential secrets:
- Password patterns: 1,231,942 occurrences
- Token patterns: 353,266 occurrences
- Secret patterns: 71,778 occurrences
- API key patterns: 4,899 occurrences
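
Counts like these typically come from keyword or regex scans over file contents. A minimal sketch of such a scan (the patterns below are illustrative; the dataset's actual scanning rules are not published here):

```python
import json
import re
from collections import Counter

# Illustrative patterns only, not the rules used to produce the figures above.
PATTERNS = {
    'password': re.compile(r'password\s*[:=]', re.IGNORECASE),
    'token':    re.compile(r'token\s*[:=]', re.IGNORECASE),
    'secret':   re.compile(r'secret\s*[:=]', re.IGNORECASE),
    'api_key':  re.compile(r'api[_-]?key\s*[:=]', re.IGNORECASE),
}

def scan_jsonl(jsonl_path: str) -> Counter:
    """Count occurrences of each suspicious pattern across file contents."""
    counts = Counter()
    with open(jsonl_path, 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            repo = json.loads(line)
            for file_obj in repo.get('files', []):
                content = file_obj.get('content', '') if isinstance(file_obj, dict) else ''
                for name, pattern in PATTERNS.items():
                    counts[name] += len(pattern.findall(content))
    return counts
```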

## Research Applications

### Primary Use Cases
1. **Code LLM Robustness**: Testing model performance on noisy, real-world data
2. **Data Curation Research**: Developing automated filtering and cleaning methods
3. **License Detection**: Training and evaluating license classification systems
4. **Bug-Fix Studies**: Before/after commit analysis for automated debugging
5. **Cross-Language Analysis**: Multi-language repository understanding

### Evaluation Subset & Benchmarks
A curated **19GB evaluation subset** is now available for standardized benchmarks:
- **323 files** containing **2,049 repositories**
- Research value scoring with diversity sampling
- Cross-language implementations and multi-repo analysis
- Complete build system configurations
- Enhanced metadata with commit history and issue tracking

**Demonstration Benchmarks** available in `eval/benchmarks/`:
- **License Detection**: Automated license classification evaluation
- **Code Completion**: Pass@k metrics for code generation models
- **Extensible Framework**: Easy to add new evaluation tasks

## Benchmarks & Results

### 📊 **Baseline Performance**
Demonstration benchmark results are available in `eval/results/`:
- [`license_detection_sample_results.json`](eval/results/license_detection_sample_results.json) - 9.8% accuracy (challenging baseline)
- [`code_completion_sample_results.json`](eval/results/code_completion_sample_results.json) - 14.2% Pass@1 (noisy data challenge)

### 🏃 **Quick Start Benchmarking**
```bash
cd eval/benchmarks
python3 license_detection_benchmark.py   # License classification
python3 code_completion_benchmark.py     # Code generation Pass@k
```

**Note**: These are demonstration baselines, not production-ready models. The results show the expected challenges of deliberately noisy data.

### 📊 **Results Summary**
- **License Detection**: 9.8% accuracy baseline ([`license_detection_sample_results.json`](eval/results/license_detection_sample_results.json))
- **Code Completion**: 14.2% Pass@1, 34.6% Pass@5 ([`code_completion_sample_results.json`](eval/results/code_completion_sample_results.json))
- **Framework Scaffolds**: Bug detection and cross-language translation, ready for community implementation
- **Complete Analysis**: [`benchmark_summary.csv`](eval/results/benchmark_summary.csv) - all metrics in one table for comparison and research use

## Usage Guidelines

### ✅ Recommended Uses
- Academic research and education
- Robustness testing of code models
- Development of data curation methods
- License detection research
- Security pattern analysis

### ❌ Important Limitations
- **No Commercial Use** without individual license verification
- **Research Only**: Many repositories have unknown licensing
- **Security Risk**: Contains potential secrets and vulnerabilities
- **Deliberately Noisy**: Requires preprocessing for most applications

## Documentation

| Document | Description |
|----------|-------------|
| [Dataset Card](docs/DATASET_CARD.md) | Comprehensive dataset documentation |
| [License](docs/LICENSE.md) | Licensing terms and legal considerations |
| [Data README](data/README.md) | Data access and usage instructions |

## Verification

Verify dataset integrity:
```bash
# Check file and repository counts
python3 -c "
import json
with open('analysis/dataset_index.json', 'r') as f:
    idx = json.load(f)
print(f'Files: {len(idx[\"files\"])}')
print(f'Repositories: {sum(f[\"repository_count\"] for f in idx[\"files\"])}')
"

# Expected output:
# Files: 52692
# Repositories: 397475
```

## Citation

```bibtex
@misc{codereality2025,
  title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/vinsblack}},
  note={Version 1.0.0}
}
```

## Community Contributions

We welcome community contributions to improve CodeReality-1T:

### 🛠️ **Data Curation Scripts**
- Contribute filtering and cleaning scripts for the noisy dataset
- Share deduplication algorithms and quality improvement tools
- Submit license detection and classification improvements

### 📊 **New Benchmarks**
- Add evaluation tasks beyond license detection and code completion
- Contribute cross-language analysis benchmarks
- Share bug detection and security analysis evaluations

### 📈 **Future Versions**
- **v1.1.0**: Enhanced evaluation subset with community feedback
- **v1.2.0**: Improved license detection and filtering tools
- **v2.0.0**: Community-curated clean variant with quality filters

### 🤝 **How to Contribute**
**Community contributions are actively welcomed and encouraged!** Help improve the largest deliberately noisy code dataset.

**🎯 Priority Contribution Areas**:
- **Data Curation**: Cleaning scripts, deduplication algorithms, quality filters
- **Benchmarks**: New evaluation tasks, improved baselines, framework implementations
- **Analysis Tools**: Visualization, statistics, metadata enhancement
- **Documentation**: Usage examples, tutorials, case studies

**📋 Contribution Process**:
1. Fork: https://github.com/vinsguru/codereality-1t
2. Check the issues for current needs and coordination
3. Create a feature branch for your contribution
4. Submit a pull request with a detailed description and testing
5. Engage in community review and discussions

**💡 Join the Community**: Share your research, tools, and insights using CodeReality-1T!

## Support

- **Documentation**: See the `docs/` directory
- **Issues**: https://github.com/vinsguru/codereality-1t/issues
- **Contact**: [email protected]

---

*Dataset created using BigCode-compliant methodology with complete transparency and reproducibility. Analysis completed in 63.7 hours with 100% coverage and no sampling.*
analysis/dataset_index.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e39d4864fbe89229e1df541a3871512b20285bc4630bfad83c4cdedd746d35f3
size 29381456
analysis/language_stats.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e39d4864fbe89229e1df541a3871512b20285bc4630bfad83c4cdedd746d35f3
size 29381456
analysis/metrics.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79330a07c49c814dab4036b8b0959a11b5cc7bc4f6f9f8a62dc8a9fde62022df
size 4455
benchmarks/README.md
ADDED
@@ -0,0 +1,184 @@
# CodeReality-1T Evaluation Benchmarks

This directory contains demonstration benchmark scripts for evaluating models on the CodeReality-1T dataset.

## Available Benchmarks

### 1. License Detection Benchmark
**File**: `license_detection_benchmark.py`

**Purpose**: Evaluates automated license classification systems on deliberately noisy data.

**Features**:
- Rule-based feature extraction from repository content
- Simple classification model for demonstration
- Performance metrics on license detection accuracy
- Analysis of license distribution patterns

**Usage**:
```bash
cd /path/to/codereality-1t/eval/benchmarks
python3 license_detection_benchmark.py
```

**Expected Results**:
- Low accuracy due to the deliberately noisy dataset (0% license detection by design)
- Demonstrates robustness testing for license detection systems
- Outputs detailed distribution analysis

### 2. Code Completion Benchmark
**File**: `code_completion_benchmark.py`

**Purpose**: Evaluates code completion models using Pass@k metrics on real-world noisy code.

**Features**:
- Function extraction from Python, JavaScript, and Java files
- Simple rule-based completion model for demonstration
- Pass@1, Pass@3, and Pass@5 metric calculation
- Multi-language support with language-specific patterns

**Usage**:
```bash
cd /path/to/codereality-1t/eval/benchmarks
python3 code_completion_benchmark.py
```

**Expected Results**:
- Baseline performance metrics for comparison
- Language distribution analysis
- Quality scoring of completions
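
For reference, the demonstration script counts a task as solved if any of its top-k completions passes a quality threshold. The standard unbiased Pass@k estimator from the code-generation literature, which the script does not use but which applies when sampling n > k completions per task, can be computed as a small sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: n samples per task, c of them correct.

    Probability that at least one of k randomly drawn samples (out of n)
    is correct: 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per task, 2 correct, k = 5
print(f"{pass_at_k(10, 2, 5):.3f}")  # 0.778
```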

## Benchmark Characteristics

### Dataset Integration
- **Data Source**: Loads from `/mnt/z/CodeReality_Final/unified_dataset` by default
- **Sampling**: Uses random sampling for performance (configurable)
- **Formats**: Handles the JSONL repository format from CodeReality-1T

### Evaluation Philosophy
- **Deliberately Noisy**: Tests model robustness on real-world messy data
- **Baseline Metrics**: Provides simple baselines for comparison (not production-ready)
- **Reproducible**: Deterministic evaluation with random seed control
- **Research Focus**: Results show the challenges of noisy data, not competitive benchmarks

### Extensibility
- **Modular Design**: Easy to extend with new benchmarks
- **Configurable**: Sample sizes and evaluation criteria can be adjusted
- **Multiple Languages**: Framework supports cross-language evaluation

## Configuration

### Data Path Configuration
Update the `data_dir` variable in each script to point to your CodeReality-1T dataset:

```python
data_dir = "/path/to/your/codereality-1t/unified_dataset"
```

### Sample Size Adjustment
Modify sample sizes for performance tuning:

```python
sample_size = 500  # Adjust based on computational resources
```

## Output Files

Each benchmark generates a JSON results file:
- `license_detection_results.json`
- `code_completion_results.json`

These contain detailed metrics and can be used for comparative analysis.

### Sample Results
Example results are available in `../results/`:
- `license_detection_sample_results.json` - Baseline license detection performance
- `code_completion_sample_results.json` - Baseline code completion metrics

These demonstrate expected performance on CodeReality-1T's deliberately noisy data.

## Requirements

### Python Dependencies
No external packages are required: the scripts use only the Python standard library (`json`, `os`, `re`, `random`, `typing`, `collections`).

### System Requirements
- **Memory**: Minimum 4GB RAM for default sample sizes
- **Storage**: Access to the CodeReality-1T dataset (3TB)
- **Compute**: A single core is sufficient for the demonstration scripts

## Extending the Benchmarks

### Adding New Tasks
1. Create a new Python file following the naming convention `{task}_benchmark.py`
2. Implement the standard evaluation interface:
```python
def load_dataset_sample(data_dir: str, sample_size: int) -> list: ...
def run_benchmark(repositories: list) -> dict: ...
def print_benchmark_results(results: dict) -> None: ...
```
3. Add task-specific evaluation metrics (a minimal skeleton is sketched below)
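
A minimal sketch of a new task following this interface, assuming the same JSONL layout the shipped scripts use (repository records with optional `files` lists). The task shown here, counting repositories that contain test files, is purely illustrative:

```python
#!/usr/bin/env python3
"""test_presence_benchmark.py - illustrative skeleton for a new task."""
import json
import os

def load_dataset_sample(data_dir: str, sample_size: int) -> list:
    """Load up to sample_size repository records from the JSONL archives."""
    repositories = []
    for filename in os.listdir(data_dir):
        if not filename.endswith('.jsonl'):
            continue
        with open(os.path.join(data_dir, filename), 'r',
                  encoding='utf-8', errors='ignore') as f:
            for line in f:
                if len(repositories) >= sample_size:
                    return repositories
                try:
                    repositories.append(json.loads(line))
                except json.JSONDecodeError:
                    continue
    return repositories

def run_benchmark(repositories: list) -> dict:
    """Task-specific metric: fraction of repositories with test files."""
    with_tests = sum(
        1 for repo in repositories
        if any('test' in f.get('path', '').lower()
               for f in repo.get('files', []) if isinstance(f, dict))
    )
    return {'total': len(repositories), 'with_tests': with_tests}

def print_benchmark_results(results: dict) -> None:
    print(f"Repositories: {results['total']}, with tests: {results['with_tests']}")

if __name__ == "__main__":
    print_benchmark_results(run_benchmark(
        load_dataset_sample("/mnt/z/CodeReality_Final/unified_dataset", 100)))
```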

### Supported Tasks
Current benchmarks cover:
- **License Detection**: Classification and compliance
- **Code Completion**: Generation and functional correctness

**Framework Scaffolds (PLANNED - Implementation Needed)**:
- [`bug_detection_benchmark.py`](bug_detection_benchmark.py) - Bug detection on commit pairs (scaffold only)
- [`cross_language_translation_benchmark.py`](cross_language_translation_benchmark.py) - Code translation across languages (scaffold only)

**Future Planned Benchmarks - Roadmap**:
- **v1.1.0 (Q1 2025)**: Complete bug detection and cross-language translation implementations
- **v1.2.0 (Q2 2025)**: Repository classification and domain detection benchmarks
- **v1.3.0 (Q3 2025)**: Build system analysis and validation frameworks
- **v2.0.0 (Q4 2025)**: Commit message generation and issue-to-code alignment benchmarks

**Community Priority**: The framework scaffolds are ready for community implementation!

## Performance Notes

### Computational Complexity
- **License Detection**: O(n), where n = repository count
- **Code Completion**: O(n·m), where m = average functions per repository

### Optimization Tips
1. **Sampling**: Reduce `sample_size` for faster execution
2. **Filtering**: Pre-filter repositories by criteria
3. **Parallelization**: Use multiprocessing for large-scale evaluation (see the sketch below)
4. **Caching**: Cache extracted features for repeated runs
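
A minimal parallelization sketch, assuming each JSONL archive can be processed independently (the worker here is a stand-in for any per-file benchmark step):

```python
import os
from multiprocessing import Pool

def process_file(jsonl_path: str) -> int:
    """Stand-in worker: count repository records in one archive."""
    with open(jsonl_path, 'r', encoding='utf-8', errors='ignore') as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
    paths = [os.path.join(data_dir, f)
             for f in os.listdir(data_dir) if f.endswith('.jsonl')]
    with Pool() as pool:  # one worker per CPU core by default
        counts = pool.map(process_file, paths)
    print(f"Total repositories: {sum(counts)}")
```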

## Research Applications

### Model Development
- **Robustness Testing**: Test models on noisy, real-world data
- **Baseline Comparison**: Compare against simple rule-based systems
- **Cross-domain Evaluation**: Test generalization across domains

### Data Science Research
- **Curation Methods**: Develop better filtering techniques
- **Quality Metrics**: Research automated quality assessment
- **Bias Analysis**: Study representation bias in large datasets

## Citation

When using these benchmarks in research, please cite the CodeReality-1T dataset:

```bibtex
@misc{codereality2025,
  title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/vinsblack}},
  note={Version 1.0.0}
}
```

## Support

- **Issues**: https://github.com/vinsguru/codereality-1t/issues
- **Contact**: [email protected]
- **Documentation**: See the main dataset README and documentation
benchmarks/benchmark_summary.csv
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:77f33c2e4d97d666b3bf48dc9a9a94854d39f475c5c565bf7878eea9148489a9
size 1842
benchmarks/bug_detection_benchmark.py
ADDED
@@ -0,0 +1,278 @@
#!/usr/bin/env python3
"""
Bug Detection Benchmark for CodeReality-1T Dataset

This benchmark evaluates bug detection systems on deliberately noisy code data.
It analyzes commit pairs to identify potential bugs and fixes in real-world repositories.

Status: PLANNED - Framework scaffold for future implementation
"""

import json
import random
from collections import defaultdict
from typing import Any, Dict, List


def load_dataset_sample(data_dir: str, sample_size: int = 500) -> List[Dict]:
    """
    Load a sample of repositories with commit history for bug detection analysis.

    Args:
        data_dir: Path to the CodeReality-1T unified dataset
        sample_size: Number of repositories to sample

    Returns:
        List of repository data with commit pairs
    """
    # TODO: Implement repository loading with commit history.
    # Focus on repositories with:
    # - multiple commits with bug-fix indicators
    # - before/after code changes
    # - issue tracking data
    print(f"Loading {sample_size} repositories for bug detection analysis...")
    return []


def extract_bug_fix_patterns(repositories: List[Dict]) -> List[Dict]:
    """
    Extract potential bug-fix commit pairs from repository history.

    Args:
        repositories: List of repository data

    Returns:
        List of bug-fix patterns with before/after code
    """
    # TODO: Implement bug-fix pattern extraction. Look for:
    # - commit messages with "fix", "bug", "issue" keywords
    # - code changes that add null checks or exception handling
    # - revert patterns and subsequent fixes
    patterns: List[Dict] = []

    bug_keywords = ["fix", "bug", "issue", "error", "crash", "null", "exception"]

    for repo in repositories:
        # Extract commit pairs where bug-fix indicators (bug_keywords) are present
        pass

    return patterns


def simple_bug_detector(code_before: str, code_after: str) -> Dict[str, Any]:
    """
    Simple rule-based bug detection for demonstration purposes.

    This is a baseline implementation - real bug detection would use
    sophisticated ML models, static analysis, or dynamic testing.

    Args:
        code_before: Code before the fix
        code_after: Code after the fix

    Returns:
        Detection results with confidence scores
    """
    # TODO: Implement simple pattern-based bug detection. Examples:
    # - missing null checks
    # - array bounds issues
    # - resource leaks
    # - logic errors

    results = {
        "bug_detected": False,
        "bug_type": "unknown",
        "confidence": 0.0,
        "patterns_matched": [],
        "fix_applied": False
    }

    # Simple pattern matching for demonstration
    null_check_added = "!= null" in code_after and "!= null" not in code_before
    # Placeholder for a second rule (currently unused by the scaffold)
    bounds_check_added = "length" in code_after and "length" not in code_before

    if null_check_added:
        results["bug_detected"] = True
        results["bug_type"] = "null_pointer"
        results["confidence"] = 0.7
        results["patterns_matched"].append("null_check_added")
        results["fix_applied"] = True

    return results


def evaluate_bug_detection(bug_patterns: List[Dict]) -> Dict[str, Any]:
    """
    Evaluate bug detection accuracy on commit pairs.

    Args:
        bug_patterns: List of bug-fix patterns

    Returns:
        Evaluation metrics including precision, recall, and F1
    """
    # TODO: Implement comprehensive evaluation. Metrics:
    # - true positive rate (bugs correctly identified)
    # - false positive rate (false alarms)
    # - precision, recall, F1 score
    # - bug type classification accuracy

    total_patterns = len(bug_patterns)
    detected_bugs = 0
    correct_detections = 0
    false_positives = 0

    for pattern in bug_patterns:
        # Apply the simple bug detector
        result = simple_bug_detector(pattern.get("code_before", ""),
                                     pattern.get("code_after", ""))

        if result["bug_detected"]:
            detected_bugs += 1
            # In a real scenario this would compare against ground truth;
            # for the demo, assume 60% accuracy
            if random.random() < 0.6:
                correct_detections += 1
            else:
                false_positives += 1

    precision = correct_detections / detected_bugs if detected_bugs > 0 else 0
    recall = correct_detections / total_patterns if total_patterns > 0 else 0
    f1_score = 2 * (precision * recall) / (precision + recall) if (precision + recall) > 0 else 0

    return {
        "total_patterns": total_patterns,
        "detected_bugs": detected_bugs,
        "correct_detections": correct_detections,
        "false_positives": false_positives,
        "precision": precision,
        "recall": recall,
        "f1_score": f1_score,
        "detection_rate": detected_bugs / total_patterns if total_patterns > 0 else 0
    }


def run_benchmark(repositories: List[Dict]) -> Dict[str, Any]:
    """
    Run the complete bug detection benchmark.

    Args:
        repositories: List of repository data

    Returns:
        Complete benchmark results
    """
    print("Extracting bug-fix patterns...")
    bug_patterns = extract_bug_fix_patterns(repositories)

    print("Evaluating bug detection...")
    metrics = evaluate_bug_detection(bug_patterns)

    print("Analyzing bug types...")
    bug_type_distribution = defaultdict(int)
    for pattern in bug_patterns:
        bug_type = pattern.get("bug_type", "unknown")
        bug_type_distribution[bug_type] += 1

    return {
        "benchmark_info": {
            "name": "Bug Detection Benchmark",
            "dataset": "CodeReality-1T",
            "version": "1.0.0",
            "description": "Evaluates bug detection on commit pairs",
            "status": "PLANNED - Framework scaffold"
        },
        "dataset_stats": {
            "total_repositories": len(repositories),
            "total_bug_patterns": len(bug_patterns),
            "avg_patterns_per_repo": len(bug_patterns) / len(repositories) if repositories else 0
        },
        "detection_metrics": metrics,
        "bug_type_distribution": dict(bug_type_distribution),
        "insights": [
            "This is a planned benchmark - implementation needed",
            "Real bug detection requires sophisticated analysis",
            "CodeReality-1T provides rich commit history for training",
            "The noisy dataset challenges standard detection methods"
        ],
        "recommendations": [
            "Implement advanced static analysis tools",
            "Use ML models trained on commit patterns",
            "Validate with manual inspection of detected bugs",
            "Consider temporal patterns in bug introduction/fixing"
        ]
    }


def print_benchmark_results(results: Dict[str, Any]):
    """Print formatted benchmark results."""
    print("\n" + "=" * 60)
    print("BUG DETECTION BENCHMARK RESULTS")
    print("=" * 60)

    info = results["benchmark_info"]
    print(f"Benchmark: {info['name']}")
    print(f"Dataset: {info['dataset']}")
    print(f"Status: {info['status']}")
    print(f"Description: {info['description']}")

    print("\nDataset Statistics:")
    stats = results["dataset_stats"]
    print(f"  Total Repositories: {stats['total_repositories']}")
    print(f"  Bug Patterns Found: {stats['total_bug_patterns']}")
    print(f"  Avg Patterns/Repo: {stats['avg_patterns_per_repo']:.2f}")

    print("\nDetection Metrics:")
    metrics = results["detection_metrics"]
    print(f"  Precision: {metrics['precision']:.3f}")
    print(f"  Recall: {metrics['recall']:.3f}")
    print(f"  F1 Score: {metrics['f1_score']:.3f}")
    print(f"  Detection Rate: {metrics['detection_rate']:.3f}")

    print("\nBug Type Distribution:")
    for bug_type, count in results["bug_type_distribution"].items():
        print(f"  {bug_type}: {count}")

    print("\nKey Insights:")
    for insight in results["insights"]:
        print(f"  • {insight}")

    print("\nRecommendations:")
    for rec in results["recommendations"]:
        print(f"  • {rec}")


def main():
    """Run the bug detection benchmark on the CodeReality-1T dataset."""
    # Configuration
    data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
    sample_size = 100  # Reduced for the planning phase
    random.seed(42)  # Seed for reproducible demo metrics

    print("CodeReality-1T Bug Detection Benchmark")
    print("Status: PLANNED - Framework scaffold only")
    print(f"Data directory: {data_dir}")
    print(f"Sample size: {sample_size}")

    # Load dataset sample
    print("\nLoading dataset sample...")
    repositories = load_dataset_sample(data_dir, sample_size)

    if not repositories:
        print("No repositories loaded - using mock data for demonstration")
        repositories = [{"name": f"mock_repo_{i}", "commits": []} for i in range(10)]

    # Run benchmark
    results = run_benchmark(repositories)

    # Print results
    print_benchmark_results(results)

    # Save results
    output_file = "bug_detection_results.json"
    with open(output_file, 'w') as f:
        json.dump(results, f, indent=2)

    print(f"\nResults saved to: {output_file}")
    print("Note: This is a framework scaffold - full implementation needed")


if __name__ == "__main__":
    main()
benchmarks/code_completion_benchmark.py
ADDED
@@ -0,0 +1,357 @@
| 1 |
+
#!/usr/bin/env python3
|
| 2 |
+
"""
|
| 3 |
+
Code Completion Benchmark for CodeReality-1T Dataset
|
| 4 |
+
Evaluates code completion models using Pass@k metrics
|
| 5 |
+
"""
|
| 6 |
+
|
| 7 |
+
import json
|
| 8 |
+
import os
|
| 9 |
+
import re
|
| 10 |
+
import random
|
| 11 |
+
from typing import Dict, List, Tuple, Optional
|
| 12 |
+
from collections import defaultdict
|
| 13 |
+
|
| 14 |
+
def load_dataset_sample(data_dir: str, sample_size: int = 200) -> List[Dict]:
|
| 15 |
+
"""Load sample of repositories with code files."""
|
| 16 |
+
print(f"🔍 Loading sample of {sample_size} repositories with code files...")
|
| 17 |
+
|
| 18 |
+
repositories = []
|
| 19 |
+
files = [f for f in os.listdir(data_dir) if f.endswith('.jsonl')]
|
| 20 |
+
random.shuffle(files)
|
| 21 |
+
|
| 22 |
+
for filename in files[:15]: # Sample from first 15 files
|
| 23 |
+
file_path = os.path.join(data_dir, filename)
|
| 24 |
+
try:
|
| 25 |
+
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
|
| 26 |
+
for line in f:
|
| 27 |
+
if len(repositories) >= sample_size:
|
| 28 |
+
break
|
| 29 |
+
try:
|
| 30 |
+
repo_data = json.loads(line)
|
| 31 |
+
# Filter repositories with code files
|
| 32 |
+
if has_code_files(repo_data):
|
| 33 |
+
repositories.append(repo_data)
|
| 34 |
+
except json.JSONDecodeError:
|
| 35 |
+
continue
|
| 36 |
+
except Exception as e:
|
| 37 |
+
continue
|
| 38 |
+
|
| 39 |
+
if len(repositories) >= sample_size:
|
| 40 |
+
break
|
| 41 |
+
|
| 42 |
+
print(f"✅ Loaded {len(repositories)} repositories with code files")
|
| 43 |
+
return repositories
|
| 44 |
+
|
| 45 |
+
def has_code_files(repo: Dict) -> bool:
|
| 46 |
+
"""Check if repository contains code files."""
|
| 47 |
+
code_extensions = {'.py', '.js', '.java', '.cpp', '.c', '.go', '.rs', '.ts'}
|
| 48 |
+
|
| 49 |
+
files = repo.get('files', [])
|
| 50 |
+
for file_obj in files:
|
| 51 |
+
if isinstance(file_obj, dict):
|
| 52 |
+
file_path = file_obj.get('path', '')
|
| 53 |
+
if any(file_path.endswith(ext) for ext in code_extensions):
|
| 54 |
+
return True
|
| 55 |
+
return False
|
| 56 |
+
|
| 57 |
+
def extract_function_snippets(repo: Dict, language: str = 'python') -> List[Dict]:
|
| 58 |
+
"""Extract function definitions for completion tasks."""
|
| 59 |
+
snippets = []
|
| 60 |
+
|
| 61 |
+
# Language-specific patterns
|
| 62 |
+
patterns = {
|
| 63 |
+
'python': r'def\s+(\w+)\s*\([^)]*\):\s*',
|
| 64 |
+
'javascript': r'function\s+(\w+)\s*\([^)]*\)\s*{',
|
| 65 |
+
'java': r'(?:public|private|protected)?\s*(?:static)?\s*\w+\s+(\w+)\s*\([^)]*\)\s*{',
|
| 66 |
+
'cpp': r'\w+\s+(\w+)\s*\([^)]*\)\s*{',
|
| 67 |
+
}
|
| 68 |
+
|
| 69 |
+
if language not in patterns:
|
| 70 |
+
return snippets
|
| 71 |
+
|
| 72 |
+
pattern = patterns[language]
|
| 73 |
+
extension_map = {
|
| 74 |
+
'python': '.py',
|
| 75 |
+
'javascript': '.js',
|
| 76 |
+
'java': '.java',
|
| 77 |
+
'cpp': '.cpp'
|
| 78 |
+
}
|
| 79 |
+
|
| 80 |
+
target_ext = extension_map[language]
|
| 81 |
+
|
| 82 |
+
files = repo.get('files', [])
|
| 83 |
+
for file_obj in files:
|
| 84 |
+
if isinstance(file_obj, dict):
|
| 85 |
+
file_path = file_obj.get('path', '')
|
| 86 |
+
content = file_obj.get('content', '')
|
| 87 |
+
|
| 88 |
+
if file_path.endswith(target_ext) and content:
|
| 89 |
+
matches = list(re.finditer(pattern, content, re.MULTILINE))
|
| 90 |
+
|
| 91 |
+
for match in matches:
|
| 92 |
+
start_pos = match.start()
|
| 93 |
+
function_name = match.group(1)
|
| 94 |
+
|
| 95 |
+
# Get context before function
|
| 96 |
+
lines_before = content[:start_pos].split('\n')
|
| 97 |
+
context_lines = lines_before[-5:] if len(lines_before) >= 5 else lines_before
|
| 98 |
+
context = '\n'.join(context_lines)
|
| 99 |
+
|
| 100 |
+
# Get function body (simplified - until next function or end)
|
| 101 |
+
remaining_content = content[start_pos:]
|
| 102 |
+
lines = remaining_content.split('\n')
|
| 103 |
+
|
| 104 |
+
function_lines = []
|
| 105 |
+
indent_level = None
|
| 106 |
+
|
| 107 |
+
for i, line in enumerate(lines):
|
| 108 |
+
if i == 0:
|
| 109 |
+
function_lines.append(line)
|
| 110 |
+
continue
|
| 111 |
+
|
| 112 |
+
# Detect indentation level
|
| 113 |
+
if indent_level is None and line.strip():
|
| 114 |
+
indent_level = len(line) - len(line.lstrip())
|
| 115 |
+
|
| 116 |
+
# Stop if we hit same or lower indentation level (end of function)
|
| 117 |
+
if line.strip() and indent_level is not None:
|
| 118 |
+
current_indent = len(line) - len(line.lstrip())
|
| 119 |
+
if current_indent <= indent_level and not line.strip().startswith(('if', 'for', 'while', 'try', 'except', 'else', 'elif')):
|
| 120 |
+
break
|
| 121 |
+
|
| 122 |
+
function_lines.append(line)
|
| 123 |
+
|
| 124 |
+
# Limit function length
|
| 125 |
+
if len(function_lines) > 20:
|
| 126 |
+
break
|
| 127 |
+
|
| 128 |
+
function_body = '\n'.join(function_lines)
|
| 129 |
+
|
| 130 |
+
# Create completion task: provide function signature, expect body
|
| 131 |
+
if len(function_lines) > 3: # Only meaningful functions
|
| 132 |
+
snippets.append({
|
| 133 |
+
'function_name': function_name,
|
| 134 |
+
'context': context,
|
| 135 |
+
'prompt': function_lines[0], # Function signature
|
| 136 |
+
'completion': '\n'.join(function_lines[1:]), # Function body
|
| 137 |
+
'file_path': file_path,
|
| 138 |
+
'language': language
|
| 139 |
+
})
|
| 140 |
+
|
| 141 |
+
return snippets
|
| 142 |
+
|
| 143 |
+
def simple_code_completion_model(prompt: str, language: str) -> List[str]:
|
| 144 |
+
"""Simple rule-based code completion for demonstration."""
|
| 145 |
+
completions = []
|
| 146 |
+
|
| 147 |
+
# Generate multiple completions (for Pass@k evaluation)
|
| 148 |
+
templates = {
|
| 149 |
+
'python': [
|
| 150 |
+
" pass",
|
| 151 |
+
" return None",
|
| 152 |
+
" # TODO: implement this function\n pass",
|
| 153 |
+
" result = None\n return result",
|
| 154 |
+
" # Implementation needed\n raise NotImplementedError()"
|
| 155 |
+
],
|
| 156 |
+
'javascript': [
|
| 157 |
+
" return null;",
|
| 158 |
+
" // TODO: implement\n return;",
|
| 159 |
+
" throw new Error('Not implemented');",
|
| 160 |
+
" var result = null;\n return result;",
|
| 161 |
+
" console.log('Function called');\n return;"
|
| 162 |
+
],
|
| 163 |
+
'java': [
|
| 164 |
+
" return null;",
|
| 165 |
+
" // TODO: implement this method\n return null;",
|
| 166 |
+
" throw new UnsupportedOperationException();",
|
| 167 |
+
" Object result = null;\n return result;",
|
| 168 |
+
" System.out.println(\"Method called\");\n return null;"
|
| 169 |
+
]
|
| 170 |
+
}
|
| 171 |
+
|
| 172 |
+
if language in templates:
|
| 173 |
+
# Return multiple variations for Pass@k evaluation
|
| 174 |
+
return templates[language]
|
| 175 |
+
else:
|
| 176 |
+
return ["// TODO: implement"]
|
| 177 |
+
|
| 178 |
+
def evaluate_completion_quality(predicted: str, actual: str) -> float:
|
| 179 |
+
"""Simple evaluation of completion quality."""
|
| 180 |
+
# Normalize strings
|
| 181 |
+
pred_lines = [line.strip() for line in predicted.split('\n') if line.strip()]
|
| 182 |
+
actual_lines = [line.strip() for line in actual.split('\n') if line.strip()]
|
| 183 |
+
|
| 184 |
+
if not actual_lines:
|
| 185 |
+
return 0.0
|
| 186 |
+
|
| 187 |
+
# Check for basic structural similarity
|
| 188 |
+
score = 0.0
|
| 189 |
+
|
| 190 |
+
# Check if both are empty implementations
|
| 191 |
+
empty_indicators = ['pass', 'todo', 'not implemented', 'null', 'return;', 'return null'}
    pred_empty = any(indicator in predicted.lower() for indicator in empty_indicators)
    actual_empty = any(indicator in actual.lower() for indicator in empty_indicators)

    if pred_empty and actual_empty:
        score += 0.8
    elif not pred_empty and not actual_empty:
        # Check for keyword similarity
        pred_keywords = set(re.findall(r'\b\w+\b', predicted.lower()))
        actual_keywords = set(re.findall(r'\b\w+\b', actual.lower()))

        if actual_keywords:
            keyword_overlap = len(pred_keywords & actual_keywords) / len(actual_keywords)
            score += keyword_overlap * 0.6

    # Check for similar line count
    line_ratio = min(len(pred_lines), len(actual_lines)) / max(len(pred_lines), len(actual_lines))
    score += line_ratio * 0.4

    return min(score, 1.0)

def calculate_pass_at_k(completion_results: List[Tuple[List[str], str]], k: int = 1) -> float:
    """Calculate Pass@k metric."""
    if k <= 0:
        return 0.0

    total_passed = 0

    for completions, ground_truth in completion_results:
        # Take top k completions
        top_k_completions = completions[:k]

        # Check if any completion passes
        passed = False
        for completion in top_k_completions:
            quality_score = evaluate_completion_quality(completion, ground_truth)
            if quality_score > 0.5:  # Threshold for "passing"
                passed = True
                break

        if passed:
            total_passed += 1

    return total_passed / len(completion_results) if completion_results else 0.0
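
# Editorial note (sketch): the function above deterministically checks the
# top-k samples against a quality threshold. The standard unbiased Pass@k
# estimator (Chen et al., 2021, HumanEval) instead computes
# 1 - C(n-c, k) / C(n, k) per task, where n samples are drawn and c of them
# pass. The name and helper below are an illustrative addition, assuming
# Python 3.8+ for math.comb.
import math

def unbiased_pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n (with c
    passing) is correct; unbiased alternative to the heuristic above."""
    if n - c < k:
        return 1.0  # Every size-k draw must contain at least one passing sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)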

def run_completion_benchmark(repositories: List[Dict]) -> Dict:
    """Run code completion benchmark."""
    print("🧮 Running code completion benchmark...")

    results = {
        'total_repositories': len(repositories),
        'completion_tasks': [],
        'language_stats': defaultdict(int),
        'pass_at_1': 0.0,
        'pass_at_3': 0.0,
        'pass_at_5': 0.0,
        'average_quality': 0.0
    }

    completion_results = []
    quality_scores = []

    # Extract function snippets from repositories
    for repo in repositories:
        for language in ['python', 'javascript', 'java']:
            snippets = extract_function_snippets(repo, language)

            for snippet in snippets[:2]:  # Limit per repo for performance
                results['language_stats'][language] += 1

                # Generate completions
                completions = simple_code_completion_model(snippet['prompt'], language)
                ground_truth = snippet['completion']

                completion_results.append((completions, ground_truth))

                # Calculate quality for first completion
                if completions:
                    quality = evaluate_completion_quality(completions[0], ground_truth)
                    quality_scores.append(quality)

                results['completion_tasks'].append({
                    'function_name': snippet['function_name'],
                    'language': language,
                    'prompt_length': len(snippet['prompt']),
                    'completion_length': len(ground_truth)
                })

    # Calculate metrics
    results['pass_at_1'] = calculate_pass_at_k(completion_results, 1)
    results['pass_at_3'] = calculate_pass_at_k(completion_results, 3)
    results['pass_at_5'] = calculate_pass_at_k(completion_results, 5)
    results['average_quality'] = sum(quality_scores) / len(quality_scores) if quality_scores else 0.0

    return results

def print_benchmark_results(results: Dict):
    """Print formatted benchmark results."""
    print("=" * 60)
    print("🎯 CODE COMPLETION BENCHMARK RESULTS")
    print("=" * 60)

    print(f"Total repositories: {results['total_repositories']}")
    print(f"Completion tasks: {len(results['completion_tasks'])}")

    print(f"\n📊 Pass@k Metrics:")
    print(f"  Pass@1: {results['pass_at_1']:.3f}")
    print(f"  Pass@3: {results['pass_at_3']:.3f}")
    print(f"  Pass@5: {results['pass_at_5']:.3f}")
    print(f"  Average Quality: {results['average_quality']:.3f}")

    print(f"\n🔤 Language Distribution:")
    for language, count in sorted(results['language_stats'].items(), key=lambda x: x[1], reverse=True):
        print(f"  {language}: {count} functions")

    print(f"\n💡 Insights:")
    print("- This is a simplified demonstration benchmark")
    print("- Real evaluation requires more sophisticated code execution")
    print("- CodeReality-1T provides diverse, noisy code for robust testing")
    print("- Consider functional correctness testing for production models")

def main():
    """Run code completion benchmark."""
    print("🚀 CodeReality-1T Code Completion Benchmark")
    print("=" * 60)

    # Configuration
    data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
    sample_size = 100

    if not os.path.exists(data_dir):
        print(f"❌ Dataset directory not found: {data_dir}")
        print("Please update the data_dir path to point to your CodeReality-1T dataset")
        return

    # Load dataset sample
    repositories = load_dataset_sample(data_dir, sample_size)

    if not repositories:
        print("❌ No repositories loaded. Check dataset path.")
        return

    # Run benchmark
    results = run_completion_benchmark(repositories)

    # Print results
    print_benchmark_results(results)

    # Save results
    output_file = "code_completion_results.json"
    with open(output_file, 'w') as f:
        # Convert defaultdict to regular dict for JSON serialization
        results_json = {
            'total_repositories': results['total_repositories'],
            'completion_tasks': results['completion_tasks'],
            'language_stats': dict(results['language_stats']),
            'pass_at_1': results['pass_at_1'],
            'pass_at_3': results['pass_at_3'],
            'pass_at_5': results['pass_at_5'],
            'average_quality': results['average_quality']
        }
        json.dump(results_json, f, indent=2)

    print(f"\n💾 Results saved to: {output_file}")

if __name__ == "__main__":
    main()
benchmarks/cross_language_translation_benchmark.py
ADDED
@@ -0,0 +1,331 @@
#!/usr/bin/env python3
"""
Cross-Language Translation Benchmark for CodeReality-1T Dataset

This benchmark evaluates cross-language code translation systems on deliberately noisy data.
Analyzes equivalent implementations across different programming languages.

Status: PLANNED - Framework scaffold for future implementation
"""

import json
import os
import re
from typing import Dict, List, Tuple, Any
from collections import defaultdict
import random

def load_dataset_sample(data_dir: str, sample_size: int = 500) -> List[Dict]:
    """
    Load sample of repositories with cross-language implementations.

    Args:
        data_dir: Path to CodeReality-1T unified dataset
        sample_size: Number of repositories to sample

    Returns:
        List of repository data with multi-language content
    """
    # TODO: Implement repository loading with cross-language focus
    # Target repositories with:
    # - Multiple programming languages
    # - Similar algorithms in different languages
    # - Bindings or wrapper implementations
    print(f"Loading {sample_size} multi-language repositories...")
    return []

def extract_language_pairs(repositories: List[Dict]) -> List[Dict]:
    """
    Extract equivalent code implementations across different languages.

    Args:
        repositories: List of repository data

    Returns:
        List of language pairs with equivalent functionality
    """
    # TODO: Implement language pair extraction
    # Look for:
    # - Similar function names across languages
    # - Algorithm implementations in multiple languages
    # - Test files that indicate equivalent functionality
    # - Documentation mentioning language equivalence

    language_pairs = []

    common_pairs = [
        ("python", "javascript"),
        ("java", "c++"),
        ("python", "java"),
        ("javascript", "typescript"),
        ("c", "c++"),
        ("python", "go"),
        ("java", "c#"),
        ("rust", "c++")
    ]

    for repo in repositories:
        # Extract code snippets that appear to implement similar functionality
        pass

    return language_pairs
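
# Editorial sketch for the TODO above: one simple heuristic is to pair
# function names that recur across source files written in different
# languages. The helper below is a hypothetical, deliberately minimal
# illustration; the 'files' list-of-dicts layout (with 'path' and 'content'
# keys) matches the repository format used by the other benchmarks.
def pair_functions_by_name(files: List[Dict]) -> List[Tuple[str, str, str]]:
    """Return (function_name, language_a, language_b) triples for names
    defined in more than one language within a repository."""
    ext_to_lang = {'.py': 'python', '.js': 'javascript'}
    names_by_lang = defaultdict(set)
    for file_obj in files:
        lang = ext_to_lang.get(os.path.splitext(file_obj.get('path', ''))[1])
        if not lang:
            continue
        # 'def name(' in Python, 'function name(' in JavaScript
        for match in re.finditer(r'\b(?:def|function)\s+(\w+)', file_obj.get('content', '')):
            names_by_lang[lang].add(match.group(1))
    pairs = []
    langs = sorted(names_by_lang)
    for i, lang_a in enumerate(langs):
        for lang_b in langs[i + 1:]:
            for name in names_by_lang[lang_a] & names_by_lang[lang_b]:
                pairs.append((name, lang_a, lang_b))
    return pairs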

def simple_translation_evaluator(source_code: str, target_code: str,
                                 source_lang: str, target_lang: str) -> Dict[str, Any]:
    """
    Simple rule-based translation evaluation for demonstration purposes.

    This is a baseline implementation - real translation evaluation would use
    sophisticated semantic analysis, execution testing, or ML-based similarity.

    Args:
        source_code: Source language implementation
        target_code: Target language implementation
        source_lang: Source programming language
        target_lang: Target programming language

    Returns:
        Translation quality assessment
    """
    # TODO: Implement comprehensive translation evaluation
    # Methods:
    # - Structural similarity analysis
    # - API usage pattern matching
    # - Execution behavior comparison
    # - Performance characteristic analysis

    results = {
        "translation_quality": 0.0,
        "structural_similarity": 0.0,
        "semantic_equivalence": 0.0,
        "syntax_correctness": 0.0,
        "functionality_preserved": False,
        "common_patterns": [],
        "differences": []
    }

    # Simple pattern matching for demonstration
    # Count similar keywords, structure patterns
    source_tokens = re.findall(r'\w+', source_code.lower())
    target_tokens = re.findall(r'\w+', target_code.lower())

    # Language-agnostic concepts
    common_concepts = ["function", "class", "method", "variable", "loop", "condition"]
    source_concepts = [t for t in source_tokens if t in common_concepts]
    target_concepts = [t for t in target_tokens if t in common_concepts]

    if source_concepts and target_concepts:
        structural_sim = len(set(source_concepts) & set(target_concepts)) / len(set(source_concepts) | set(target_concepts))
        results["structural_similarity"] = structural_sim

    # Mock semantic equivalence (in real implementation, would use AST analysis)
    results["semantic_equivalence"] = random.uniform(0.3, 0.8)
    results["syntax_correctness"] = random.uniform(0.6, 0.95)
    results["translation_quality"] = (results["structural_similarity"] +
                                      results["semantic_equivalence"] +
                                      results["syntax_correctness"]) / 3

    results["functionality_preserved"] = results["translation_quality"] > 0.6

    return results

def evaluate_translation_pairs(language_pairs: List[Dict]) -> Dict[str, Any]:
    """
    Evaluate translation quality across language pairs.

    Args:
        language_pairs: List of cross-language implementation pairs

    Returns:
        Comprehensive translation evaluation metrics
    """
    # TODO: Implement comprehensive evaluation
    # Metrics:
    # - Translation accuracy by language pair
    # - Semantic preservation scores
    # - Syntax correctness rates
    # - Performance equivalence

    total_pairs = len(language_pairs)
    successful_translations = 0
    quality_scores = []
    language_pair_performance = defaultdict(list)

    for pair in language_pairs:
        source_code = pair.get("source_code", "")
        target_code = pair.get("target_code", "")
        source_lang = pair.get("source_language", "unknown")
        target_lang = pair.get("target_language", "unknown")

        result = simple_translation_evaluator(source_code, target_code,
                                              source_lang, target_lang)

        quality = result["translation_quality"]
        quality_scores.append(quality)

        if result["functionality_preserved"]:
            successful_translations += 1

        pair_key = f"{source_lang}->{target_lang}"
        language_pair_performance[pair_key].append(quality)

    # Calculate aggregate metrics
    avg_quality = sum(quality_scores) / len(quality_scores) if quality_scores else 0
    success_rate = successful_translations / total_pairs if total_pairs > 0 else 0

    # Language pair performance
    pair_stats = {}
    for pair_key, scores in language_pair_performance.items():
        pair_stats[pair_key] = {
            "count": len(scores),
            "avg_quality": sum(scores) / len(scores),
            "success_rate": sum(1 for s in scores if s > 0.6) / len(scores)
        }

    return {
        "total_pairs": total_pairs,
        "successful_translations": successful_translations,
        "success_rate": success_rate,
        "average_quality": avg_quality,
        "quality_distribution": {
            "excellent": sum(1 for q in quality_scores if q > 0.8),
            "good": sum(1 for q in quality_scores if 0.6 < q <= 0.8),
            "fair": sum(1 for q in quality_scores if 0.4 < q <= 0.6),
            "poor": sum(1 for q in quality_scores if q <= 0.4)
        },
        "language_pair_performance": pair_stats
    }

def run_benchmark(repositories: List[Dict]) -> Dict[str, Any]:
    """
    Run complete cross-language translation benchmark.

    Args:
        repositories: List of repository data

    Returns:
        Complete benchmark results
    """
    print("Extracting cross-language pairs...")
    language_pairs = extract_language_pairs(repositories)

    print("Evaluating translation quality...")
    metrics = evaluate_translation_pairs(language_pairs)

    print("Analyzing language coverage...")
    language_coverage = defaultdict(int)
    for pair in language_pairs:
        source_lang = pair.get("source_language", "unknown")
        target_lang = pair.get("target_language", "unknown")
        language_coverage[source_lang] += 1
        language_coverage[target_lang] += 1

    return {
        "benchmark_info": {
            "name": "Cross-Language Translation Benchmark",
            "dataset": "CodeReality-1T",
            "version": "1.0.0",
            "description": "Evaluates code translation across programming languages",
            "status": "PLANNED - Framework scaffold"
        },
        "dataset_stats": {
            "total_repositories": len(repositories),
            "total_language_pairs": len(language_pairs),
            "avg_pairs_per_repo": len(language_pairs) / len(repositories) if repositories else 0,
            "unique_languages": len(language_coverage)
        },
        "translation_metrics": metrics,
        "language_coverage": dict(language_coverage),
        "insights": [
            "This is a planned benchmark - implementation needed",
            "Cross-language translation requires semantic understanding",
            "CodeReality-1T provides diverse language combinations",
            "Noisy dataset challenges automated translation systems"
        ],
        "recommendations": [
            "Implement AST-based semantic analysis",
            "Use execution-based validation when possible",
            "Consider language-specific idiom preservation",
            "Validate with human expert review for complex cases"
        ]
    }

def print_benchmark_results(results: Dict[str, Any]):
    """Print formatted benchmark results."""
    print("\n" + "="*60)
    print("CROSS-LANGUAGE TRANSLATION BENCHMARK RESULTS")
    print("="*60)

    info = results["benchmark_info"]
    print(f"Benchmark: {info['name']}")
    print(f"Dataset: {info['dataset']}")
    print(f"Status: {info['status']}")
    print(f"Description: {info['description']}")

    print("\nDataset Statistics:")
    stats = results["dataset_stats"]
    print(f"  Total Repositories: {stats['total_repositories']}")
    print(f"  Language Pairs Found: {stats['total_language_pairs']}")
    print(f"  Avg Pairs/Repo: {stats['avg_pairs_per_repo']:.2f}")
    print(f"  Unique Languages: {stats['unique_languages']}")

    print("\nTranslation Metrics:")
    metrics = results["translation_metrics"]
    print(f"  Success Rate: {metrics['success_rate']:.3f}")
    print(f"  Average Quality: {metrics['average_quality']:.3f}")

    print("\nQuality Distribution:")
    dist = metrics["quality_distribution"]
    print(f"  Excellent (>0.8): {dist['excellent']}")
    print(f"  Good (0.6-0.8): {dist['good']}")
    print(f"  Fair (0.4-0.6): {dist['fair']}")
    print(f"  Poor (≤0.4): {dist['poor']}")

    print("\nLanguage Coverage:")
    for lang, count in results["language_coverage"].items():
        print(f"  {lang}: {count}")

    print("\nKey Insights:")
    for insight in results["insights"]:
        print(f"  • {insight}")

    print("\nRecommendations:")
    for rec in results["recommendations"]:
        print(f"  • {rec}")

def main():
    """Run cross-language translation benchmark on CodeReality-1T dataset."""
    # Configuration
    data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
    sample_size = 100  # Reduced for planning phase

    print("CodeReality-1T Cross-Language Translation Benchmark")
    print("Status: PLANNED - Framework scaffold only")
    print(f"Data directory: {data_dir}")
    print(f"Sample size: {sample_size}")

    # Load dataset sample
    print("\nLoading dataset sample...")
    repositories = load_dataset_sample(data_dir, sample_size)

    if not repositories:
        print("No repositories loaded - using mock data for demonstration")
        # Create mock data for demonstration
        repositories = [{"name": f"multilang_repo_{i}", "languages": ["python", "javascript"]} for i in range(10)]

    # Run benchmark
    results = run_benchmark(repositories)

    # Print results
    print_benchmark_results(results)

    # Save results
    output_file = "cross_language_translation_results.json"
    with open(output_file, 'w') as f:
        json.dump(results, f, indent=2)

    print(f"\nResults saved to: {output_file}")
    print("Note: This is a framework scaffold - full implementation needed")

if __name__ == "__main__":
    main()
benchmarks/license_detection_benchmark.py
ADDED
@@ -0,0 +1,197 @@
#!/usr/bin/env python3
"""
License Detection Benchmark for CodeReality-1T Dataset
Evaluates automated license classification systems
"""

import json
import os
from typing import Dict, List, Tuple
from collections import defaultdict
import random

def load_dataset_sample(data_dir: str, sample_size: int = 1000) -> List[Dict]:
    """Load random sample of repositories from dataset."""
    print(f"🔍 Loading sample of {sample_size} repositories...")

    repositories = []

    # Get available files
    files = [f for f in os.listdir(data_dir) if f.endswith('.jsonl')]
    random.shuffle(files)

    for filename in files[:10]:  # Sample from first 10 files
        file_path = os.path.join(data_dir, filename)
        try:
            with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
                for line in f:
                    if len(repositories) >= sample_size:
                        break
                    try:
                        repo_data = json.loads(line)
                        repositories.append(repo_data)
                    except json.JSONDecodeError:
                        continue
        except Exception as e:
            print(f"⚠️ Error reading {filename}: {e}")
            continue

        if len(repositories) >= sample_size:
            break

    print(f"✅ Loaded {len(repositories)} repositories")
    return repositories

def extract_license_features(repo: Dict) -> Dict:
    """Extract features that could indicate license presence."""
    features = {
        'has_license_file': False,
        'has_readme': False,
        'license_keywords_count': 0,
        'copyright_mentions': 0,
        'file_count': 0,
        'detected_license': repo.get('license', 'Unknown')
    }

    files = repo.get('files', [])
    features['file_count'] = len(files)

    license_keywords = ['license', 'mit', 'apache', 'gpl', 'bsd', 'copyright']

    for file_obj in files:
        if isinstance(file_obj, dict):
            file_path = file_obj.get('path', '').lower()
            content = file_obj.get('content', '').lower()

            # Check for license files
            if any(keyword in file_path for keyword in ['license', 'copying', 'legal']):
                features['has_license_file'] = True

            # Check for README
            if 'readme' in file_path:
                features['has_readme'] = True

            # Count license keywords
            for keyword in license_keywords:
                features['license_keywords_count'] += content.count(keyword)

            # Count copyright mentions
            features['copyright_mentions'] += content.count('copyright')

    return features

def simple_license_classifier(features: Dict) -> str:
    """Simple rule-based license classifier for demonstration."""

    # Rule-based classification
    if features['has_license_file']:
        if features['license_keywords_count'] > 10:
            return 'MIT'  # Most common
        elif features['copyright_mentions'] > 5:
            return 'Apache-2.0'
        else:
            return 'GPL-3.0'
    elif features['has_readme'] and features['license_keywords_count'] > 3:
        return 'MIT'
    elif features['file_count'] > 50 and features['copyright_mentions'] > 2:
        return 'Apache-2.0'
    else:
        return 'Unknown'
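
# Editorial sketch: the rule-based classifier above guesses from keyword
# counts, which is why its labels rarely match ground truth. Production
# detectors usually compare file text against canonical license texts
# (e.g., the SPDX corpus). A minimal fuzzy-matching variant using only the
# standard library follows; the reference snippets are abbreviated and
# hypothetical, not full license texts.
import difflib

REFERENCE_SNIPPETS = {
    'MIT': 'permission is hereby granted, free of charge, to any person obtaining a copy',
    'Apache-2.0': 'licensed under the apache license, version 2.0',
    'GPL-3.0': 'gnu general public license as published by the free software foundation',
}

def fuzzy_license_match(text: str, threshold: float = 0.6) -> str:
    """Return the best-matching license id, or 'Unknown' below threshold."""
    text = text.lower()
    best_id, best_score = 'Unknown', threshold
    for license_id, snippet in REFERENCE_SNIPPETS.items():
        if snippet in text:
            return license_id  # Exact phrase match is decisive
        score = difflib.SequenceMatcher(None, snippet, text[:1000]).ratio()
        if score > best_score:
            best_id, best_score = license_id, score
    return best_id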

def evaluate_license_detection(repositories: List[Dict]) -> Dict:
    """Evaluate license detection performance."""
    print("🧮 Evaluating license detection...")

    results = {
        'total_repos': len(repositories),
        'predictions': [],
        'ground_truth': [],
        'accuracy': 0.0,
        'license_distribution': defaultdict(int),
        'prediction_distribution': defaultdict(int)
    }

    for repo in repositories:
        features = extract_license_features(repo)
        predicted_license = simple_license_classifier(features)
        actual_license = features['detected_license']

        results['predictions'].append(predicted_license)
        results['ground_truth'].append(actual_license)
        results['license_distribution'][actual_license] += 1
        results['prediction_distribution'][predicted_license] += 1

    # Calculate accuracy (note: actual dataset has mostly 'Unknown' licenses)
    correct = sum(1 for p, a in zip(results['predictions'], results['ground_truth']) if p == a)
    results['accuracy'] = correct / len(repositories) if repositories else 0

    return results

def print_benchmark_results(results: Dict):
    """Print formatted benchmark results."""
    print("=" * 60)
    print("📊 LICENSE DETECTION BENCHMARK RESULTS")
    print("=" * 60)

    print(f"Total repositories evaluated: {results['total_repos']}")
    print(f"Overall accuracy: {results['accuracy']:.3f}")

    print("\n📈 Ground Truth Distribution:")
    for license_type, count in sorted(results['license_distribution'].items(), key=lambda x: x[1], reverse=True)[:10]:
        percentage = (count / results['total_repos']) * 100
        print(f"  {license_type}: {count} ({percentage:.1f}%)")

    print("\n🎯 Prediction Distribution:")
    for license_type, count in sorted(results['prediction_distribution'].items(), key=lambda x: x[1], reverse=True):
        percentage = (count / results['total_repos']) * 100
        print(f"  {license_type}: {count} ({percentage:.1f}%)")

    print("\n💡 Insights:")
    print("- CodeReality-1T is deliberately noisy with 0% license detection")
    print("- This benchmark demonstrates the challenge of license classification")
    print("- Most repositories lack clear licensing information")
    print("- Perfect for testing robustness of license detection systems")

def main():
    """Run license detection benchmark."""
    print("🚀 CodeReality-1T License Detection Benchmark")
    print("=" * 60)

    # Configuration
    data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
    sample_size = 500

    if not os.path.exists(data_dir):
        print(f"❌ Dataset directory not found: {data_dir}")
        print("Please update the data_dir path to point to your CodeReality-1T dataset")
        return

    # Load dataset sample
    repositories = load_dataset_sample(data_dir, sample_size)

    if not repositories:
        print("❌ No repositories loaded. Check dataset path.")
        return

    # Run evaluation
    results = evaluate_license_detection(repositories)

    # Print results
    print_benchmark_results(results)

    # Save results
    output_file = "license_detection_results.json"
    with open(output_file, 'w') as f:
        # Convert defaultdict to regular dict for JSON serialization
        results_json = {
            'total_repos': results['total_repos'],
            'accuracy': results['accuracy'],
            'license_distribution': dict(results['license_distribution']),
            'prediction_distribution': dict(results['prediction_distribution'])
        }
        json.dump(results_json, f, indent=2)

    print(f"\n💾 Results saved to: {output_file}")

if __name__ == "__main__":
    main()
data/README.md
ADDED
@@ -0,0 +1,56 @@
# CodeReality-1T Data Directory

## Location
The complete 3TB dataset is located at:
```
/mnt/z/CodeReality_Final/unified_dataset/
```

## Contents
- **52,692 JSONL files** containing 397,475 repositories
- **Total size**: 3.05 TB uncompressed
- **Format**: JSONL (JSON Lines) with complete repository metadata

## File Structure
Each JSONL file contains repositories with (an abbreviated example record follows this list):
- Source code files with full paths
- Git commit history and messages
- Issue tracking data
- Repository metadata (stars, forks, topics)
- License information
- Enhanced Blueprint metadata

## Usage
To access the data programmatically:

```python
import json
import os

data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
for filename in os.listdir(data_dir):
    if filename.endswith('.jsonl'):
        with open(os.path.join(data_dir, filename), 'r') as f:
            for line in f:
                repo_data = json.loads(line)
                # Process repository data
                print(repo_data.get('name', 'Unknown'))
```
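
Some lines in a deliberately noisy dataset may be malformed; a more defensive variant (mirroring the error handling used in the bundled benchmarks) skips undecodable bytes and bad JSON:

```python
import json
import os

data_dir = "/mnt/z/CodeReality_Final/unified_dataset"
for filename in os.listdir(data_dir):
    if not filename.endswith('.jsonl'):
        continue
    with open(os.path.join(data_dir, filename), 'r', encoding='utf-8', errors='ignore') as f:
        for line in f:
            try:
                repo_data = json.loads(line)
            except json.JSONDecodeError:
                continue  # Skip malformed records
            print(repo_data.get('name', 'Unknown'))
```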

## Integrity Verification
Use the dataset index file for verification:
```bash
# Verify against index
python3 -c "
import json
with open('../analysis/dataset_index.json', 'r') as f:
    index = json.load(f)
print(f'Total files: {len(index[\"files\"])}')
print(f'Total repositories: {sum(f[\"repository_count\"] for f in index[\"files\"])}')
"
```

## See Also
- Dataset Card: `../docs/DATASET_CARD.md`
- Analysis Results: `../analysis/metrics.json`
- Evaluation Subset: `../eval/subset/`
data/enhanced_enhanced_enhanced_sustained_batch_000066_20250902_220910.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:152bae10366e4f91ed02998485c3146f089c0cbb978a83a1cb9fb67a3b2653c2
size 24399741

data/enhanced_enhanced_enhanced_sustained_batch_000101_20250829_181259.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:618eca264bfc53c294abd5ca803cab8a697530b03ebfda81c9d0b3980ecad1ce
size 14569292

data/enhanced_enhanced_enhanced_sustained_batch_000138_20250829_063128.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa23c0660cb88b8d0d5819670c3bc00ba80f2f79d3ef35dfb290c734f84b06c0
size 47493673

data/enhanced_enhanced_enhanced_sustained_batch_000750_20250829_004837.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f5a7c153dff6879503795bb9822179aa620b96127d2d4b739a9bfb39fb22ff8
size 42909991

data/enhanced_enhanced_enhanced_sustained_batch_001012_20250830_153945.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8920dc536f1e67aaa8e00265c4cc173b58a235562917cfa8a1d28fde5b0a7b9
size 50237646

data/enhanced_enhanced_enhanced_sustained_batch_001122_20250905_155200.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0600c087b08c7183ea0ad75c846a366f59628f8762e8e37b7cb446e7ec04821c
size 172512507

data/enhanced_enhanced_enhanced_sustained_batch_001200_20250908_113430.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:651bae562d7e12a8511b357d045931b4a49c5f80883f96208a055e0eeb11f9e8
size 35007374

data/enhanced_enhanced_enhanced_sustained_batch_001231_20250909_120242.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ed9ab836f80f065ae0ae83d2f8583e2126ab0bfc36d8bc15c5253c8dbf96f69
size 46636495

data/enhanced_enhanced_enhanced_sustained_batch_001636_20250911_144513.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e577e7d7a36b15375b23585b130dc33e434fdbb2433f5ae59ddbbb1844bb3124
size 90108127

data/enhanced_enhanced_sustained_batch_000012_20250829_082023.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:661ee288ad2bd9e300e66d7a9159e7c1ea751809d553b59609b2f7e808c33dd1
size 13526675

data/enhanced_enhanced_sustained_batch_000022_20250913_172734.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ae073dbc9b76fa1ae5cef4c7c90cc60c8f58d27f3fec331288f6b19d2d9e03c0
size 67672664

data/enhanced_enhanced_sustained_batch_000023_20250830_052929.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0d01a206057fcabd4666d849fee72d49f2713ad5b720e2bc5c099407bf40e38
size 17901677

data/enhanced_enhanced_sustained_batch_000026_20250910_170950.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55b73ad18759878cc0a92a1bca76ebce1aebe080759d60e38ab963e980d1a7f7
size 46780942

data/enhanced_enhanced_sustained_batch_000037_20250901_115620.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac2142a89a59723365a8a6c66665e614f2a48b9d1666097445486e56e1dd8798
size 30124818

data/enhanced_enhanced_sustained_batch_000037_20250905_185022.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21c9a9d9d1a994c294469117b65e85294bd8ceaa0c32f11536ca8332d1810857
size 99216661

data/enhanced_enhanced_sustained_batch_000042_20250909_195036.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fb1d4a6078b90e758145333d328c2719c5b2385c398b648bf40bbc591b34686
size 84983248

data/enhanced_enhanced_sustained_batch_000052_20250901_000828.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c038e55b0eba2ba221f34f8573f42e712dc457e92b61485d87d10f72b7880a6
size 82877450

data/enhanced_enhanced_sustained_batch_000055_20250906_072036.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d4fb115385a2e7351050ee2b1eae715755f01ae6424145b6e5a295388ebddb4
size 19245001

data/enhanced_enhanced_sustained_batch_000065_20250911_175251.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ffc86b15620503198c17fe20a4d473ca7352acc0b96e8be57fadd3a363d4a0d
size 56399520

data/enhanced_enhanced_sustained_batch_000068_20250913_181533.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92004e3ac137d1bb936b9d5c46f3463836a3b78e292ed33cbd41dd984fa77e3e
size 51179247

data/enhanced_enhanced_sustained_batch_000071_20250905_193927.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:141fbb88ed0e8caff85f1da92403776f26598fed7ad7e5abd1164c2b15d2e5d6
size 100104895

data/enhanced_enhanced_sustained_batch_000075_20250905_194728.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd2c8a341db9149fda9dcf1afd44c54061349a9e81aac564c27f549400ce8f45
size 58335276

data/enhanced_enhanced_sustained_batch_000080_20250906_140532.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1616cd71737022d1e01edbfd484227b8b851f201ab3bc9b916d067be9d82a499
size 99218381

data/enhanced_enhanced_sustained_batch_000086_20250903_223328.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b90b78fa18d59c9445f5aa57e566d83781cd019fb6d8c3054d0081d18570c4b
size 51919759

data/enhanced_enhanced_sustained_batch_000092_20250904_224315.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:282210672f7ed5a8c91460516b7bf7a38e8fefd1450e440607539cc6343a43a9
size 51884141

data/enhanced_enhanced_sustained_batch_000100_20250902_010630.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:779716fab7aa2f7ac359316d60fe093f2715aa407ed9aa0a89d39aef94547b0d
size 22062752

data/enhanced_enhanced_sustained_batch_000106_20250911_182607.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f04d1bc1d1866af28f845b459e81851affa38a587b33a0a3627811495d35b2e9
size 46704056

data/enhanced_enhanced_sustained_batch_000107_20250831_125844.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e20f77e511e6ac18063be909291b222fffb7405cf5613eefd0e16efe038f5241
size 51908668

data/enhanced_enhanced_sustained_batch_000112_20250907_203743.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:906540c6c314c174d91e68e3f0ad54aabcba16aed6f7a6700549d6a25f535402
size 90415466

data/enhanced_enhanced_sustained_batch_000116_20250831_130816.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87c606bf0ea7da59679f7fcd5f9820a5036d5690aa0de44f34ec2778845bdccc
size 72071363

data/enhanced_enhanced_sustained_batch_000133_20250902_013852.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ace196dad6f0f75c85bcf03e8003694ed1acca85842558a830659aecc7888f0
size 22277384

data/enhanced_enhanced_sustained_batch_000139_20250829_183519.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:796c98b2953c813858cce3f7c48d24ffad1212f520dd2c6c62b22f6e281bbb05
size 17902843

data/enhanced_enhanced_sustained_batch_000144_20250827_222718.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2166e686dc9db105868228bc84aa91202b30a3086f43e41d407dd1013ef70aad
size 28235648

data/enhanced_enhanced_sustained_batch_000145_20250905_213029.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7bae958e64b7980aa5ad7aa3188b718c20b31d9f705924a8d8eb892043745327
size 99216255

data/enhanced_enhanced_sustained_batch_000148_20250909_211703.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8d1949dfdb3ddc2607085025f0f6c04c81e11f46faa1e91b927b8646b853596
size 90437958

data/enhanced_enhanced_sustained_batch_000175_20250913_200555.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c7f0432e24a23238cbd5ec4d04ae6e03523cd1de35c5b2197d65bfe7659fef3
size 55360172

data/enhanced_enhanced_sustained_batch_000192_20250830_192234.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e8c1d8b21b50b1a7b490db7bbb3abc089d23cf1e9e37c43ada99faa29934929
size 43453726