# CodeReality-1T Evaluation Benchmarks
This directory contains demonstration benchmark scripts for evaluating models on the CodeReality-1T dataset.
## Available Benchmarks
### 1. License Detection Benchmark
**File**: `license_detection_benchmark.py`
**Purpose**: Evaluates automated license classification systems on deliberately noisy data.
**Features**:
- Rule-based feature extraction from repository content (a toy classifier sketch appears at the end of this section)
- Simple classification model for demonstration
- Performance metrics on license detection accuracy
- Analysis of license distribution patterns
**Usage**:
```bash
cd /path/to/codereality-1t/eval/benchmarks
python3 license_detection_benchmark.py
```
**Expected Results**:
- Low accuracy on the deliberately noisy dataset (0% license detection by design)
- Demonstrates robustness testing for license detection systems
- Outputs detailed distribution analysis
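As a rough illustration of the rule-based approach listed above, the sketch below classifies license text with a few keyword patterns. The patterns and the `classify_license` helper are illustrative assumptions for this README, not the script's actual implementation.
```python
import re

# Illustrative keyword patterns for a rule-based license classifier.
# Assumption: the real benchmark uses its own feature extraction, not these exact rules.
LICENSE_PATTERNS = {
    "MIT": re.compile(r"permission is hereby granted, free of charge", re.IGNORECASE),
    "Apache-2.0": re.compile(r"apache license,?\s*version 2\.0", re.IGNORECASE),
    "GPL-3.0": re.compile(r"gnu general public license.*version 3", re.IGNORECASE | re.DOTALL),
}

def classify_license(text: str) -> str:
    """Return the first matching license label, or 'unknown' for missing or noisy text."""
    for label, pattern in LICENSE_PATTERNS.items():
        if pattern.search(text):
            return label
    return "unknown"

print(classify_license("Permission is hereby granted, free of charge, to any person..."))  # MIT
```
On deliberately noisy repositories most inputs fall through to `"unknown"`, which is why the expected accuracy above is low.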
### 2. Code Completion Benchmark
**File**: `code_completion_benchmark.py`
**Purpose**: Evaluates code completion models using Pass@k metrics on real-world noisy code.
**Features**:
- Function extraction from Python, JavaScript, Java files
- Simple rule-based completion model for demonstration
- Pass@1, Pass@3, Pass@5 metric calculation (see the Pass@k sketch at the end of this section)
- Multi-language support with language-specific patterns
**Usage**:
```bash
cd /path/to/codereality-1t/eval/benchmarks
python3 code_completion_benchmark.py
```
**Expected Results**:
- Baseline performance metrics for comparison
- Language distribution analysis
- Quality scoring of completions
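For reference, Pass@k is typically estimated with the unbiased formula shown below (n sampled completions per problem, c of which pass). This is a minimal sketch for the metrics listed above; the benchmark script may compute them differently.
```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate: n completions sampled per problem, c of them passed."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing completion
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 5 completions sampled, 2 passed the functional checks.
for k in (1, 3, 5):
    print(f"Pass@{k} = {pass_at_k(n=5, c=2, k=k):.3f}")
```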
## Benchmark Characteristics
### Dataset Integration
- **Data Source**: Loads from `/mnt/z/CodeReality_Final/unified_dataset` by default
- **Sampling**: Uses random sampling for performance (configurable)
- **Formats**: Handles the JSONL repository format from CodeReality-1T (see the loading sketch below)
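The loader sketch below shows one way to sample repository records from JSONL shards. The shard listing and per-line schema are assumptions about the unified dataset layout, so adjust them to the actual structure.
```python
import json
import os
import random

def load_dataset_sample(data_dir: str, sample_size: int = 500, seed: int = 42) -> list:
    """Randomly sample repository records from JSONL shards (schema assumed, not verified)."""
    random.seed(seed)  # deterministic sampling, matching the seed-controlled evaluation
    shards = [f for f in os.listdir(data_dir) if f.endswith(".jsonl")]
    records = []
    for name in random.sample(shards, min(len(shards), 10)):  # cap the number of shards read
        with open(os.path.join(data_dir, name), "r", encoding="utf-8") as fh:
            for line in fh:
                try:
                    records.append(json.loads(line))
                except json.JSONDecodeError:
                    continue  # the dataset is deliberately noisy; skip malformed lines
    return random.sample(records, min(len(records), sample_size))
```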
### Evaluation Philosophy
- **Deliberately Noisy**: Tests model robustness on real-world messy data
- **Baseline Metrics**: Provides simple baselines for comparison (not production-ready)
- **Reproducible**: Deterministic evaluation with random seed control
- **Research Focus**: Results illustrate the challenges of noisy data rather than competitive benchmark scores
### Extensibility
- **Modular Design**: Easy to extend with new benchmarks
- **Configurable**: Sample sizes and evaluation criteria can be adjusted
- **Multiple Languages**: Framework supports cross-language evaluation
## Configuration
### Data Path Configuration
Update the `data_dir` variable in each script to point to your CodeReality-1T dataset:
```python
data_dir = "/path/to/your/codereality-1t/unified_dataset"
```
### Sample Size Adjustment
Modify sample sizes for performance tuning:
```python
sample_size = 500 # Adjust based on computational resources
```
## Output Files
Each benchmark generates JSON results files:
- `license_detection_results.json`
- `code_completion_results.json`
These contain detailed metrics and can be used for comparative analysis.
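The sketch below loads both result files for side-by-side inspection; the metric keys inside the JSON are not documented here, so it only prints whatever top-level keys are present (assuming each file holds a JSON object of metrics).
```python
import json

# Load both result files and list their top-level metrics for comparison.
with open("license_detection_results.json", "r", encoding="utf-8") as fh:
    license_results = json.load(fh)
with open("code_completion_results.json", "r", encoding="utf-8") as fh:
    completion_results = json.load(fh)

print("License detection metrics:", sorted(license_results.keys()))
print("Code completion metrics:", sorted(completion_results.keys()))
```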
### Sample Results
Example results are available in `../results/`:
- `license_detection_sample_results.json` - Baseline license detection performance
- `code_completion_sample_results.json` - Baseline code completion metrics
These demonstrate expected performance on CodeReality-1T's deliberately noisy data.
## Requirements
### Python Dependencies
The benchmark scripts use only the Python standard library (`json`, `os`, `re`, `random`, `typing`, `collections`); no `pip install` step is required.
### System Requirements
- **Memory**: Minimum 4GB RAM for default sample sizes
- **Storage**: Access to CodeReality-1T dataset (3TB)
- **Compute**: Single-core sufficient for demonstration scripts
## Extending the Benchmarks
### Adding New Tasks
1. Create a new Python file following the naming convention `{task}_benchmark.py`
2. Implement the standard evaluation interface:
```python
def load_dataset_sample(data_dir, sample_size): ...
def run_benchmark(repositories): ...
def print_benchmark_results(results): ...
```
3. Add task-specific evaluation metrics (a hypothetical skeleton follows below)
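As an illustration of the interface above, a hypothetical `repository_stats_benchmark.py` could look like the skeleton below; the task, the `has_readme` field, and all bodies are placeholders, not part of the released benchmarks.
```python
# repository_stats_benchmark.py -- hypothetical skeleton, not a released benchmark.
import json

def load_dataset_sample(data_dir, sample_size):
    """Load a random sample of repositories (see the JSONL loading sketch above)."""
    ...

def run_benchmark(repositories):
    """Task-specific metrics; 'has_readme' is an assumed field name used for illustration."""
    with_readme = sum(1 for repo in repositories if repo.get("has_readme"))
    return {"total_repositories": len(repositories), "with_readme": with_readme}

def print_benchmark_results(results):
    print(json.dumps(results, indent=2))

if __name__ == "__main__":
    print_benchmark_results(run_benchmark([{"has_readme": True}, {}]))
```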
### Supported Tasks
Current benchmarks cover:
- **License Detection**: Classification and compliance
- **Code Completion**: Generation and functional correctness
**Framework Scaffolds (PLANNED - Implementation Needed)**:
- [`bug_detection_benchmark.py`](bug_detection_benchmark.py) - Bug detection on commit pairs (scaffold only)
- [`cross_language_translation_benchmark.py`](cross_language_translation_benchmark.py) - Code translation across languages (scaffold only)
**Future Planned Benchmarks - Roadmap**:
- **v1.1.0 (Q1 2025)**: Complete bug detection and cross-language translation implementations
- **v1.2.0 (Q2 2025)**: Repository classification and domain detection benchmarks
- **v1.3.0 (Q3 2025)**: Build system analysis and validation frameworks
- **v2.0.0 (Q4 2025)**: Commit message generation and issue-to-code alignment benchmarks
**Community Priority**: Framework scaffolds ready for community implementation!
## Performance Notes
### Computational Complexity
- **License Detection**: O(n) where n = repository count
- **Code Completion**: O(n*m) where m = average functions per repository
### Optimization Tips
1. **Sampling**: Reduce sample_size for faster execution
2. **Filtering**: Pre-filter repositories by criteria
3. **Parallelization**: Use multiprocessing for large-scale evaluation (see the sketch below)
4. **Caching**: Cache extracted features for repeated runs
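For tip 3, a minimal parallelization sketch using the standard library is shown below; `evaluate_repository` is a placeholder for whatever per-repository work a benchmark performs.
```python
from multiprocessing import Pool

def evaluate_repository(repo: dict) -> dict:
    """Placeholder per-repository evaluation; replace with real benchmark logic."""
    return {"name": repo.get("name", "unknown"), "score": 0.0}

def run_parallel(repositories: list, workers: int = 4) -> list:
    """Fan repository-level evaluation out across worker processes."""
    with Pool(processes=workers) as pool:
        return pool.map(evaluate_repository, repositories)

if __name__ == "__main__":
    demo = [{"name": f"repo_{i}"} for i in range(8)]
    print(run_parallel(demo))
```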
## Research Applications
### Model Development
- **Robustness Testing**: Test models on noisy, real-world data
- **Baseline Comparison**: Compare against simple rule-based systems
- **Cross-domain Evaluation**: Test generalization across domains
### Data Science Research
- **Curation Methods**: Develop better filtering techniques
- **Quality Metrics**: Research automated quality assessment
- **Bias Analysis**: Study representation bias in large datasets
## Citation
When using these benchmarks in research, please cite the CodeReality-1T dataset:
```bibtex
@misc{codereality2025,
  title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/vinsblack}},
  note={Version 1.0.0}
}
```
## Support
- **Issues**: https://github.com/vinsguru/codereality-1t/issues
- **Contact**: [email protected]
- **Documentation**: See the main dataset README and accompanying documentation