# CodeReality-1T Evaluation Benchmarks
This directory contains demonstration benchmark scripts for evaluating models on the CodeReality-1T dataset.
## Available Benchmarks
### 1. License Detection Benchmark
**File**: `license_detection_benchmark.py`
**Purpose**: Evaluates automated license classification systems on deliberately noisy data.
**Features**:
- Rule-based feature extraction from repository content
- Simple classification model for demonstration
- Performance metrics on license detection accuracy
- Analysis of license distribution patterns
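The exact heuristics live in `license_detection_benchmark.py`; the sketch below only illustrates the general rule-based pattern. The keyword patterns and repository field names (`files`, `path`, `content`) are illustrative assumptions, not the script's actual schema.
```python
import re

# Illustrative keyword patterns; the real script defines its own rules.
LICENSE_PATTERNS = {
    "MIT": re.compile(r"\bMIT License\b", re.IGNORECASE),
    "Apache-2.0": re.compile(r"Apache License,?\s*Version 2\.0", re.IGNORECASE),
    "GPL-3.0": re.compile(r"GNU GENERAL PUBLIC LICENSE\s*Version 3", re.IGNORECASE),
}

def extract_license_features(repo: dict) -> dict:
    """Count license keyword hits in likely license-bearing files."""
    counts = {name: 0 for name in LICENSE_PATTERNS}
    for f in repo.get("files", []):
        path = f.get("path", "").lower()
        if "license" in path or "copying" in path or path.endswith("readme.md"):
            text = f.get("content", "")
            for name, pattern in LICENSE_PATTERNS.items():
                counts[name] += len(pattern.findall(text))
    return counts

def classify_license(features: dict) -> str:
    """Pick the license with the most hits, or 'unknown' when nothing matched."""
    best = max(features, key=features.get)
    return best if features[best] > 0 else "unknown"
```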
**Usage**:
```bash
cd /path/to/codereality-1t/eval/benchmarks
python3 license_detection_benchmark.py
```
**Expected Results**:
- Low accuracy due to the deliberately noisy dataset (0% license detection by design)
- Demonstrates robustness testing for license detection systems
- Outputs detailed distribution analysis
### 2. Code Completion Benchmark
**File**: `code_completion_benchmark.py`
**Purpose**: Evaluates code completion models using Pass@k metrics on real-world noisy code.
**Features**:
- Function extraction from Python, JavaScript, and Java files
- Simple rule-based completion model for demonstration
- Pass@1, Pass@3, Pass@5 metric calculation
- Multi-language support with language-specific patterns
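Pass@k is commonly computed with the unbiased estimator popularized by HumanEval: with n sampled completions per problem, of which c pass, Pass@k = 1 - C(n-c, k) / C(n, k). The demonstration script may use a simpler counting scheme; a minimal reference implementation looks like this:
```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: probability that at least one of k sampled
    completions is correct, given that c of n total completions pass."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing completion
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 5 completions per problem, 1 of them passes its checks
print(pass_at_k(n=5, c=1, k=1))  # 0.2
print(pass_at_k(n=5, c=1, k=3))  # 0.6
print(pass_at_k(n=5, c=1, k=5))  # 1.0
```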
**Usage**:
```bash
cd /path/to/codereality-1t/eval/benchmarks
python3 code_completion_benchmark.py
```
**Expected Results**:
- Baseline performance metrics for comparison
- Language distribution analysis
- Quality scoring of completions
## Benchmark Characteristics
### Dataset Integration
- **Data Source**: Loads from `/mnt/z/CodeReality_Final/unified_dataset` by default
- **Sampling**: Uses random sampling for performance (configurable)
- **Formats**: Handles JSONL repository format from CodeReality-1T
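The scripts ship with their own loaders; the sketch below shows one way to read a directory of JSONL shards with seeded sampling, matching the integration points above. The shard cap and record layout are illustrative assumptions, not the dataset's documented schema.
```python
import json
import os
import random

def load_dataset_sample(data_dir: str, sample_size: int = 500, seed: int = 42) -> list:
    """Load a reproducible random sample of repository records from JSONL shards."""
    random.seed(seed)  # seed control keeps sampling deterministic across runs
    shards = sorted(
        os.path.join(data_dir, name)
        for name in os.listdir(data_dir)
        if name.endswith(".jsonl")
    )
    repositories = []
    for path in random.sample(shards, k=min(len(shards), 50)):  # arbitrary shard cap
        with open(path, "r", encoding="utf-8", errors="replace") as handle:
            for line in handle:
                line = line.strip()
                if not line:
                    continue
                try:
                    repositories.append(json.loads(line))
                except json.JSONDecodeError:
                    continue  # deliberately noisy data: skip malformed lines
    return random.sample(repositories, k=min(len(repositories), sample_size))
```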
### Evaluation Philosophy
- **Deliberately Noisy**: Tests model robustness on real-world messy data
- **Baseline Metrics**: Provides simple baselines for comparison (not production-ready)
- **Reproducible**: Deterministic evaluation with random seed control
- **Research Focus**: Results show challenges of noisy data, not competitive benchmarks
### Extensibility
- **Modular Design**: Easy to extend with new benchmarks
- **Configurable**: Sample sizes and evaluation criteria can be adjusted
- **Multiple Languages**: Framework supports cross-language evaluation
## Configuration
### Data Path Configuration
Update the `data_dir` variable in each script to point to your CodeReality-1T dataset:
```python
data_dir = "/path/to/your/codereality-1t/unified_dataset"
```
### Sample Size Adjustment
Modify sample sizes for performance tuning:
```python
sample_size = 500  # Adjust based on computational resources
```
## Output Files
Each benchmark generates JSON results files:
- `license_detection_results.json`
- `code_completion_results.json`
These contain detailed metrics and can be used for comparative analysis.
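Because both outputs are plain JSON, they can be loaded directly for side-by-side comparison; the key names below are hypothetical, so inspect the generated files for the actual schema.
```python
import json

with open("license_detection_results.json") as f:
    license_results = json.load(f)
with open("code_completion_results.json") as f:
    completion_results = json.load(f)

# Hypothetical keys; check the JSON output for the real field names.
print("License detection accuracy:", license_results.get("accuracy"))
print("Pass@1:", completion_results.get("pass_at_1"))
```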
### Sample Results
Example results are available in `../results/`:
- `license_detection_sample_results.json` - Baseline license detection performance
- `code_completion_sample_results.json` - Baseline code completion metrics
These demonstrate expected performance on CodeReality-1T's deliberately noisy data.
## Requirements
### Python Dependencies
No third-party packages are required; the scripts rely only on Python standard library modules (`json`, `os`, `re`, `random`, `typing`, `collections`).
### System Requirements
- **Memory**: Minimum 4GB RAM for default sample sizes
- **Storage**: Access to CodeReality-1T dataset (3TB)
- **Compute**: Single-core sufficient for demonstration scripts
## Extending the Benchmarks
### Adding New Tasks
1. Create a new Python file following the naming convention: `{task}_benchmark.py`
2. Implement the standard evaluation interface:
```python
def load_dataset_sample(data_dir, sample_size): ...
def run_benchmark(repositories): ...
def print_benchmark_results(results): ...
```
3. Add task-specific evaluation metrics
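A minimal skeleton for a new benchmark, assuming the interface above; the loader, metric names, and output file name are placeholders to be replaced with task-specific logic.
```python
#!/usr/bin/env python3
"""Skeleton for a new CodeReality-1T benchmark (placeholder logic only)."""
import json
import random

def load_dataset_sample(data_dir, sample_size):
    # Reuse or adapt the JSONL sampling logic from an existing benchmark script.
    return []

def run_benchmark(repositories):
    results = {"total_repositories": len(repositories), "task_metric": 0.0}
    for repo in repositories:
        pass  # task-specific evaluation goes here
    return results

def print_benchmark_results(results):
    print(json.dumps(results, indent=2))

if __name__ == "__main__":
    random.seed(42)  # deterministic sampling, matching the other benchmarks
    repos = load_dataset_sample("/path/to/your/codereality-1t/unified_dataset", 500)
    results = run_benchmark(repos)
    print_benchmark_results(results)
    with open("new_task_results.json", "w") as handle:
        json.dump(results, handle, indent=2)
```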
### Supported Tasks
Current benchmarks cover:
- **License Detection**: Classification and compliance
- **Code Completion**: Generation and functional correctness
**Framework Scaffolds (PLANNED - Implementation Needed)**:
- [`bug_detection_benchmark.py`](bug_detection_benchmark.py) - Bug detection on commit pairs (scaffold only)
- [`cross_language_translation_benchmark.py`](cross_language_translation_benchmark.py) - Code translation across languages (scaffold only)
**Future Planned Benchmarks - Roadmap**:
- **v1.1.0 (Q1 2025)**: Complete bug detection and cross-language translation implementations
- **v1.2.0 (Q2 2025)**: Repository classification and domain detection benchmarks
- **v1.3.0 (Q3 2025)**: Build system analysis and validation frameworks
- **v2.0.0 (Q4 2025)**: Commit message generation and issue-to-code alignment benchmarks
**Community Priority**: Framework scaffolds ready for community implementation!
## Performance Notes
### Computational Complexity
- **License Detection**: O(n) where n = repository count
- **Code Completion**: O(n*m) where m = average functions per repository
### Optimization Tips
1. **Sampling**: Reduce `sample_size` for faster execution
2. **Filtering**: Pre-filter repositories by criteria
3. **Parallelization**: Use multiprocessing for large-scale evaluation
4. **Caching**: Cache extracted features for repeated runs
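The shipped scripts are single-process; the sketch below illustrates tips 3 and 4 under the assumption of a top-level per-repository evaluation function (all names are placeholders). Note that `functools.lru_cache` is per-process, so with multiprocessing it only helps within each worker or on repeated single-process runs.
```python
import functools
from multiprocessing import Pool

@functools.lru_cache(maxsize=None)
def extract_features_cached(repo_path: str) -> tuple:
    """Cache feature extraction per repository (tip 4); the cache lives per process."""
    return (repo_path,)  # placeholder for real feature extraction

def evaluate_repository(repo_path: str) -> dict:
    """Evaluate one repository; must be a top-level function so Pool can pickle it."""
    features = extract_features_cached(repo_path)
    return {"repo": repo_path, "num_features": len(features)}

def evaluate_parallel(repo_paths: list, workers: int = 4) -> list:
    """Fan evaluation out across worker processes (tip 3)."""
    with Pool(processes=workers) as pool:
        return pool.map(evaluate_repository, repo_paths)

if __name__ == "__main__":
    print(evaluate_parallel(["repo_a", "repo_b", "repo_c"]))
```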
## Research Applications
### Model Development
- **Robustness Testing**: Test models on noisy, real-world data
- **Baseline Comparison**: Compare against simple rule-based systems
- **Cross-domain Evaluation**: Test generalization across domains
### Data Science Research
- **Curation Methods**: Develop better filtering techniques
- **Quality Metrics**: Research automated quality assessment
- **Bias Analysis**: Study representation bias in large datasets
## Citation
When using these benchmarks in research, please cite the CodeReality-1T dataset:
```bibtex
@misc{codereality2025,
  title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/vinsblack}},
  note={Version 1.0.0}
}
```
## Support
- **Issues**: https://github.com/vinsguru/codereality-1t/issues
- **Contact**: [email protected]
- **Documentation**: See main dataset README and documentation