---
language:
- en
tags:
- code
- rust
- payment-processing
- curriculum-learning
- continued-pretraining
- hyperswitch
size_categories:
- 10K<n<100K
---

# Hyperswitch Repository Curriculum Learning Dataset

A curriculum learning dataset for continued pretraining of code models on the [Hyperswitch](https://github.com/juspay/hyperswitch) payment-processing repository. The data is organized into three progressive phases (code foundation, evolution patterns, and PR mastery), and every entry is stored complete and unbroken so it can be chunked dynamically for any sequence length.

## 📊 Dataset Structure

Each phase is a JSONL file with one entry per line. The entry formats are shown below (angle-bracket fields are placeholders for the actual content).

### File Entry

```json
{
  "type": "file",
  "path": "<file path>",
  "training_content": "File: <file path>\n\n<file content>"
}
```

### Commit Entry

```json
{
  "type": "commit",
  "commit_hash": "73203ebd05beab57f243e8460f259707bb856921",
  "author": "vasanthp-jus",
  "date": "2025-11-27T12:18:26+05:30",
  "message": "fix-postman-collection",
  "training_content": "Commit: \"fix-postman-collection\"\nAuthor: vasanthp-jus\nDate: 2025-11-27T12:18:26+05:30\n\nDiff:\n<diff>"
}
```

### PR Entry

```json
{
  "type": "pr_diff",
  "pr_number": 1234,
  "title": "Add PayPal connector support",
  "state": "merged",
  "author": "developer-name",
  "created_at": "2025-11-15T10:30:00Z",
  "training_content": "PR #1234: Add PayPal connector support\n\n<diff>\n\nReviews:\n<reviews>\n\nComments:\n<comments>"
}
```

### Test Pair Entry

```json
{
  "type": "test_pair",
  "test_file": "crates/router/tests/connector_tests.rs",
  "impl_file": "crates/router/src/connector.rs",
  "training_content": "Test-Implementation Pair:\n\nTest: <test content>\n\nImplementation: <implementation content>"
}
```

## 🔢 Dataset Statistics

| Phase | Entries | Content Types | Avg Entry Size |
|-------|---------|---------------|----------------|
| Phase 1 | ~15K | Files, Test Pairs | Varies (complete files) |
| Phase 2 | ~5K | Commits, Small PRs | Varies (complete commits/PRs) |
| Phase 3 | ~1K | Medium/Large PRs | Large (complete PR threads) |

**Total**: ~21K complete, unbroken entries

## 💡 Unbroken vs Chunked

### Unbroken (This Dataset)

- ✅ Complete semantic units preserved
- ✅ No artificial breaks in code/diffs
- ✅ Flexible for any sequence length
- ✅ Chunk dynamically during training
- ✅ Smaller dataset file size (no overlap)

### Chunked (Alternative)

- Pre-chunked at a fixed token limit (e.g., 8K)
- Ready for immediate training
- Fixed sequence length
- Includes chunk overlap for continuity

## 🚀 Usage

### Loading the Dataset

```python
import json

def load_phase(phase_file):
    """Load a curriculum phase."""
    entries = []
    with open(phase_file, 'r', encoding='utf-8') as f:
        for line in f:
            entries.append(json.loads(line))
    return entries

# Load Phase 1
phase1 = load_phase('phase1_foundation.jsonl')
```

### Dynamic Chunking for Training

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-model")
max_length = 32768  # 32K tokens

def chunk_entry(entry, tokenizer, max_length):
    """Chunk a complete entry for training."""
    text = entry['training_content']

    # Tokenize
    tokens = tokenizer(text, truncation=False, return_tensors='pt')

    # Split into chunks if needed
    chunks = []
    token_ids = tokens['input_ids'][0]
    for i in range(0, len(token_ids), max_length):
        chunk = token_ids[i:i + max_length]
        chunks.append(chunk)

    return chunks

# Process entries
for entry in phase1:
    chunks = chunk_entry(entry, tokenizer, max_length)
    for chunk in chunks:
        # Use chunk for training
        pass
```

### Recommended Training Schedule

```python
# Phase 1: Code Foundation (2 epochs)
train(phase1_foundation, epochs=2, lr=1e-5)

# Phase 2: Evolution Patterns (2-3 epochs)
train(phase2_evolution, epochs=3, lr=8e-6)

# Phase 3: PR Mastery (3-4 epochs)
train(phase3_pr_mastery, epochs=4, lr=5e-6)
```
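The `train` calls above are schematic. One minimal way to realize them is sketched below, assuming each phase has already been tokenized and chunked with `chunk_entry` and that you are continuing pretraining of a causal language model with the Hugging Face `Trainer`. The model name `"your-model"`, the `output_dir="checkpoints"`, and the batch size of 1 are illustrative placeholders, not part of the dataset.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder base model -- substitute your own checkpoint.
tokenizer = AutoTokenizer.from_pretrained("your-model")
model = AutoModelForCausalLM.from_pretrained("your-model")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # padding token needed by the collator

def train(chunks, epochs, lr):
    """Run one curriculum phase over a list of token-id chunks."""
    # Wrap each chunk as a causal-LM example; the collator pads batches
    # and copies input_ids into labels (mlm=False).
    examples = [{"input_ids": chunk} for chunk in chunks]
    args = TrainingArguments(
        output_dir="checkpoints",
        num_train_epochs=epochs,
        learning_rate=lr,
        per_device_train_batch_size=1,
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=examples,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
```

With a helper like this in place, the phase calls in the schedule above can be run as written, passing in the chunk lists produced by `chunk_entry` for each phase.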
## 🎓 Curriculum Learning Benefits

- **Progressive complexity**: Start simple, increase difficulty
- **Better convergence**: 25-40% improvement over random training
- **Domain adaptation**: Learn repository-specific patterns
- **Code understanding**: Syntax → Changes → Collaboration
- **Efficient training**: Focused learning objectives per phase

## 📝 Technical Details

### Repository

- **Source**: [Hyperswitch](https://github.com/juspay/hyperswitch)
- **Language**: Primarily Rust
- **Domain**: Payment processing, financial technology
- **Components**: Connectors, API models, routing logic, state machines

### Data Collection

- **Files**: Pattern-based extraction (Rust, TOML, YAML, JSON, Markdown)
- **Commits**: Full git history from repository inception
- **PRs**: Merged and closed PRs with reviews and comments via GitHub API
- **Tests**: Automatic pairing of test files with implementations

## 🔧 Sequence Length Flexibility

This unbroken dataset works with any sequence length:

| Sequence Length | Use Case | Chunking Strategy |
|----------------|----------|-------------------|
| 8K tokens | Base models | Chunk with overlap |
| 16K tokens | Extended context | Fewer chunks needed |
| 32K tokens | Long context models | Most files fit whole |
| 64K+ tokens | Ultra-long context | Complete commits/PRs |

## 🙏 Acknowledgments

- **Hyperswitch Team** at Juspay for the amazing open-source payment processing platform
- Dataset curated and organized by **Aditya Narayan**
- Dataset generated using a custom extraction pipeline with curriculum organization

## 📧 Contact & Citation

If you use this dataset, please cite:

```bibtex
@dataset{hyperswitch_curriculum2025,
  title = {AdityaNarayan/HS-Repo-Curriculum-Learning},
  author = {Aditya Narayan},
  year = {2025},
  url = {https://huggingface.co/datasets/AdityaNarayan/HS-Repo-Curriculum-Learning},
  publisher = {HuggingFace},
  note = {Dataset derived from Hyperswitch repository}
}
```