# dissimilar_LoRA_64

Fine-tuned LLaMA model on the QA_CODE_SUMMARIZATION dataset.

- **LoRA**: Enabled
- **LoRA Rank**: 64
- **Tasks**: QA_CODE_SUMMARIZATION
- **Base Model**: LLaMA 1B
- **Optimizer**: AdamW
- **Batch Size**: 4

Trained using the 🤗 Transformers `Trainer` API.
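For readers unfamiliar with LoRA, the rank-64 adaptation listed above can be sketched in plain NumPy. This is a minimal illustration, not the training code: the hidden size and the `alpha` scaling factor are assumptions (the card only specifies the rank), and only the rank-64 matrices `A` and `B` would be trained while the base weight stays frozen.

```python
import numpy as np

# Illustrative LoRA update: rank r = 64 matches this model card;
# the hidden size and alpha are hypothetical, not taken from the card.
d_in, d_out, r = 2048, 2048, 64
alpha = 128  # assumed LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((4, d_in))  # batch size 4, matching the training config
y = lora_forward(x)
print(y.shape)  # (4, 2048)
```

Because `B` is zero-initialized, the adapted model starts out identical to the base model; fine-tuning then learns the low-rank correction without touching `W`.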