# Medical Advisor QLoRA
This is a QLoRA (4-bit quantized LoRA) adapter fine-tuned for medical lab result analysis conversations.
## Model Details
- Base Model: unsloth/Qwen3-8B-unsloth-bnb-4bit
- Training Method: QLoRA with Unsloth optimization
- Dataset: Custom medical lab analysis dataset
- Training Steps: 100
- LoRA Rank: 32
- Target Modules: All linear layers (q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj); see the configuration sketch below
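For reference, the adapter setup above corresponds to a PEFT configuration along these lines (a sketch: rank, alpha, and dropout come from the Model Architecture section below, while `bias` and `task_type` are assumed defaults):

```python
from peft import LoraConfig

# Sketch of the adapter configuration; bias and task_type are assumed defaults.
lora_config = LoraConfig(
    r=32,                # LoRA rank (see Model Architecture)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```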
## Performance
- Format Consistency: 100% (measured on a previous training run; see the illustrative check below)
- Response Length: within the target range (measured on a previous training run)
- Test Accuracy: perfect format matching on the 20% holdout set (measured on a previous training run)
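The evaluation script is not published, but a format check along these lines could produce the consistency figure (a hypothetical sketch; `matches_format` and the patterns it tests are illustrative, based on the Expected Output Format section below):

```python
import re

# Hypothetical format check: a response "matches" if it contains the
# analysis header and at least one bolded lab-value line of the form
# "**Name: value unit**" (see Expected Output Format below).
def matches_format(response: str) -> bool:
    has_header = "Lab Results Analysis" in response
    has_lab_line = re.search(r"\*\*.+:\s*[\d.]+\s*(%|mg/dL)\*\*", response) is not None
    return has_header and has_lab_line
```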
## Usage
```python
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the 4-bit quantized base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-8B-unsloth-bnb-4bit",
    max_seq_length=2048,
    dtype=None,          # auto-detect (e.g. bfloat16 on Ampere+)
    load_in_4bit=True,
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(model, "kaushik2202/medical-advisor-qlora")

# Enable Unsloth's fast inference mode
FastLanguageModel.for_inference(model)

# Use for medical lab analysis
prompt = """Human: I'm a 45-year-old male who just got my lab results back. I'd like to understand what they mean, especially in context of my lifestyle.

**Lab Results:** • HbA1c: 6.2% • Total Cholesterol: 180.0 mg/dL • HDL Cholesterol: 45.0 mg/dL

**My Lifestyle:** • Vigorous exercise: 20.0 days • Sleep duration: 7.0 hours

Can you explain what these results mean for my health, considering my age and lifestyle factors?"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
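Note that `generate` returns the prompt tokens followed by the completion; to keep only the model's answer, slice the output by the prompt length:

```python
# Drop the echoed prompt and decode only the newly generated tokens
prompt_len = inputs["input_ids"].shape[1]
answer = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(answer)
```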
## Expected Output Format
The model provides structured medical analysis with:
- Age-specific context
- Professional medical formatting
- Lab value ranges and interpretations
- Lifestyle recommendations
- Clear next steps
Example response format:

```
Assistant: I'll analyze your lab results in the context of your age (45) and lifestyle factors.

## 🔬 Your Lab Results Analysis

**HbA1c: 6.2%** (prediabetic)
• Range: 5.7-6.4%
• Health impact: Pre-diabetes - lifestyle intervention recommended

**Total Cholesterol: 180.0 mg/dL** (optimal)
• Range: <200 mg/dL
• Health impact: Low heart disease risk

[... continued analysis ...]
```
## Training Details
- Dataset Size: 830 examples (from a previous training run; may vary for new Qwen3 runs)
- Training Examples: 80% split (from a previous training run)
- Validation Examples: 20% holdout (from a previous training run)
- Loss Convergence: observed during training
- Evaluation Performance: to be measured during training
- Memory Efficiency: 1.09% trainable parameters
## Model Architecture
- Trainable Parameters: 87,293,952 (1.09% of total; see the verification sketch below)
- Total Parameters: 8,000,000,000
- Quantization: 4-bit with BitsAndBytes
- LoRA Configuration: Rank 32, Alpha 16, Dropout 0.05
- Hardware: NVIDIA A100-SXM4-40GB
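The trainable-parameter count follows from the LoRA formula: each adapted linear layer of shape d_in × d_out adds r × (d_in + d_out) parameters. Using Qwen3-8B's published dimensions (hidden size 4096, KV projection width 8 heads × 128 = 1024, MLP intermediate size 12288, 36 layers), a quick check reproduces the figure above (a sketch; the dimensions are taken from the public Qwen3-8B config):

```python
r = 32                      # LoRA rank
hidden, kv, mlp = 4096, 1024, 12288
layers = 36

per_layer = (
      r * (hidden + hidden)  # q_proj:    4096 -> 4096
    + r * (hidden + kv)      # k_proj:    4096 -> 1024
    + r * (hidden + kv)      # v_proj:    4096 -> 1024
    + r * (hidden + hidden)  # o_proj:    4096 -> 4096
    + r * (hidden + mlp)     # gate_proj: 4096 -> 12288
    + r * (hidden + mlp)     # up_proj:   4096 -> 12288
    + r * (mlp + hidden)     # down_proj: 12288 -> 4096
)
total = per_layer * layers
print(total)                            # 87293952
print(f"{total / 8_000_000_000:.2%}")   # ~1.09%
```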
## License
This adapter inherits the license of its base model, Qwen3-8B (Apache 2.0). Use responsibly, for educational purposes only.
⚠️ Disclaimer: Not intended for actual medical diagnosis. Always consult healthcare professionals for medical advice.
## Citation
If you use this model, please cite:
```bibtex
@misc{medical-advisor-qlora,
  author    = {kaushik2202},
  title     = {Medical Advisor QLoRA},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/kaushik2202/medical-advisor-qlora}
}
```
## Training Configuration
- Base Model: Qwen3-8B (4-bit quantized)
- Framework: Unsloth + Transformers + PEFT
- Optimizer: AdamW 8-bit
- Learning Rate: 2e-4 with linear scheduler
- Batch Size: 2 per device (effective batch size 8 via 4 gradient-accumulation steps; see the trainer sketch below)
- Sequence Length: 2048 tokens
- Hardware: NVIDIA A100-SXM4-40GB
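Put together, the run roughly corresponds to the following trainer setup (a sketch assuming TRL's `SFTTrainer`, as in typical Unsloth recipes; `train_dataset` and `output_dir` are placeholders, and dataset formatting details are omitted):

```python
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical trainer setup mirroring the configuration above.
# `model` and `tokenizer` come from the Usage section; `train_dataset`
# is a placeholder for the custom lab-analysis dataset.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size 8
        max_steps=100,
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        optim="adamw_8bit",
        output_dir="outputs",           # placeholder
    ),
)
trainer.train()
```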
## Use Cases
- Medical lab result analysis
- Healthcare consultation
- Lifestyle guidance based on health data
- Medical education