Qwen2.5-0.5B Merged Model for Text-to-SQL

This is a fully merged model (base + LoRA) ready for direct use. No need to load adapters separately!
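
For comparison, using a separate (non-merged) LoRA release would mean loading the adapter on top of the base model with peft, roughly as in the sketch below (the adapter repository id is a placeholder, not a real repo). With this merged checkpoint you skip that step entirely and load a single model, as shown under Usage.

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Only needed for a *non-merged* LoRA release; replace the placeholder with the real adapter repo id.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder id

# None of this is required for this repository -- see Usage below.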

Performance

Spider Benchmark (200 examples)

| Metric             | Score  |
|--------------------|--------|
| Exact Match        | 0.00%  |
| Normalized Match   | 0.00%  |
| Component Accuracy | 91.94% |
| Average Similarity | 21.78% |

Training Metrics

| Metric     | Base   | Fine-tuned | Improvement |
|------------|--------|------------|-------------|
| Loss       | 2.1429 | 0.5823     | 72.83% ⬆️   |
| Perplexity | 8.5244 | 1.7901     | 79.00% ⬆️   |
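
The reported perplexities look like exp(evaluation loss), which is easy to verify:

import math

# Perplexity as exp(cross-entropy loss); values match the table above.
print(math.exp(2.1429))  # ≈ 8.52 (base)
print(math.exp(0.5823))  # ≈ 1.79 (fine-tuned)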

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "vindows/qwen2.5-0.5b-text-to-sql-merged",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(
    "vindows/qwen2.5-0.5b-text-to-sql-merged",
    trust_remote_code=True
)

# Generate SQL from natural language
prompt = """Convert the following natural language question to SQL:

Database: concert_singer
Question: How many singers do we have?

SQL:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy decoding; temperature has no effect when sampling is off
result = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract SQL (remove prompt and extra text)
sql = result.split("SQL:")[-1].strip().split('\n\n')[0]
print(sql)
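
For repeated queries it can help to wrap prompt construction, generation, and SQL extraction in one helper. The sketch below reuses the prompt format above; the function name and defaults are illustrative, and it assumes model and tokenizer are already loaded as shown.

def generate_sql(question: str, database: str, max_new_tokens: int = 128) -> str:
    """Build the prompt shown above, run greedy decoding, and extract the SQL."""
    prompt = (
        "Convert the following natural language question to SQL:\n\n"
        f"Database: {database}\n"
        f"Question: {question}\n\n"
        "SQL:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return text.split("SQL:")[-1].strip().split("\n\n")[0]

print(generate_sql("How many singers do we have?", "concert_singer"))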

Model Size

  • Parameters: 0.5B
  • Disk Size: ~2GB
  • Recommended GPU: 8GB+ VRAM (CPU-only inference also works; see the sketch below)
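
If 8GB of VRAM is not available, the 0.5B weights also run on CPU, just more slowly. A minimal sketch, assuming the same transformers and torch imports as in the Usage section:

# Omitting device_map keeps the weights on CPU; float32 is the safest dtype there.
model = AutoModelForCausalLM.from_pretrained(
    "vindows/qwen2.5-0.5b-text-to-sql-merged",
    torch_dtype=torch.float32,
    trust_remote_code=True,
)
# The rest of the Usage example works unchanged (model.device will be CPU).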

Limitations

See the LoRA adapter model card for detailed limitations and recommendations.

License

Apache 2.0

Base Model

Qwen/Qwen2.5-0.5B