Qwen2.5-7B Merged Model for Text-to-SQL

This is a fully merged model (base + LoRA) ready for direct use. No need to load adapters separately!
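
How the merge was produced is not documented in this card, but a merged checkpoint like this is typically created by loading the base model, attaching the LoRA adapter with PEFT, and folding the adapter weights in with merge_and_unload(). A minimal sketch (the adapter path below is a placeholder, not the actual adapter repo):

from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the original base model in bf16
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    torch_dtype=torch.bfloat16,
)

# Attach the LoRA adapter ("path/to/text-to-sql-lora" is a placeholder)
# and fold its weights into the base model
merged = PeftModel.from_pretrained(base, "path/to/text-to-sql-lora").merge_and_unload()

# Save a standalone checkpoint that no longer needs PEFT at inference time
merged.save_pretrained("qwen2.5-7b-text-to-sql-merged")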

Performance

Spider Benchmark (200 examples)

Metric               Score
-------------------  --------
Exact Match          0.00%
Normalized Match     0.50%
Component Accuracy   92.60%
Average Similarity   25.47%
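
The evaluation harness behind these numbers is not included in this card, so treat the following as a rough illustration only: a "normalized match" of the kind reported above could compare queries after lowercasing, collapsing whitespace, and dropping trailing semicolons. The normalization rules here are assumptions, not the official Spider evaluation.

import re

def normalize_sql(sql: str) -> str:
    # Crude canonicalization: lowercase, collapse whitespace, drop a trailing ';'
    sql = re.sub(r"\s+", " ", sql.lower()).strip()
    return sql.rstrip(";").strip()

def normalized_match(predicted: str, gold: str) -> bool:
    # True if both queries are identical after canonicalization
    return normalize_sql(predicted) == normalize_sql(gold)

print(normalized_match("SELECT COUNT(*)  FROM singer ;", "select count(*) from singer"))  # True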

Training Metrics

Metric       Base     Fine-tuned   Improvement
-----------  -------  -----------  -------------
Loss         2.1301   0.4098       80.76% ⬆️
Perplexity   8.4155   1.5064       82.10% ⬆️
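
Perplexity in this table is simply the exponential of the cross-entropy loss, which is why the two rows move together:

import math

base_loss, finetuned_loss = 2.1301, 0.4098
print(math.exp(base_loss))       # ≈ 8.4155  (base perplexity)
print(math.exp(finetuned_loss))  # ≈ 1.5064  (fine-tuned perplexity)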

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the merged checkpoint directly; no adapter loading required
model = AutoModelForCausalLM.from_pretrained(
    "vindows/qwen2.5-7b-text-to-sql-merged",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(
    "vindows/qwen2.5-7b-text-to-sql-merged",
    trust_remote_code=True
)

# Generate SQL from natural language
prompt = """Convert the following natural language question to SQL:

Database: concert_singer
Question: How many singers do we have?

SQL:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy decoding; temperature is ignored when sampling is off
result = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract SQL (remove prompt and extra text)
sql = result.split("SQL:")[-1].strip().split('\n\n')[0]
print(sql)
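
A quick sanity check is to execute the generated query against the target database. The snippet below is a sketch: "concert_singer.sqlite" is a placeholder path for a local copy of the Spider concert_singer database, which is not shipped with this model.

import sqlite3

conn = sqlite3.connect("concert_singer.sqlite")  # placeholder path, adjust to your local copy
try:
    rows = conn.execute(sql).fetchall()
    print(rows)
except sqlite3.Error as exc:
    # Generated SQL can be invalid, so handle execution errors explicitly
    print(f"Generated SQL failed to execute: {exc}")
finally:
    conn.close()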

Model Size

  • Parameters: 7B
  • Disk Size: ~16GB
  • Recommended GPU: 24GB+ VRAM for bf16 inference (see the 4-bit loading sketch below for smaller GPUs)
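
If 24GB of VRAM is not available, 4-bit quantization via bitsandbytes usually brings the memory footprint down to roughly 5-6GB. This is a generic sketch, not something validated for this particular checkpoint:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization; requires the bitsandbytes package and a CUDA GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "vindows/qwen2.5-7b-text-to-sql-merged",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)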

Limitations

See the LoRA adapter model card for detailed limitations and recommendations.

License

Apache 2.0
