🛠️ Fine-tuned Tool-Calling LLM (LoRA + Merged) for Geospatial Operations
This repository contains a fine-tuned Large Language Model (LLM) capable of structured tool/function calling, optimized for integration with backend services, such as Model Context Protocol (MCP) servers, for geospatial, file-analysis, and automation tasks.
Note: This repository contains only the LoRA adapter of the model. You can merge it with the Qwen2.5-Coder-1.5B-Instruct base model.
Model Structure
lora_model/
LoRA (Low-Rank Adaptation) adapters and tokenizer files for efficient fine-tuning and inference.
- adapter_model.safetensors, adapter_config.json – LoRA adapter weights and configuration.
- tokenizer.json, tokenizer_config.json, special_tokens_map.json, vocab.json – Tokenizer files.
- chat_template.jinja, merges.txt, added_tokens.json – Chat formatting and tokenization files (if applicable).
merged_model/
The base model merged with LoRA adapters for direct use without extra configuration.
- model-00001-of-00002.safetensors, model-00002-of-00002.safetensors – Model weights (split).
- config.json, generation_config.json – Model and generation configs.
- tokenizer.json, tokenizer_config.json, special_tokens_map.json, vocab.json – Tokenizer files.
- chat_template.jinja, merges.txt, added_tokens.json – Chat formatting and tokenization files (if applicable).
How to Use
Loading the Merged Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("path/to/merged_model")
tokenizer = AutoTokenizer.from_pretrained("path/to/merged_model")

# Example inference
input_text = "Analyze the file C:/data/image.tif"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
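Qwen2.5 instruct-style models are trained on the ChatML conversation format, so prompts are normally wrapped with the tokenizer's chat template (`tokenizer.apply_chat_template(...)`) rather than passed as raw text. As a minimal sketch of what that wrapping produces (the system message here is an assumption for illustration, not necessarily the one used in fine-tuning):

```python
# Builds a ChatML-formatted prompt by hand so the structure is visible.
# In practice, prefer tokenizer.apply_chat_template(messages, tokenize=False,
# add_generation_prompt=True), which reads the bundled chat_template.jinja.
def build_chatml_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt("Analyze the file C:/data/image.tif")
print(prompt)
```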
Loading with LoRA Adapter (Optional)
If you want to use the LoRA adapter separately:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("path/to/lora_model")
model = PeftModel.from_pretrained(base_model, "path/to/lora_model")
```

Inference then proceeds exactly as with the merged model.
Intended Use
- Natural-language-to-tool-calling JSON conversion for backend automation.
- Geospatial and image operations via MCP or custom tools.
- Easily extensible to new tool schemas.
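The extensibility point above can be sketched as a registry mapping tool names to handler functions, where registering a new tool is one dictionary entry. The `crop_image` handler and its parameters below are hypothetical stand-ins for a real MCP backend tool:

```python
# Minimal sketch of an extensible tool registry. The crop_image handler and
# its signature are hypothetical stand-ins for a real MCP tool implementation.
def crop_image(filepath: str, minx: float, miny: float,
               maxx: float, maxy: float) -> str:
    return f"cropped {filepath} to ({minx}, {miny}, {maxx}, {maxy})"

TOOL_REGISTRY = {"crop_image": crop_image}  # add new tools here

def dispatch(tool_name: str, params: dict):
    """Route a parsed tool call to its registered handler."""
    handler = TOOL_REGISTRY.get(tool_name)
    if handler is None:
        raise KeyError(f"unknown tool: {tool_name}")
    return handler(**params)

result = dispatch("crop_image", {"filepath": "C:/images/sample.tif",
                                 "minx": 0, "miny": 0, "maxx": 10, "maxy": 10})
print(result)
```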
Example Input/Output
Prompt:
Crop the image C:/images/sample.tif to bounding box [xmin, ymin, xmax, ymax].
Model Output:
TOOL_NEEDED: crop_image
PARAMS: {"filepath": "C:/images/sample.tif", "minx": ..., "miny": ..., "maxx": ..., "maxy": ...}
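Because the model emits plain `TOOL_NEEDED` / `PARAMS` lines rather than a formal API response, the backend needs a small parsing step before dispatch. A sketch, assuming this two-line output format; the concrete coordinate values are made up for illustration, since the example above elides them:

```python
import json
import re

def parse_tool_call(model_output: str):
    """Extract (tool_name, params_dict) from TOOL_NEEDED / PARAMS output,
    or return None if the output does not match the expected format."""
    tool_match = re.search(r"TOOL_NEEDED:\s*(\S+)", model_output)
    params_match = re.search(r"PARAMS:\s*(\{.*\})", model_output, re.DOTALL)
    if not tool_match or not params_match:
        return None
    return tool_match.group(1), json.loads(params_match.group(1))

output = (
    'TOOL_NEEDED: crop_image\n'
    'PARAMS: {"filepath": "C:/images/sample.tif", '
    '"minx": 0, "miny": 0, "maxx": 100, "maxy": 100}'
)
name, params = parse_tool_call(output)
print(name, params["filepath"])
```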
Training Details
Fine-tuned for structured function-calling on domain-specific data.
Supports both merged and LoRA-adapter workflows.
For technical backend details, see the project repository.
Citation
@misc{finetuned_llm_mcp_2025,
  author = {İsmail Emre Candan},
  title  = {Fine-tuned Tool-Calling LLM (LoRA + Merged)},
  year   = {2025},
  url    = {https://github.com/EmreCandan0/fine-tuned-llm-tool-calling}
}
Model tree: emrecandan0/Qwen2.5-MCP-Tool-Calling (base model: Qwen/Qwen2.5-1.5B)