mini-coder-1.7b

mini-coder-1.7b is a 1.7B-parameter model distilled from Qwen 3 Coder 30B-A3B. It punches well above its weight, outperforming SWE-agent-LM 7B on SWE-bench Verified (bash-only setting):

| Model | pass@1 | pass@100 |
|---|---|---|
| Qwen 3 Coder 30B-A3B | 33.2 | 67.4 |
| mini-swe-4b | 26.8 | 60.2 |
| gpt-oss-120b | 26.0 | – |
| mini-swe-1.7b | 18.6 | 50.4 |
| SWE-agent-LM 7B | 15.2 | – |
| Qwen 3 4B Instruct 2507 | 4.0 | 25.1 |

It was trained on 400k trajectories generated with the lightweight mini-swe-agent scaffolding on the SWE-smith dataset of GitHub issues.

Unlike existing agentic SWE models, the mini-coder models can be post-trained on a single 80GB GPU, or an even smaller one. They work seamlessly with mini-swe-agent, a lightweight, scalable, and developer-friendly agentic framework well suited to RL fine-tuning. And because they are dense rather than MoE models, they benefit from a more mature fine-tuning ecosystem.

Example usage: Generating SWE-bench trajectories with mini-swe-agent and vLLM

This example shows how to generate SWE-bench trajectories using mini-swe-agent as the agentic scaffolding (recommended) and vLLM as the local inference engine.

First, launch a vLLM server with your chosen model. For example:

```shell
vllm serve ricdomolm/mini-coder-1.7b &
```

By default, the server will be available at http://localhost:8000.
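Before wiring up the agent, it can help to sanity-check that the server speaks the OpenAI-compatible chat API. Below is a minimal sketch: the `build_chat_request` helper and the prompt are illustrative (not part of mini-swe-agent), and the commented-out POST assumes the server from the previous step is running on the default port.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/v1"  # adjust if using a non-default port/address

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload for the vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

payload = build_chat_request("ricdomolm/mini-coder-1.7b", "Write a hello-world in bash.")

# With the server up, POST the payload to the chat-completions endpoint:
# req = urllib.request.Request(
#     f"{API_BASE}/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req)))
```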

Second, edit the mini-swe-agent SWE-bench config file located in src/minisweagent/config/extra/swebench.yaml to include your local vLLM model:

```yaml
model:
  model_name: "hosted_vllm/ricdomolm/mini-coder-1.7b"  # or hosted_vllm/path/to/local/model
  model_kwargs:
    api_base: "http://localhost:8000/v1"  # adjust if using a non-default port/address
```
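The leading `hosted_vllm/` segment of `model_name` is litellm's provider prefix: it tells litellm to route the request to an OpenAI-compatible vLLM endpoint, and the remainder is the model path sent to the server. A quick sketch of how the prefix splits off (`split_provider` is a hypothetical helper, shown only to illustrate the naming convention):

```python
def split_provider(model_name: str) -> tuple[str, str]:
    """Split a litellm-style model name into (provider prefix, model path)."""
    provider, _, path = model_name.partition("/")
    return provider, path

provider, model = split_provider("hosted_vllm/ricdomolm/mini-coder-1.7b")
# provider -> "hosted_vllm"; model -> "ricdomolm/mini-coder-1.7b"
```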

Third, create a litellm registry.json file:

```shell
cat > registry.json <<'EOF'
{
  "ricdomolm/mini-coder-1.7b": {
    "max_tokens": 40960,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "hosted_vllm",
    "mode": "chat"
  }
}
EOF
```
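A malformed registry is easy to miss, so it can be worth double-checking that the file parses and carries the fields used above before pointing LITELLM_MODEL_REGISTRY_PATH at it. A sketch (the JSON is embedded here so the check is self-contained; the required-key set simply mirrors the entry above, not an official litellm schema):

```python
import json

# The registry entry created above, embedded for a self-contained check.
registry_text = """
{
  "ricdomolm/mini-coder-1.7b": {
    "max_tokens": 40960,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "hosted_vllm",
    "mode": "chat"
  }
}
"""

REQUIRED_KEYS = {"max_tokens", "litellm_provider", "mode"}

registry = json.loads(registry_text)
for model_name, entry in registry.items():
    missing = REQUIRED_KEYS - entry.keys()
    assert not missing, f"{model_name} is missing keys: {missing}"
```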

Now you’re ready to generate trajectories! Let's solve the django__django-11099 instance of SWE-bench Verified:

```shell
LITELLM_MODEL_REGISTRY_PATH=registry.json mini-extra swebench --output test/ --subset verified --split test --filter '^(django__django-11099)$'
```

You should now see the generated trajectory in the test/ directory.
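The --filter argument is a regular expression matched against instance IDs, so the same flag can select batches of instances as well as single ones. A quick illustration (the instance-ID list and the broader pattern are hypothetical examples):

```python
import re

instance_ids = ["django__django-11099", "django__django-11133", "astropy__astropy-12907"]

# The exact-match pattern used in the command above:
exact = [i for i in instance_ids if re.match(r"^(django__django-11099)$", i)]

# A broader (hypothetical) filter selecting every django instance:
django_only = [i for i in instance_ids if re.match(r"^django__django-", i)]
```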
