A3-Qwen3.5-9B is a 9B multimodal web agent fine-tuned from Qwen/Qwen3.5-9B on A3-Synth, a synthetic dataset generated using the Agent-as-Annotators (A3) framework.

The model achieves 41.5% on WebArena, surpassing closed-source models such as Claude 3.5 Sonnet (36.0%) and GPT-4o (31.5%) under the same evaluation protocol.

Usage

Serve with vLLM (the flags below shard the model across 2 GPUs and cap the context at 65,536 tokens):

vllm serve McGill-NLP/A3-Qwen3.5-9B --tensor-parallel-size 2 --max-model-len 65536 --enforce-eager --dtype bfloat16
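Once the server is up, it exposes an OpenAI-compatible endpoint on port 8000 by default. A minimal client sketch using only the standard library (the prompt, base URL, and helper names below are illustrative, not part of the A3 release):

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "McGill-NLP/A3-Qwen3.5-9B") -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload (hypothetical helper)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
        "max_tokens": 512,
    }

def query(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST the request to the local vLLM server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server from the command above running, `query("Click the 'Submit' button on the current page.")` returns the model's text reply.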

Training

  • Base model: Qwen/Qwen3.5-9B
  • Data: A3-Synth (16k examples from Gemini 3 Pro trajectories)
  • Method: SFT with FSDP
  • Max sequence length: 16,384
  • Learning rate: 1e-5
  • Epochs: 2
  • Batch size: 1 per GPU, gradient accumulation 4
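The hyperparameters above can be collected into a single config; this is a plain-Python sketch (the FSDP and trainer wiring are omitted, and none of these key names come from the actual A3 training scripts):

```python
# Hyperparameters as listed on this card; the dict keys are illustrative.
SFT_CONFIG = {
    "base_model": "Qwen/Qwen3.5-9B",
    "dataset": "A3-Synth",          # 16k examples
    "max_seq_length": 16_384,
    "learning_rate": 1e-5,
    "num_epochs": 2,
    "per_device_batch_size": 1,
    "gradient_accumulation_steps": 4,
}

def effective_batch_size(cfg: dict, num_gpus: int) -> int:
    """Effective global batch = per-device batch x grad-accum steps x data-parallel GPUs."""
    return cfg["per_device_batch_size"] * cfg["gradient_accumulation_steps"] * num_gpus

# e.g. on 8 GPUs the effective global batch size would be 1 * 4 * 8 = 32
```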

Model Variants

Model          Parameters  Link
A3-Qwen3.5-9B  9B          McGill-NLP/A3-Qwen3.5-9B
A3-Qwen3.5-4B  4B          McGill-NLP/A3-Qwen3.5-4B
A3-Qwen3.5-2B  2B          McGill-NLP/A3-Qwen3.5-2B
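All three variants follow the same repo naming scheme, so selecting one by parameter budget is a one-line lookup. A small sketch (the `repo_for` helper is hypothetical; the commented-out `transformers` call shows typical loading but is not executed here):

```python
# Map of released variants to their Hugging Face repo ids (from the table above).
VARIANTS = {
    "9B": "McGill-NLP/A3-Qwen3.5-9B",
    "4B": "McGill-NLP/A3-Qwen3.5-4B",
    "2B": "McGill-NLP/A3-Qwen3.5-2B",
}

def repo_for(size: str) -> str:
    """Return the repo id for a given parameter size, e.g. '4B'."""
    try:
        return VARIANTS[size]
    except KeyError:
        raise ValueError(f"unknown variant {size!r}; choose from {sorted(VARIANTS)}")

# Loading a variant with transformers (not run here; requires a GPU and a download):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# model = AutoModelForCausalLM.from_pretrained(repo_for("9B"), torch_dtype="bfloat16")
# tokenizer = AutoTokenizer.from_pretrained(repo_for("9B"))
```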

Citation

@misc{lu2026structured,
      title={Structured Distillation of Web Agent Capabilities Enables Generalization}, 
      author={Xing Han Lù and Siva Reddy},
      year={2026},
      eprint={2604.07776},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2604.07776}, 
}