---
library_name: transformers
base_model:
- Qwen/Qwen3-VL-2B-Instruct
pipeline_tag: text-generation
---

# Qwen3-VLTO-1.7B-Instruct
Qwen3-VL-2B-Instruct with the vision components removed (Vision Language Text Only). It functions exactly like a text-only Qwen3 model.
To build it, I simply imported the weights from the VL model's language backbone into a text-only Qwen3 model via PyTorch's `load_state_dict`; the text architectures are essentially identical.
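
A minimal sketch of that weight transfer. The `text_config` nesting and the state-dict key prefixes (`model.visual.`, `model.language_model.`) are assumptions that may vary between transformers versions, so inspect the actual keys before relying on the renaming:

```python
import torch
from transformers import AutoModelForImageTextToText, Qwen3Config, Qwen3ForCausalLM

# Load the full vision-language checkpoint (weights only, no generation needed).
vl_model = AutoModelForImageTextToText.from_pretrained(
    "Qwen/Qwen3-VL-2B-Instruct", torch_dtype=torch.bfloat16
)

# Build an empty text-only Qwen3 model whose dimensions mirror the VL text backbone.
# NOTE: `text_config` is an assumption about how the Qwen3-VL config is nested;
# inspect vl_model.config if the attribute is named differently in your version.
text_config = Qwen3Config(**vl_model.config.text_config.to_dict())
text_model = Qwen3ForCausalLM(text_config).to(torch.bfloat16)

# Copy every language-model weight and drop the vision tower / projector.
# The key prefixes below are assumptions; print a few keys from vl_state and
# adjust the filtering/renaming to match what your transformers version emits.
vl_state = vl_model.state_dict()
text_state = {}
for key, tensor in vl_state.items():
    if key.startswith("model.visual."):
        continue  # vision encoder weights are not needed in the text-only model
    text_state[key.replace("model.language_model.", "model.")] = tensor

# strict=False so leftover mismatches surface as lists instead of raising.
missing, unexpected = text_model.load_state_dict(text_state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

text_model.save_pretrained("Qwen3-VLTO-1.7B-Instruct")
```

If both `missing keys` and `unexpected keys` come back empty, the text backbone transferred cleanly and the saved folder can be loaded with a plain `Qwen3ForCausalLM`.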