---
library_name: transformers
base_model:
- Qwen/Qwen3-VL-2B-Instruct
pipeline_tag: text-generation
---

<center><img style="height:100px" src="/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F66d78facde54fea8a009927e%2Fn2pVs6-SPT0XsAyZPLKOY.png"></center>

# Qwen3-VLTO-1.7B-Instruct

Qwen3-VL-2B-Instruct with the vision components removed (**V**ision **L**anguage **T**ext **O**nly). It functions exactly like a text-only Qwen3 model.

To do this, I simply imported the weights from the VL model into the text model via PyTorch's `load_state_dict`; apart from the vision components, the two architectures are essentially identical.
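A minimal sketch of the idea, using toy modules rather than Qwen3's actual layout (the class and attribute names here are illustrative assumptions): the VL state dict is filtered down to the keys the text-only model expects, and the vision weights are simply dropped.

```python
import torch
from torch import nn

# Toy stand-in for the text-only model: just the language components.
class TextModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(32, 8)
        self.lm_head = nn.Linear(8, 32)

# Toy stand-in for the VL model: same language components plus a vision tower.
class VLModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(32, 8)
        self.lm_head = nn.Linear(8, 32)
        self.visual = nn.Linear(8, 8)  # vision component to be discarded

vl = VLModel()
text = TextModel()

# Keep only the keys the text model expects; vision weights are dropped.
text_keys = set(text.state_dict())
filtered = {k: v for k, v in vl.state_dict().items() if k in text_keys}

# strict=False tolerates any keys that do not line up exactly
missing, unexpected = text.load_state_dict(filtered, strict=False)

# The shared language weights now match the VL model's.
print(torch.equal(text.embed.weight, vl.embed.weight))
```

In the real conversion the key names of the shared submodules have to line up between the two checkpoints; `load_state_dict` returns the `missing_keys` and `unexpected_keys` lists, which make it easy to verify that nothing but the vision weights was left behind.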