Adding `transformers` as the library name
#14 opened 10 days ago by ariG23498
How much VRAM is needed to run this model? 8x RTX 3090 = 192 GB isn't enough to run the context.
#12 opened 24 days ago by kq
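As a rough sanity check on the question above: memory is dominated by the weights plus the KV cache, and a back-of-the-envelope estimate already shows why 192 GB can fall short. The sketch below is illustrative only; the parameter count, layer/head figures, and context length are assumptions rather than official specs, and the real values should be taken from the checkpoint's config.json.

```python
# Rough VRAM estimate: weights + KV cache (activations and CUDA overhead ignored).
# All model figures below are illustrative assumptions, not official Qwen3-VL specs.

def weight_memory_gb(n_params_b: float, bytes_per_param: int = 2) -> float:
    """Memory for model weights in GB (bf16/fp16 uses 2 bytes per parameter)."""
    return n_params_b * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache for one sequence: 2 (K and V) * layers * kv_heads * head_dim * tokens."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Assumed figures for a very large MoE checkpoint served in bf16.
weights = weight_memory_gb(n_params_b=235)                 # weights alone, in GB
kv = kv_cache_gb(n_layers=94, n_kv_heads=4, head_dim=128,  # assumed architecture
                 context_len=262_144)                      # assumed 256K-token context
print(f"weights ~ {weights:.0f} GB, KV cache ~ {kv:.0f} GB, total ~ {weights + kv:.0f} GB")
```

With these assumed figures, bf16 weights alone already exceed 192 GB before any KV cache or activation memory is counted; plugging in the checkpoint's real config values gives a tighter bound.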
Model outputs messy code when run with the demo code.
#11 opened 26 days ago by kk3dmax
No output_router_logits / load_balancing_loss_func for Qwen3VLMoE?
#10 opened 27 days ago by plcedoz38
Best Practices for Evaluating the Qwen3-VL Model
#9 opened 28 days ago by Yunxz
Adding offline and online inference code via vLLM
#8 opened 28 days ago by hrithiksagar-tih
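For reference alongside the thread above, a minimal offline-inference sketch with vLLM might look like the following. The checkpoint id, image URL, parallelism settings, and prompt contents are placeholders, not the code actually proposed in the thread.

```python
# Minimal vLLM offline-inference sketch; repo id and image URL are placeholders.
from vllm import LLM, SamplingParams

MODEL = "Qwen/Qwen3-VL-235B-A22B-Instruct"  # assumed repo id; use the actual checkpoint

llm = LLM(model=MODEL, tensor_parallel_size=8, max_model_len=32768)
sampling = SamplingParams(temperature=0.7, max_tokens=512)

# llm.chat() applies the model's own chat template, so no hand-written image tokens needed.
messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
        {"type": "text", "text": "Describe this image."},
    ],
}]

outputs = llm.chat(messages, sampling_params=sampling)
print(outputs[0].outputs[0].text)
```

For online inference, the same checkpoint can be served with `vllm serve <repo-id>` and queried through any OpenAI-compatible client.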
FP8/4-bit version please
#7 opened 29 days ago by zhanghx0905
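While waiting for an official FP8 or 4-bit release, on-the-fly 4-bit quantization with bitsandbytes is a common stopgap. The sketch below is an assumption-laden illustration rather than an official recipe: the repo id is a placeholder, the AutoModelForImageTextToText class is assumed to cover this checkpoint, and a very large MoE model may still not fit on a single node even at 4 bits.

```python
# On-the-fly 4-bit (NF4) quantization sketch; repo id and auto class are assumptions.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor, BitsAndBytesConfig

MODEL = "Qwen/Qwen3-VL-235B-A22B-Instruct"  # assumed repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 typically loses less quality than fp4
    bnb_4bit_compute_dtype=torch.bfloat16,  # keep matmuls in bf16
)

processor = AutoProcessor.from_pretrained(MODEL)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL,
    quantization_config=bnb_config,
    device_map="auto",  # shard the quantized weights across available GPUs
)
```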
🔥🔥🔥 Chinese test video
#6 opened 30 days ago by leo009
32B version?
#5 opened 30 days ago by sanak
Adding `transformers` library tag.
#3 opened 30 days ago by ariG23498
Citation section lacks a Qwen3-VL-specific citation
#1 opened about 1 month ago by jaxchang