Can it run on the CPU after 4-bit quantization? What is the speed?
#6 by likewendy - opened
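On the title question: 4-bit quantization itself is CPU-friendly, since it just maps each block of weights to 16 integer levels plus a scale. A minimal NumPy sketch of blockwise absmax 4-bit quantization (an illustration only, not the quantizer any particular model release uses):

```python
import numpy as np

def quantize_4bit(w, block=32):
    """Blockwise absmax quantization: each block of weights is mapped
    to 16 signed integer levels (-8..7) with one fp32 scale per block."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Reconstruct approximate fp weights from int4 codes and scales.
    return (q * scale).reshape(-1)

np.random.seed(0)
w = np.random.randn(1024).astype(np.float32)
q, s = quantize_4bit(w)
err = np.abs(dequantize_4bit(q, s) - w).max()
print(f"max abs reconstruction error: {err:.4f}")
```

Actual speed depends on the runtime's integer kernels, not on the math above; CPU inference engines store the int4 codes packed two per byte and dequantize on the fly inside the matmul.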
Are there plans to release smaller models?
Try LFM2-VL. It's small and good enough.
> Try LFM2-VL. It's small and good enough.
I took a quick look. It's certainly small enough, roughly the same size as SmolVLM.
I'm just not sure about its fine-tuning ecosystem and benchmarks.
I've never seen this model, haha.