👁️ LFM2-VL Collection • LFM2-VL is our first series of vision-language models, designed for on-device deployment • 10 items
InstructVLA Collection • Paper, data, and checkpoints for "InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation" • 14 items • Updated Sep 17, 2025
InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation • Paper • 2507.17520 • Published Jul 23, 2025