👁️ LFM2-VL Collection • LFM2-VL is our first series of vision-language models, designed for on-device deployment. • 10 items
InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy • Paper • arXiv:2510.13778 • Published Oct 15
ShuaiYang03/instructvla_finetune_v2_xlora_freeze_head_instruction_state • Robotics • Updated Sep 22
ShuaiYang03/instructvla_pretraining_v2_libero_goal_wrist-image_aug • Robotics • Updated Sep 18 • (download sketch after this list)
InstructVLA Collection • Paper, data, and checkpoints for "InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation" • 14 items • Updated Sep 17
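The two ShuaiYang03 repositories above are hosted as regular Hugging Face model repos, so their checkpoints can be fetched with `huggingface_hub`. A minimal sketch, assuming the repos are public (no access token needed) and default cache behavior; the repo IDs are taken verbatim from the listing:

```python
# Sketch: download the InstructVLA checkpoints listed above.
# Assumes public repos and the default HF cache directory.
from huggingface_hub import snapshot_download

REPO_IDS = [
    "ShuaiYang03/instructvla_finetune_v2_xlora_freeze_head_instruction_state",
    "ShuaiYang03/instructvla_pretraining_v2_libero_goal_wrist-image_aug",
]

for repo_id in REPO_IDS:
    # snapshot_download returns the local path of the cached repo snapshot.
    local_dir = snapshot_download(repo_id=repo_id)
    print(f"{repo_id} -> {local_dir}")
```

The full collection (paper, data, and all 14 items) is gathered under the InstructVLA Collection entry above.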