GraspVLA: The First End-to-End Embodied Grasping Model by Galbot

Galbot (银河通用) has just dropped GraspVLA, the world's first end-to-end embodied grasping model! Trained on over a billion vision-language-action pairs, GraspVLA is ready to handle a wide range of grasping tasks, even unexpected ones. And with a little post-training, it adapts quickly from just a few real-world examples. Check out a few of its performances below!

Lighting? No problem. GraspVLA grips with confidence even in complete darkness.

Backgrounds? Whether it's a shiny surface or a clutter of textures, the model stays on target.

Height variations? It manages objects at varying heights without issues.

Action strategy? It adjusts its grasping strategy on the fly, as if it had a sixth sense.

Interference? Distractions won’t shake its focus. Tasks get done.

Unknown objects? No worries, GraspVLA can handle the unfamiliar with ease.

GraspVLA’s ability to adapt to different scenarios is seriously next-level.

Posted by Tuo Liu
