VLA - Vision Language Action models

https://arxiv.org/abs/2406.09246

It turns out you can take a vision-language foundation model with broad visual and textual knowledge and fine-tune it to output robot actions given a sequence of images and previous actions.

This approach outperforms prior generalist robot policies by a wide margin and transfers across tasks.
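
For context, the usual recipe in this line of work (RT-2, and the linked OpenVLA paper) is to discretize each continuous action dimension into a fixed number of bins and map the bin indices onto tokens in the language model's vocabulary, so that predicting a robot action becomes ordinary next-token prediction. Below is a rough sketch of that action tokenization step; the 7-DoF layout, action bounds, and bin count here are illustrative assumptions, not values taken from the paper.

  import numpy as np

  # Hypothetical per-dimension bounds for a 7-DoF arm action
  # (xyz delta, roll/pitch/yaw delta, gripper open/close).
  ACTION_LOW  = np.array([-0.05, -0.05, -0.05, -0.3, -0.3, -0.3, 0.0])
  ACTION_HIGH = np.array([ 0.05,  0.05,  0.05,  0.3,  0.3,  0.3, 1.0])
  N_BINS = 256  # each action dimension becomes one of 256 discrete tokens

  def action_to_tokens(action: np.ndarray) -> np.ndarray:
      """Map a continuous action vector to discrete bin indices (token ids)."""
      norm = (action - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)  # -> [0, 1]
      return np.clip((norm * N_BINS).astype(int), 0, N_BINS - 1)

  def tokens_to_action(tokens: np.ndarray) -> np.ndarray:
      """Invert the mapping: bin index -> bin center in the original range."""
      norm = (tokens + 0.5) / N_BINS
      return ACTION_LOW + norm * (ACTION_HIGH - ACTION_LOW)

  if __name__ == "__main__":
      a = np.array([0.01, -0.02, 0.0, 0.1, 0.0, -0.2, 1.0])
      toks = action_to_tokens(a)
      print(toks)                    # e.g. [153  76 128 170 128  42 255]
      print(tokens_to_action(toks))  # close to the original, up to bin width

At inference time the fine-tuned model emits one such token per action dimension, which the robot controller de-tokenizes back into a continuous command.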



