It turns out you can take a vision-language foundation model that has a broad understanding of visual and textual knowledge and fine-tune it to output robot actions given camera images and a natural-language instruction.
This approach beats prior generalist robot policies by a wide margin and transfers across tasks and robot embodiments.
https://arxiv.org/abs/2406.09246
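For intuition, here's a minimal sketch of the core trick (my own illustration, not the paper's code): continuous robot actions are discretized into a fixed number of bins, each bin is mapped to a reserved token id in the VLM's vocabulary, and the model is then fine-tuned with ordinary next-token prediction. The bin count (256), the normalization range, and the `vocab_offset` parameter are all assumptions for this sketch.

```python
# Sketch of RT-2-style action tokenization (illustrative, not the paper's exact code).
import numpy as np

N_BINS = 256                          # assumption: 256 bins per action dimension
ACTION_LOW, ACTION_HIGH = -1.0, 1.0   # assumption: actions normalized to [-1, 1]

def actions_to_tokens(action: np.ndarray, vocab_offset: int) -> np.ndarray:
    """Map a continuous action vector to discrete token ids.

    `vocab_offset` is the id of the first of N_BINS reserved tokens in the
    VLM vocabulary (a hypothetical parameter for this sketch).
    """
    clipped = np.clip(action, ACTION_LOW, ACTION_HIGH)
    # Scale to [0, N_BINS - 1] and round to the nearest bin index.
    bins = np.round(
        (clipped - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW) * (N_BINS - 1)
    ).astype(np.int64)
    return bins + vocab_offset

def tokens_to_actions(tokens: np.ndarray, vocab_offset: int) -> np.ndarray:
    """Invert the mapping: decode predicted token ids back to bin centers."""
    bins = tokens - vocab_offset
    return ACTION_LOW + bins / (N_BINS - 1) * (ACTION_HIGH - ACTION_LOW)

# Usage: a 7-DoF action (xyz delta, rotation delta, gripper) round-trips
# through the tokenizer with error bounded by half a bin width.
action = np.array([0.12, -0.55, 0.0, 0.3, -0.9, 0.77, 1.0])
tokens = actions_to_tokens(action, vocab_offset=32000)
print(tokens)
print(tokens_to_actions(tokens, vocab_offset=32000))
```

Because the actions live in the same token space as text, no new output head is needed; the VLM's existing language-modeling machinery does all the work.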