Come on, don't you see that the capability to understand the physical world that Sora demonstrates is exactly what we need to develop those household robots? All these genAI products only look like toys because they're technology demonstrators. They're all steps on the way to AGI and androids.
Sensorimotor control is imo not the bottleneck at all. Teleoperated androids could do lots of useful things right now; it's the AI to automate them that's lacking.
Well, let's say you want to make coffee, and we split that task into roughly two subtasks. The first is to imagine what motions are necessary to do it: how the coffee cup has to move, how your hand has to move to grasp it, and so on. The second is to find a way to use your muscles or actuators to execute those imagined motions.
I claim that the first part is the more difficult one, and that's where the bottleneck currently is. Furthermore, generative video AI is exactly the kind of thing that would give a model an understanding of what has to happen in order to make coffee.