Hmm, I have to disagree. Driving a car is a walk in the park compared to AGI; they're not even in the same ballpark. For driving we at least have an idea of how to make it happen: we need better tech, more training data, and maybe some infrastructure changes to handle the edge cases. AGI? We're guessing at this point (at least going by what's public).
In the average situation, not the edge case. The way to think about it is: "If I had a black-box oracle that could drive a car exactly like a human, could I use it to simulate an artificial general intelligence?"
The answer is probably yes. For example, to "ask" the oracle a yes-or-no question X, you could contrive for the car to find itself at a fork in the road with a sign that says, "If the answer to X is 'yes', then the road to the right is closed. Otherwise, the road to the left is closed." Whichever road the car takes tells you the answer.
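(To make the reduction concrete, here's a toy sketch. The names DrivingOracle and ask, and the stub oracle, are mine and purely illustrative; a real oracle is exactly the part nobody knows how to build.)

    #include <functional>
    #include <iostream>
    #include <string>

    // The route the car takes through a staged scenario.
    enum class Route { Left, Right };

    // Black-box oracle: given a described scenario, it drives exactly like
    // a human, and all we observe is which road it takes.
    using DrivingOracle = std::function<Route(const std::string&)>;

    // Reduce an arbitrary yes/no question to a driving problem: stage a
    // fork whose signage forces a human-level driver to reveal the answer
    // through its choice of road.
    bool ask(const DrivingOracle& drive, const std::string& question) {
        const std::string scenario =
            "Fork ahead. Roadsign: \"If the answer to '" + question +
            "' is yes, the road to the right is closed. "
            "Otherwise, the road to the left is closed.\"";
        // A driver that understands the sign takes the open road, so the
        // route it picks encodes the answer.
        return drive(scenario) == Route::Left;  // went left => right closed => yes
    }

    int main() {
        // Stand-in for demonstration only.
        DrivingOracle stub = [](const std::string&) { return Route::Left; };
        std::cout << std::boolalpha << ask(stub, "Is P = NP?") << "\n";  // true
    }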
What does "drive a car exactly like a human" mean?
I'm going to assert that you don't need AGI for self-driving cars. In your example: that's not how driving works. The driver - even a human one - isn't expected to answer random questions while driving.
>The driver - even a human one - isn't expected to answer random questions while driving.
In the same way, the C++ compiler was never expected to emulate arbitrary programs at compile time (i.e., for its template system to be Turing complete), but some ingenious people found a way to make it do just that. They did it by writing extremely unusual, edge-casey code, but you can't just wave that aside and say "C++ isn't really Turing complete, because the proof that it's Turing complete involves C++ code that nobody would really write!"
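(For the curious, the mechanism behind that result is template metaprogramming: the compiler computes values while instantiating types. A toy compile-time factorial below - not the original proof, which famously coaxed the compiler into emitting prime numbers in its error messages:)

    #include <iostream>

    // Compile-time factorial via recursive template instantiation: the
    // compiler itself performs the computation while resolving the types.
    template <unsigned N>
    struct Factorial {
        static constexpr unsigned long long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {  // base case stops the recursion
        static constexpr unsigned long long value = 1;
    };

    int main() {
        // 3628800 is computed during compilation, not at runtime.
        std::cout << Factorial<10>::value << "\n";
    }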
We don't know how to do that. Like, at all.