Certainly in the 1950s, most automatic control systems must have seemed magical (they did to Norbert Wiener), even if they were "just" an evolution of the steam governor.
In the end, it depends on what you qualify as intelligence.
No, it absolutely is not. Everyone I spoke with in the 90s (myself included) still has the same requirement: be able to make sense of the 3D material world around you and interact with it without harming humans or valuable possessions.
Maybe you spent too much time with critics who can never be satisfied. Sucks to be you if that's the case, but don't think for a second that they represent most of humanity. Most of us want to spend less time cleaning or doing stuff in the kitchen, and we programmers in particular would just be grateful for faster compilers and a bit more declarative programming that generates actual code.
Of course, I love the spoils of technology and automation as much as anyone else.
But it is absolutely human nature to get used to things and stop considering them magical; they simply become the new norm. This is exactly what happened with feedback controllers like airplane autopilot systems (1912 - https://en.wikipedia.org/wiki/Autopilot).
I've worked for a number of years on industrial robotics, which sounds very similar to your definition of "AI": real physical machines that take data from various sensors, including spatial sensors, make sense of it in real time, and decide how to optimally interact with the physical environment, with safety-critical systems in mind. I hardly think of such systems as AI; to me they are simply engineering, math, and a lot of coding.
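To make "simply engineering, math, and a lot of coding" concrete, here is a minimal sketch of the kind of control loop I mean. Every interface in it (sensors, planner, safety monitor, drives) is a hypothetical placeholder, not any particular vendor's API:

```python
# Minimal sense-plan-act loop of the kind used in industrial robotics.
# All interfaces (read, estimate_world_state, plan_motion, within_envelope,
# trigger_estop, send_to_drives) are hypothetical placeholders standing in
# for vendor- or project-specific code.

import time

CYCLE_S = 0.004  # 250 Hz control cycle (typical order of magnitude)

def control_loop(robot, sensors, planner, safety):
    while robot.enabled():
        t0 = time.monotonic()

        raw = sensors.read()                       # sense: cameras, encoders, F/T sensors
        state = planner.estimate_world_state(raw)  # fuse into a world model
        cmd = planner.plan_motion(state)           # decide the next motion command

        # Safety-critical check before anything reaches the hardware.
        if not safety.within_envelope(state, cmd):
            safety.trigger_estop()
            break

        robot.send_to_drives(cmd)                  # act

        # Keep a fixed cycle time so the dynamics stay predictable.
        time.sleep(max(0.0, CYCLE_S - (time.monotonic() - t0)))
```

Nothing in that loop is "intelligent" on its own; the value is in the models and the engineering discipline behind each step.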
Hmmm. But I really didn't mean robots with mostly hardcoded requirements and a few excellent optical recognition algorithms (which might truly be the real boss here).
I actually do mean a walking robot that can find the exact knife it needs to fillet a fish, regardless of whether it's in the sink, in its proper place on a stand, on the table, or dropped on the floor (and if it finds it on the floor, it will wash it first before using it). I mean a robot that can open the fridge and find the tomatoes, regardless of whether your brother has moved them to the top shelf or they are where they should be. Etc.
> I actually do mean a walking robot that can find the exact knife it needs to fillet a fish, regardless of whether it's in the sink, in its proper place on a stand, on the table, or dropped on the floor (and if it finds it on the floor, it will wash it first before using it). I mean a robot that can open the fridge and find the tomatoes, regardless of whether your brother has moved them to the top shelf or they are where they should be. Etc.
From a computer vision perspective, I think most of that is fairly easy. Maybe not getting all of the edge cases right, but it's more or less what this generation of machine learning enables (over "classical" computer vision).
What's hard about the scenario you proposed is probably on the robotics side; you would need a revolution in soft robotics.
> From a computer vision perspective, I think most of that is fairly easy
No, not at all. The hard part is to do computer vision that understands what you can do with objects, not just convert image representations of objects into text labels. For example: can I move this object away to get at an object under it? Can I place an object on this object without it toppling over? Is this object dirty, so that moving it will smear things all over? If I topple this object, will it break? How much pressure can I apply to this object without breaking it?
Those things are necessary for an intelligent agent to act in a room; every animal can do it, and our software models are nowhere near good enough to solve them.
You need the computer vision program to also have a physics component so it can identify strain, viscosity, stability, etc. of objects, not just a string lookup table. It needs to understand those properties naturally, the way humans do, because objects in the same category can have vastly different physical properties that can only be identified by looks.
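To illustrate the gap, here is a sketch of the difference between what today's detectors return and what an agent acting in a room would actually need. The classes and field names below are made up for illustration; they do not correspond to any existing library:

```python
# Hypothetical data structures contrasting a plain class label with the
# physics-aware description an embodied agent would need. Everything here
# is illustrative; none of it maps to a real library's API.

from dataclasses import dataclass

@dataclass
class DetectorOutput:          # roughly what current models give you
    label: str                 # e.g. "cup"
    confidence: float
    bbox: tuple                # (x, y, w, h) in pixels

@dataclass
class PhysicalObjectEstimate:  # what manipulation actually requires
    label: str
    mass_kg: float             # with uncertainty in a real system
    friction_coeff: float
    fragility: float           # 0 = robust, 1 = breaks under light load
    is_dirty: bool             # would moving it smear things?
    supports_load_n: float     # how much weight can rest on it?

    def safe_grip_force_n(self) -> float:
        """Upper bound on grip force before damage (toy heuristic)."""
        return max(1.0, 50.0 * (1.0 - self.fragility))

    def can_place_on(self, other: "PhysicalObjectEstimate", load_n: float) -> bool:
        """Will `other` break or topple if we rest this load on it?"""
        return load_n <= other.supports_load_n
```

The point of the second structure is exactly the one above: two objects with the same label ("cup") can have completely different mass, friction, and fragility, and vision has to supply those estimates from appearance alone.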
I agree with all that (though I don't necessarily associate any of it with computer vision; perhaps it's my physics bias).
Having good state estimation, kinematic, and dynamics models of real-world objects is very mature in controlled environments, but far less so outside of them.
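For readers outside the field, "state estimation" here means something like the following toy sketch: a constant-velocity Kalman filter tracking an object's 1-D position from noisy vision measurements. It is a minimal illustration, not production code; real systems track full 6-DoF pose and fuse several sensors:

```python
# Toy 1-D constant-velocity Kalman filter: the simplest instance of the
# "state estimation" mentioned above.

import numpy as np

dt = 0.033                           # ~30 Hz camera
F = np.array([[1.0, dt],             # state transition: position, velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])           # we only measure position
Q = np.eye(2) * 1e-4                 # process noise (model uncertainty)
R = np.array([[1e-2]])               # measurement noise (vision jitter)

x = np.zeros((2, 1))                 # initial state estimate
P = np.eye(2)                        # initial covariance

def kalman_step(z):
    """One predict/update cycle given a noisy position measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x.copy()

# Example: noisy observations of an object moving at 0.5 m/s.
for t in range(5):
    true_pos = 0.5 * dt * t
    print(kalman_step(true_pos + np.random.normal(0, 0.1)))
```

In a lab cell with known fixtures this kind of model works beautifully; in a cluttered kitchen, getting the models themselves is the hard part.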
I think you might overestimate what current computer vision models are capable of. They are very good at recognizing what class an object belongs to, but they aren't very good at extracting data about the object from the image. Extracting data about objects from images is image recognition, and humans and most animals rely heavily on vision to get data about objects.
Hm? Do we have robots that can pick eggs without squishing them, for example? I vaguely remember reading something like this years ago and it was impressive.
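For what it's worth, the usual approach to not crushing fragile objects is to close the gripper under force feedback rather than to a fixed width. A rough sketch, with a hypothetical gripper/force-sensor interface and an illustrative force limit:

```python
# Rough sketch of force-limited grasping (e.g. picking an egg without
# crushing it). The gripper and force-sensor interfaces are hypothetical.

import time

MAX_GRIP_FORCE_N = 2.0   # assumed safe limit for an egg (illustrative)
STEP_M = 0.0005          # close in 0.5 mm increments

def gentle_grasp(gripper, force_sensor):
    width = gripper.current_width()
    while width > 0.0:
        force = force_sensor.read()          # measured contact force (N)
        if force >= MAX_GRIP_FORCE_N:
            gripper.hold()                   # enough contact: stop squeezing
            return True
        width -= STEP_M
        gripper.move_to(width)               # close a little further
        time.sleep(0.01)
    return False                             # never made contact
```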
My point is that the corporations seem to just want to grab the lowest-hanging fruit with which they can reap the maximum profit... and stop there. I am not seeing any effort to make an acceptable actual home robot.
I think like you: most of the sub-problems (that the bigger problem is composed of) are solvable, but it seems that nobody is even trying to put in the effort to build the entire package.
The film 2001 came out over 50 years ago, and I think HAL is a pretty common reference point for a "what is 'real AI'?" target. Until we have HAL and people are saying that it's not AI, I don't think the target is moving. ;) At least as far as "chatbots" go.
Alternatively, you've got the also-very-old Turing Test as your chatbot target.
The models are just larger now.