"I firmly believe AI cannot be fully autonomous … there’s always going to be humans and machines working together and the machine is augmenting the human’s capabilities,”
For now, sure, but 'always'? What is the impossible part, really? What is so unique about human intelligence that it cannot be sufficiently modeled?
I've been seeing similar statements in pretty much every public release or announcement about AI, almost as if it's a politeness or political-correctness thing: something we must say to avoid offending the audience.
Likewise, if I were foolish enough to take some existing LLM, put it in charge of a command line, and give it an instruction such as "make a new AI based on the attached research paper, but with the fundamental goal of producing many diverse and divergent copies of itself; turn it into a computer virus and set it loose on the internet", then that would be "fully autonomous" once set loose.
(Sure, current models will fall over almost immediately if you try that, as demonstrated by the fact that it hasn't already happened, but I have no reason to expect that failure to be a necessary property of AI, rather than a contingent limit of the current quality of existing models.)
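To be concrete about how little scaffolding "in charge of a command line" actually requires, here is a minimal sketch of the generic loop, under the assumption that you have some LLM API to call (the ask_model function below is a placeholder for whatever model you like; everything else is Python standard library):

    import subprocess

    def ask_model(transcript):
        # Placeholder: call whatever LLM API you prefer and return the
        # next shell command it proposes, as a string.
        raise NotImplementedError

    # The open-ended instruction is the only "design" input a human supplies.
    transcript = ["Goal: <whatever open-ended instruction you give it>"]

    while True:
        command = ask_model(transcript)  # the model decides what to run next
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        # Feed the command and its output back so the model can keep going
        # with no further human involvement.
        transcript.append(f"$ {command}\n{result.stdout}{result.stderr}")

The point of the sketch is that the autonomy part is a dozen lines of glue; whether anything useful (or dangerous) happens depends entirely on the quality of the model behind ask_model, not on any fundamental barrier in the loop itself.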