>For example, I work in aging and, to a lesser extent, neuroscience. I have had many private conversations with researchers who will candidly say that the long-term goal of their work is lifespan extension or cognitive enhancement. Yet, for the purposes of grant and paper-writing, you'd never write such a thing; it would sound too speculative.
I think it was Scientific American or another such magazine that ran a story, maybe a month and a half ago, in which they interviewed a whole bunch of real theoretical and computational neuroscientists about mind uploading. A not-that-small portion of these admitted that when they are in private, have had a drink or two, and are ready to admit things they would never say in a grant application, they are definitely trying to achieve whole-brain emulation.
Hell, it's basically the implicit point of the Human Brain Project, or whatever those initiatives are called by the US and EU governments this past year.
> You are right that you won't see many researchers publicly talking about human-level AI, or framing their research in terms of the search for general AI. But that doesn't mean the field is "dead".
Indeed. What happened instead is that actual humanlike cognition has been steadily broken down into subproblems, and each subproblem is being formalized and worked on separately. There are a few papers on optimal utility-maximizing agents for general environments, but those tend to require "until the stars burn out" levels of FLOPs to work anyway.
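(To put "until the stars burn out" in perspective, here's a purely illustrative toy, not any particular paper's algorithm: brute-force expectimax planning against an exhaustive class of toy environment models, where the model class alone is already doubly exponential in the planning horizon. All names and sizes below are made up for the sketch.)

```python
from itertools import product

ACTIONS = (0, 1)   # two possible actions
HORIZON = 3        # tiny planning horizon; real agents would need far more

def expectimax(history, depth, models, weights):
    """Best achievable weighted future reward, planning `depth` steps ahead."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        # Expected immediate reward of `a`, averaged over every candidate model...
        value = sum(w * m[history + (a,)] for m, w in zip(models, weights))
        # ...plus the best continuation after taking it.
        value += expectimax(history + (a,), depth - 1, models, weights)
        best = max(best, value)
    return best

# "All possible environments", here shrunk to every deterministic 0/1 reward
# table over action histories up to length HORIZON. Even this toy class has
# 2 ** (2**(HORIZON+1) - 2) members, i.e. doubly exponential in the horizon.
histories = [h for n in range(1, HORIZON + 1) for h in product(ACTIONS, repeat=n)]
models = [dict(zip(histories, rewards))
          for rewards in product((0.0, 1.0), repeat=len(histories))]
weights = [1.0 / len(models)] * len(models)  # uniform prior in this toy version

print(f"{len(models)} candidate environment models at horizon {HORIZON}")
print("best expected return:", expectimax((), HORIZON, models, weights))
```

Bump HORIZON to 4 and the model count alone jumps to around a billion; the actual papers mix over all computable environments, which is where the stars-burning-out compute estimates come from.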
So actually, AI is going to work, but by the time it does, it won't be called AI. In fact, the researchers who'll create it would look at you funny for calling it AI. They have, and will have, plenty of academic jargon for specifying exactly what their algorithms do and how they do it, so much so that only when their algorithm kills all humans will anyone actually admit it was artificial intelligence ;-).
> I think it was Scientific American or another such magazine that ran a story, [...] real theoretical and computational neuroscientists [...] admit things they would never say in a grant application, they are definitely trying to achieve whole-brain emulation.
Another possibility is that they're actually working on the things their grant applications say, but when pushed and offered drinks by journalists looking for a story, they'll extrapolate their research far enough to reach something cool-sounding.
For example, if I'm researching image-correlation techniques, which have applications in machine vision, which has applications in obstacle detection and tracking, which has applications in self-driving cars, I could tell a journalist that I'm working on image-processing techniques with potential applications in self-driving cars.
Is that me admitting things I would never say in a grant application, that I'm trying to achieve self-driving cars? Or am I working on exactly what my grant application says, but I've simplified it and added context because I know Scientific American isn't going to be publishing articles about efficient convolution and fast Fourier transforms any time soon?
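(For the curious, this is roughly the level the grant-application work actually lives at. A minimal, hand-wavy sketch of template matching via FFT-based cross-correlation; the function name and toy sizes are mine, purely for illustration.)

```python
import numpy as np

def fft_cross_correlate(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Circular cross-correlation of `template` against `image` via the FFT.

    By the convolution theorem, correlation in the spatial domain becomes an
    elementwise product of one spectrum with the conjugate of the other.
    """
    # Zero-pad the template to the image size so the spectra line up.
    padded = np.zeros_like(image, dtype=float)
    padded[:template.shape[0], :template.shape[1]] = template
    spectrum = np.fft.fft2(image) * np.conj(np.fft.fft2(padded))
    return np.real(np.fft.ifft2(spectrum))

# Toy usage: find where a small patch best matches inside a larger image.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
patch = image[40:56, 70:86].copy()           # a known 16x16 patch
scores = fft_cross_correlate(image - image.mean(), patch - patch.mean())
row, col = np.unravel_index(np.argmax(scores), scores.shape)
print("best match at:", (row, col))          # expect (40, 70)
```

The convolution theorem turns an O(N*M) sliding-window correlation into O(N log N) FFTs: useful, publishable, and still a very long way from a robot chauffeur.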
Frankly, I think the real answer is "both", but that might just be my optimism about the... let's call it soulfulness of most scientists. To me it seems as though one would be nuts to stay in academic science just for the sake of the boring, tiny, insignificant bullcrap one writes in grant applications to sound impressive; it seems as if a scientist must have some actual dream burning in him that keeps him going.
On the other hand, my real-world observations say that for many scientists, the burning dream heating their blood and getting them up in the morning to deal with all the bullshit is... careerist ego-stroking.
> they are definitely trying to achieve whole-brain emulation
Of course they are trying.
They were also trying in 1956: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."
The history of AI research is full of totally unfounded optimism that has failed every time. (I mean the kind of research that promises significant advances towards human level AI. Of course AI research has produced methods that have useful applications.)
I find it strange that you think AI and WBE are the same thing. Putting aside your claims about machine cognition (which can then be separated into various cognitive faculties), you really think that emulating human cognition on a separate substrate is outright impossible? Then what's the seemingly "magical" property of our meat substrate that forbids, outright or via excessive expense, ever using any other substrate?
> you really think that emulating human cognition on a separate substrate is outright impossible?
Absolutely not.
But I think it's so far in the future that it is totally pointless for us, in 2014, to worry about it.
I view the singularity crowd as a bunch of medieval alchemists who, having seen gunpowder, start worrying about mutually assured destruction of the whole world once someone scales their firecrackers up to a size capable of destroying whole countries.
Yes, by the 1950s we did reach nuclear weapons with end-the-world capability.
But it would have been quite pointless for (al)chemists, or philosophers, in the 1400s, or even in the 1700s, to start worrying about that threat.
Ah, so you think scientific progress proceeds not merely at a constant rate, but at a constantly slowing rate, thus causing the rate of new discoveries to be roughly constant over the centuries, despite our constantly adding more scientists, and publishing more papers, based on increasingly solid foundations.
> so you think scientific progress proceeds not merely at a constant rate, but at a constantly slowing rate
No, I don't think so.
I just think the road ahead looks very long, even given our current rate of acceleration. Plus, this field has a very strong track record of blatant overestimation.
> I find it strange that you think AI and WBE are the same thing.
I apologize for being ambiguous. I don't think they are the same. But I see the same totally unfounded optimism in both the people who expect AGI to happen relatively soon and the people who expect WBE to happen relatively soon.
That's basically where my theory about all this comes from. We keep finding that more and more tasks that "clearly" require a whole human-level mind... can actually be done quite well with Narrow AI/ML techniques and crap-tons of training data. We consistently overestimate the Kolmogorov complexity of human cognitive tasks in order to flatter ourselves, thinking that surely no computer can do XYZ unless we're within six months of a capital-S Singularity.
Gaaah, I'm trying to remember the link to some short story that Someone (we all know who) wrote about a man fretting over his JRPG party members seeming to be conscious. The author's afterword said that he expected seemingly-conscious video game characters to show up "six months before the Singularity". I currently expect seemingly conscious video-game characters - which are actually just very stylish fakes with really good NLP - many decades before anyone manages to produce a self-improving agent.
You could say the same thing about "nanotechnology" - which in practice is materials scientists getting money for chemistry when they know damn well that using the term is implicitly promising magical tiny robots to the gullible.