"Gödel, Escher, Bach" is one of my favorite books, and I have a tremendous amount of respect and admiration for Hofstadter... so I'm really disappointed and saddened to read that he (quoting from the article) "hasn't been to an artificial-intelligence conference in 30 years. 'There's no communication between me and these people,' he says of his AI peers. 'None. Zero. I don't want to talk to colleagues that I find very, very intransigent and hard to convince of anything. You know, I call them colleagues, but they’re almost not colleagues -- we can't speak to each other.'"
Hofstadter should be COLLABORATING with all those other researchers who are working with statistical methods, emulating biology, and/or pursuing other approaches! He should be looking at approaches like Geoff Hinton's deep belief networks and brain-inspired systems like Jeff Hawkins's NuPIC, and comparing and contrasting them with his own theories and findings! The converse is true too: all those other researchers should be finding ways to collaborate with Hofstadter. It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
All these different approaches to research are -- or at least should be -- complementary.
Hofstadter and the rest of the field, Jeff Hawkins for example, are collaborating, just indirectly. The Pentti Kanerva described in the article as an old friend of Hofstadter and his first wife, Carol, originated the "sparse distributed memory" idea that NuPIC appears to be based on, and, I believe, either co-founded Redwood Neuroscience / Numenta with Hawkins or joined it soon afterward. (I'm sorry, I'm fuzzy on the details.)
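For anyone curious what sparse distributed memory actually looks like, here's a minimal sketch of Kanerva's idea — not Numenta's actual code; the sizes, radius, and function names here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256        # bits per address/word (illustrative; Kanerva used ~1000)
M = 1000       # number of fixed random "hard locations"
RADIUS = 117   # Hamming radius within which a location activates

hard_addresses = rng.integers(0, 2, size=(M, N))  # fixed random addresses
counters = np.zeros((M, N), dtype=int)            # one up/down counter per bit, per location

def _active(address):
    # Locations whose address lies within RADIUS Hamming distance of the cue.
    return np.count_nonzero(hard_addresses != address, axis=1) <= RADIUS

def write(address, word):
    # Increment counters for 1-bits, decrement for 0-bits, at every active location.
    counters[_active(address)] += np.where(word == 1, 1, -1)

def read(address):
    # Sum counters over active locations and threshold at zero.
    return (counters[_active(address)].sum(axis=0) > 0).astype(int)

# Autoassociative store: a pattern written at its own address
# can be recalled even from a noisy cue.
pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1          # corrupt 20 of 256 bits
recalled = read(cue)
```

The point is that memory is smeared across many imprecise locations: a noisy cue still activates mostly the same locations as the original, so the majority vote over their counters reconstructs the stored pattern.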
Researchers such as Hawkins are well aware of Hofstadter's ideas, and Hofstadter's grad students take his ideas out into the world of AI research with no real need for Hofstadter himself to personally attend conferences. Every one of them would love to use any idea that has been overlooked by the rest of the field to make a name for himself/herself with some career-making breakthrough that can do what humans can do but other AI systems can't.
Hofstadter himself spoke here at Stanford a few years ago to a standing-room-only audience. I don't dispute the notion that mindsets and political agendas can delay the acceptance of (or work on, or resources for) a good idea for years, but anything of use in his work will eventually be put to use. He can keep doing what he's doing, brainstorming with his grad students, and anything useful they find will be disseminated.
I agree. While reading the article I can't help but, sort of, empathize with modern AI programs. Me and Watson are very similar: Watson can win Jeopardy but has no understanding why; I can recognize a handwritten 'a' and I too have no understanding why.
When I look at my daughter developing from newborn to infant to child -- hasn't that been one long, constant, intensive training? As she recognizes stuff, I give feedback. After a while she starts correlating stuff, and signals for me to give feedback. By the time she's an adult, she will have full control of her intelligence, but also no understanding of it.
Maybe what we are missing is just the algorithm for information storage and retrieval. If we can master Genetic Algorithms, why not Cellular Databases? Or Chemical Procedures?
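For what it's worth, "mastering" a genetic algorithm is the easy part -- a toy one fits on a page. Here's an illustrative sketch evolving a bitstring toward all-ones (the classic OneMax problem; every parameter here is arbitrary):

```python
import random

random.seed(0)

TARGET_LEN = 20      # bits per genome
POP_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.05
ELITE = 10           # survivors kept unchanged each generation

def fitness(bits):
    # OneMax: just count the 1-bits.
    return sum(bits)

def mutate(bits):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

def crossover(a, b):
    # Single-point crossover.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]                      # truncation selection + elitism
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP_SIZE - ELITE)]

best = max(pop, key=fitness)
```

The hard part isn't the algorithm -- it's finding a representation and fitness function under which "intelligence" is what evolves, which is exactly the storage-and-retrieval question above.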
> Me and Watson are very similar: Watson can win Jeopardy but has no understanding why; I can recognize a handwritten 'a' and I too have no understanding why.
So, you and Watson are "very similar" just because both systems don't have a perfect understanding of themselves? You don't know that. Your premises look true, but your conclusion doesn't follow from them (or at all). Actually, you probably know that no matter how you spin it, you and Watson are very different.
So don't say you aren't, it's misleading. Not only to others, but to yourself as well. Try to find a meaningful similarity instead.
I find myself doing poor pattern recognition at times (e.g., always choosing the wrong key for a particular door), and realizing just afterward that a machine learning library could well make the same mistake I just did. This isn't a new insight, but it still feels like an epiphany when you realize it as it happens.
I am sympathetic to your view, but may I offer you a different viewpoint, at my own expense?
Truly original ideas are fragile and delicate. They require careful nurturing and devoted protection if they are to eventually flower.
It may be that Hofstadter sees far more deeply than I, and approaches like NuPIC and deep belief networks that seem different to me and therefore in need of synthesis are to him transparently isomorphic and dead ends. The effort it would take him to make me understand why this is so would cost him precious time and progress on his true path.
I think you are making a mistake in assuming that you know how another person, who presumably is much more qualified in this specific domain, should spend their time to be most productive in that domain. That's kind of like telling Elon Musk that he should definitely use GTD -- and if he doesn't, he's doing something wrong -- because he'd be so much more productive.
It is an interesting question why he doesn't want to collaborate with other people, but he is far from alone in that m.o. and he kind of answered it.
> It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
I'm still learning AI (my training and dollar-paying job are in chemistry; I am really drawn to Hofstadter's "thinkodynamics" analogy). I think there's something to what you say. I'm playing around with the idea that a perceptron can be used to produce low-level sensory input to the analogy-crafting machinery that Hofstadter outlines in "Fluid Concepts and Creative Analogies" -- which, if you haven't read it, you should.
During my free time (which has been lacking of late, hence fewer commits), I'm playing around with something of a manifesto towards this idea of merging Hofstadter's concepts with contemporary AI: https://github.com/ityonemo/positronicbrain
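To make the perceptron-as-sensory-front-end idea concrete, here's a minimal sketch -- the 3x3 "retina" patterns and labels are invented for illustration, and this isn't code from the repo above:

```python
import numpy as np

# Two toy 3x3 "retina" patterns: a vertical bar and a horizontal bar.
V = np.array([[0, 1, 0],
              [0, 1, 0],
              [0, 1, 0]]).ravel()
H = np.array([[0, 0, 0],
              [1, 1, 1],
              [0, 0, 0]]).ravel()

w = np.zeros(9)   # one weight per retina cell
b = 0.0           # bias

def predict(x):
    # 1 = "vertical bar", 0 = "horizontal bar"
    return 1 if w @ x + b > 0 else 0

# Classic perceptron learning rule: nudge weights toward misclassified inputs.
for _ in range(10):
    for x, label in [(V, 1), (H, 0)]:
        err = label - predict(x)
        w += err * x
        b += err
```

The label the trained unit emits is exactly the kind of low-level "symbol" that higher-level analogy-making machinery could then consume, without the perceptron itself understanding anything.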
As far as I understand Hofstadter's approach, it has involved something of a call for a synthesis for a long time.
But your argument is sort of doubly ridiculous: it confuses whatever personal loggerheads Hofstadter and other researchers are at with what approach they are pursuing, and then confuses that with what approach would actually work.