"Gödel, Escher, Bach" is one of my favorite books, and I have a tremendous amount of respect and admiration for Hofstadter... so I'm really disappointed and saddened to read that he (quoting from the article) "hasn't been to an artificial-intelligence conference in 30 years. 'There's no communication between me and these people,' he says of his AI peers. 'None. Zero. I don't want to talk to colleagues that I find very, very intransigent and hard to convince of anything. You know, I call them colleagues, but they’re almost not colleagues -- we can't speak to each other.'"
Hofstadter should be COLLABORATING with all those other researchers who are working with statistical methods, emulating biology, and/or pursuing other approaches! He should be looking at approaches like Geoff Hinton's deep belief networks and brain-inspired systems like Jeff Hawkins's NuPIC, and comparing and contrasting them with his own theories and findings! The converse is true too: all those other researchers should be finding ways to collaborate with Hofstadter. It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
All these different approaches to research are -- or at least should be -- complementary.
Hofstadter and the rest of the field, Jeff Hawkins for example, are collaborating, just indirectly. The Pentti Kanerva described in the article as an old friend of Hofstadter and of his first wife, Carol, was the originator of the "sparse distributed memory" idea that NuPIC appears to be based on, and, I believe, either co-founded Redwood Neuroscience / Numenta with Hawkins or joined it soon after. (I'm sorry, I'm fuzzy on the details.)
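For the curious, the core of Kanerva's idea fits in a few lines. Here is my own toy sketch (parameters picked arbitrarily, nothing to do with NuPIC's actual code): store bit-vectors by updating counters at every randomly placed "hard location" within a Hamming radius of the address, and read back by majority vote.

    import numpy as np

    # Toy sparse distributed memory (after Kanerva). Hard locations are
    # random binary addresses; a write updates counters at every location
    # within Hamming radius R of the address; a read majority-votes them.
    rng = np.random.default_rng(0)
    N, M, R = 256, 1000, 111   # word length, number of locations, radius

    locations = rng.integers(0, 2, size=(M, N))
    counters = np.zeros((M, N), dtype=int)

    def active(address):
        return np.sum(locations != address, axis=1) <= R

    def write(address, word):
        counters[active(address)] += np.where(word == 1, 1, -1)

    def read(address):
        return (counters[active(address)].sum(axis=0) > 0).astype(int)

    word = rng.integers(0, 2, size=N)
    write(word, word)                    # store autoassociatively
    noisy = word.copy(); noisy[:20] ^= 1
    print(np.mean(read(noisy) == word))  # near 1.0: recall despite noise

The robustness to noisy addresses is, as far as I understand it, the property that Hawkins-style systems lean on.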
Researchers such as Hawkins are well aware of Hofstadter's ideas, and Hofstadter's grad students take his ideas out into the world of AI research with no real need for Hofstadter himself to personally attend conferences. Every one of them would love to use any idea that has been overlooked by the rest of the field to make a name for himself/herself with some career-making breakthrough that can do what humans can do but other AI systems can't.
Hofstadter himself spoke here at Stanford a few years ago to a standing-room-only audience. I don't dispute the notion that mindsets and political agendas can delay the acceptance of (or work on, or resources for) a good idea for years, but anything of use in his work will eventually be put to use. He can keep doing what he's doing, and brainstorming with his grad students, and anything useful they find will be disseminated.
I agree. Reading the article, I couldn't help but empathize, in a way, with modern AI programs. Me and Watson are very similar, Watson can win Jeopardy but has no understanding why, I can recognize a handwritten 'a' and I too have no understanding why.
When I look at my daughter developing from baby to infant to child, hasn't that been constant, intensive training? As she recognizes things, I give feedback. After a while she starts correlating things, and signals for me to give feedback. By the time she's an adult, she will have full control of her intelligence, but also no understanding.
Maybe what we are missing is just the algorithm for information storage and retrieval. If we can master Genetic Algorithms, why not Cellular Databases? Or Chemical Procedures?
> Me and Watson are very similar, Watson can win Jeopardy but has no understanding why, I can recognize a handwritten 'a' and I too have no understanding why.
So, you and Watson are "very similar" just because both systems don't have a perfect understanding of themselves? You don't know that. Your premises look true, but your conclusion doesn't follow from them (or at all). Actually, you probably know that no matter how you spin it, you and Watson are very different.
So don't say you aren't; it's misleading. Not only to others, but to yourself as well. Try to find a meaningful similarity instead.
I find myself doing poor pattern recognition at times (e.g., always choosing the wrong key for a particular door) and realizing just afterward that a machine learning library could well have made the mistake I just did. This isn't a new insight, but it still feels like an epiphany when you realize it as it happens.
I am sympathetic to your view, but may I offer you a different viewpoint, at my own expense?
Truly original ideas are fragile and delicate. They require careful nurturing and devoted protection if they are to eventually flower.
It may be that Hofstadter sees far more deeply than I, and approaches like NuPIC and deep belief networks that seem different to me and therefore in need of synthesis are to him transparently isomorphic and dead ends. The effort it would take him to make me understand why this is so would cost him precious time and progress on his true path.
I think you are making a mistake in assuming that you know how another person, who presumably is much more qualified in this specific domain, should spend their time to be the most productive in that domain. That's kind of like telling Elon Musk that he should definitely use GTD -- and if he doesn't, he's doing something wrong -- because he'd be so much more productive.
It is an interesting question why he doesn't want to collaborate with other people, but he is far from alone in that m.o. and he kind of answered it.
> It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
I'm still learning AI (my training and dollar-paying job is in chemistry; I am really drawn to Hofstadter's "thinkodynamics" analogy). I think there's something to what you say. I'm playing around with the idea that a perceptron can be used to produce low-level sensory input for the analogy-crafting machinery that Hofstadter outlines in "Creative Analogies" - which you should read if you haven't.
During my free time (which has been lacking of late, hence fewer commits), I'm playing around with something of a manifesto towards this idea of merging Hofstadter's concepts with contemporary AI: https://github.com/ityonemo/positronicbrain
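In case it helps anyone see what I mean by a perceptron front end, here's a minimal sketch on made-up toy data (my own illustration, not code from the repo above): the perceptron learns a crude low-level discrimination whose output could then be handed to symbolic analogy machinery.

    import numpy as np

    # Minimal perceptron as a low-level "sensory" front end: learns a
    # linearly separable toy concept with the classic update rule.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy ground truth

    w, b = np.zeros(2), 0.0
    for _ in range(10):                        # a few passes suffice here
        for xi, yi in zip(X, y):
            err = yi - int(w @ xi + b > 0)
            w += err * xi
            b += err

    acc = np.mean([(w @ xi + b > 0) == yi for xi, yi in zip(X, y)])
    print(acc)   # ~1.0 on this separable toy problem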
As far as I understand Hofstadter's approach, it has involved something of a call for a synthesis for a long time.
But your argument is sort of doubly ridiculous: it confuses whatever personal loggerheads Hofstadter and other researchers are at with which approach they are pursuing, and then confuses that with which approach would actually work.
So Good Old Fashioned AI[1] is the new hot underdog AI thing now? I seriously don't understand the praise of Hostadter in the article and in the comments here, and the criticism of mainstream AI research, especially since it is very hard to find any precise details of what he does and what the outcomes are.
There have been attempts to understand intelligence with intelligence (logic, symbols, reasoning etc.) for 30 years, to not much effect; now AI and machine learning are advancing quite steadily, so why the snark? All evidence suggests that the way the brain itself learns things is statistical and probabilistic in nature. There are also new disciplines now, like Probabilistic Graphical Models, which are free of some of the traditional downsides of purely statistical methods, in that they can be interpreted and human-understandable knowledge can be extracted from them. This is something that really seems promising, and to some extent is a union of the old and new approaches, despite the claims of a big division, but it is hard to see much promise in purely symbolic methods invented merely by some guy somewhere thinking very hard.
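To make the interpretability point concrete, here is a toy Bayesian network in plain Python (my own example, not from any particular paper): every parameter is a statement a human can read and argue with, which is exactly what you can't say about a weight matrix.

    # Toy Bayesian network: rain -> wet grass <- sprinkler.
    P_rain, P_sprinkler = 0.2, 0.1
    P_wet = {  # P(wet | rain, sprinkler): each entry is human-readable
        (True, True): 0.99, (True, False): 0.90,
        (False, True): 0.80, (False, False): 0.01,
    }

    def joint(rain, wet):
        # Sum out the hidden sprinkler variable by enumeration.
        p_r = P_rain if rain else 1 - P_rain
        return sum(
            p_r * (P_sprinkler if s else 1 - P_sprinkler)
                * (P_wet[(rain, s)] if wet else 1 - P_wet[(rain, s)])
            for s in (True, False))

    # P(rain | grass is wet) by Bayes' rule:
    print(joint(True, True) / (joint(True, True) + joint(False, True)))  # ~0.72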
I for one am very happy that people seek inspiration in the way the human brain works; that's what science is. If you just come up with things without consulting the real world it's not science, it's philosophy, the one discipline that has yet to produce a single result.
There are more divisions than the article can get into, of course. I think the main one is between AI to solve engineering problems and AI to better understand the mind. The article is about doing the latter when everyone else's definition of AI is the former.
What you're talking about, I think, is various approaches within the latter group of researchers. I defended Hofstadter in another reply because I find his goals worthwhile in and of themselves - in a "basic research" sense. Discarding anything that's not an optimal solution - the attitude taken by a couple of responses here - ignores a whole lot of interesting science, and as a scientist it bothers me quite a bit.
That said, once we're talking about the goal of understanding the human mind, GOFAI is, to be sure, incredibly old-fashioned, and you won't find me defending Hofstadter any further as far as his approach goes. His goal is a worthwhile one, but you're right that his approach shouldn't really be considered an 'underdog'.
Personally, I think the best hope for understanding what intelligence is, in a general sense, comes from non-equilibrium thermodynamics, as in the sort of research going on here:
http://www.tandfonline.com/toc/heco20/24/1#.UmlRN_mfihM
but that's a can of worms for another post.
As an aside, regarding your last comment, I completely disagree with your view of philosophy. It may not have produced results, but it has guided science. But I agree with the thrust of your sentiment: an AI researcher with the goal of understanding the human mind should be spending as much time studying humans (as in, doing Psych or Cognitive Science experiments) as programming AIs.
> it's philosophy, the one discipline that has yet to produce a single result.
I agree that the signal-to-noise ratio in philosophy is very low. (I also strongly agree with the rest of your comment.) But let's be fair: it was philosophy that produced (1) formal logic, (2) Occam's razor, (3) the scientific method, and (4) the Church-Turing thesis.
1. I suppose the earliest system of formal logic was the syllogistic, but that's a long way from what we call formal logic now and it's not at all clear that it ever did anyone any good. Formal logic of the modern kind has a history going something like: Leibniz (mathematician), Boole (mathematician), Frege (both mathematician and philosopher), Peano (mathematician), Russell (both mathematician and philosopher), etc. (by Russell's time most of the architecture of modern formal logic is in place) and it looks to me as if -- if we really must engage in these boundary disputes -- it's more down to mathematicians than to philosophers.
2. Yeah, William of Ockham was a philosopher. Score one for philosophy.
3. It looks to me as if almost everything important in the history of the scientific method is down to scientists rather than philosophers -- though, since the word "scientist" wasn't coined until the 19th century and disciplinary boundaries used to be more porous than they are now, they were often called "natural philosophers" and often did a certain amount of what-we-now-call-philosophy as well as what-we-now-call-science.
Francis Bacon is the usual chief suspect for introducing something close to the modern scientific method. He was an experimental scientist as well as a philosopher (though, it seems, not a particularly good one). Galileo was a scientist. Newton was a scientist. I suppose you might want to go back to Aristotle (though I wouldn't) -- but, actually, Aristotle was trying to do science as well as philosophy.
4. Since Church and Turing were both trained and employed as mathematicians, it seems rather strange to credit the Church-Turing thesis to philosophy. (So far as I can tell, all the other important people in its history -- Goedel, Kleene, Post, Rosser, etc. -- were mathematicians too.)
You might consider these topics philosophical by definition. If so, the conclusion would seem to be that even philosophy is often best done by scientists and mathematicians, which doesn't speak well for philosophy as a discipline.
Leibniz was top-notch in many, many things: philosophy, math, linguistics, law, diplomacy, engineering, psychology, sociology... lots of things. He was a particularly bad way to start your list, which is supposed to support an argument that it was mathematicians who developed formal logic, since he shatters entirely the distinction you are trying to draw upon.
> since he shatters entirely the distinction you are trying to draw upon.
I think you have misunderstood my argument, which was not that formal logic was developed only by mathematicians with no philosophers involved (that would be nuts) but that saying it was done by philosophers as opposed to anyone else is quite wrong. And that argument would go through just the same even if we classified Leibniz exclusively as a philosopher (which would be just as wrong as classifying him exclusively as a mathematician; my apologies, by the way, for being sloppy about that).
I have to admit I can't imagine how what I wrote turned (on its way into your mind) into an attempt to draw a sharp dichotomous distinction between mathematicians and philosophers, but evidently it did and I'm sorry that I evidently wasn't clear enough. Yes, people can be both mathematicians and philosophers, or both scientists and philosophers, or all three; yes, the boundaries are fuzzy sometimes; it was no part of my intention to imply otherwise.
Technically speaking, philosophy will never be able to produce any result: the moment it can, the philosophical problem turns into a scientific problem.
But answering isn't actually philosophy's job. It's mostly for producing questions, and science will in turn answer those questions (or make them irrelevant).
Artificial Intelligence is like that too - it can't produce "intelligence" because as soon as scientists figure out a way to do stuff that looks like intelligent behavior - say, speech recognition or winning at Jeopardy - that immediately stops being considered an example of AI.
Chess was considered "an AI problem" - and quite a hard one - back when nobody knew how to write a program that could play a good game. Now chess is beneath consideration because (a) programs can play it, (b) we actually understand how those programs work.
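And "we understand how those programs work" really is the whole story: at bottom a classical chess engine is depth-limited minimax, which fits in a dozen lines. A generic sketch (real engines add alpha-beta pruning, move ordering, and a tuned evaluation function, but nothing conceptually deeper):

    # Depth-limited minimax, the skeleton of every classical game engine.
    def minimax(state, depth, maximizing, moves, apply, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        values = [minimax(apply(state, m), depth - 1, not maximizing,
                          moves, apply, evaluate) for m in legal]
        return max(values) if maximizing else min(values)

    # Tiny demo on a toy "add 1 or 2 to a counter" game:
    print(minimax(0, 3, True,
                  moves=lambda s: [1, 2] if s < 5 else [],
                  apply=lambda s, m: s + m,
                  evaluate=lambda s: s))   # -> 5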
> I seriously don't understand the praise of Hostadter in the article and in the comments here, and the criticism of mainstream AI research, especially since it is very hard to find any precise details of what he does and what the outcomes are.
Uh, yeah -- you don't understand "Hostadter"; you can't be bothered to spell his name right, but you're willing to attack him.
Just FYI, Hofstadter was never in the mainstream of GOFAI but always advocated something of a different path; a path not taken, for good or ill, but not the same path as you caricature. You might take notice that your link makes no mention of Hofstadter, for example. Hofstadter's work on analogies, in particular, was by no means a purely symbolic approach.
And the rest of your post is thus... ignorant and irrelevant [Edit: in the sense that it's not about Hofstadter, just someone's critique of someone else's AI. Grr]. Kind of an embarrassment having it here.
I will listen with great interest if you then care to explain to me what great discovery Hofstadter has made in the field of AI.
I believe he is rooted in the GOFAI tradition, because he tries to understand intelligence by introspection, and I happen to believe this is impossible, since what we are aware of consciously is just a surface-level result of processes in the brain we are completely unconscious of. Whatever descriptions of his work I can find sound pretty much like GOFAI too - you just come up with a program that does something supposedly intelligent:
I am also not attacking him, I am just sceptical, as I would be if someone who is not working in mainstream physics and is ignored by it claimed a breakthrough in understanding matter or whatever.
I haven't checked any of his recent projects, but from his books he's firmly rooted in the symbolic tradition. He adds some interesting wrinkles, sure, but wouldn't he still be diametrically opposed to someone like Rodney Brooks?
http://www.sciencedirect.com/science/article/pii/00043702919...
(Intelligence Without Representation)
You're right, and flipping through my copy of I Am a Strange Loop for the first time in a while I find some things that are reminiscent of non-computational theories. I guess I was/am caught up on the computationalism vs. non-computationalism debate and am conflating GOFAI with everyone working in a computationalist framework. Hofstadter pushes the boundaries, but he's still a computationalist.
For someone who takes the idea of a system that's impossible to model further, IMO, check out the biologist Robert Rosen's Life Itself and Essays on Life Itself. He pushes more in the direction of complex systems theory than self-reference (maybe a physics version of the same idea?) but his writings are brilliant. Less fun than Hofstadter's, but more rigorous and with a necessary appreciation of the biological and physical sciences. Anyways, I'm rambling.
It's not just strange loops. Read "Creative Analogies" -- the 'copycat' program in particular. Those algorithms were solving problems that Watson would never be able to solve: problems that require genuinely creative thinking, especially in the face of open-ended, indefinite answer domains.
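For anyone who hasn't seen the letter-string domain, here's a crude toy of my own (emphatically not Copycat itself, which is a stochastic, parallel architecture with concept "slippage"): infer a rule from abc -> abd and apply it to a new string.

    # Crude toy in Copycat's letter-string analogy domain. Infers one
    # hard-coded rule ("replace the last letter with its successor")
    # and applies it; Copycat itself discovers such rules fluidly.
    def succ(c):
        return chr(ord(c) + 1)

    def infer_rule(src, dst):
        if dst == src[:-1] + succ(src[-1]):
            return lambda s: s[:-1] + succ(s[-1])
        raise ValueError("rule outside this toy's tiny repertoire")

    rule = infer_rule("abc", "abd")
    print(rule("ijk"))   # -> "ijl"
    print(rule("xyz"))   # -> "xy{": rigidity fails, slippage is needed

The interesting cases, like xyz (where "successor of z" forces a creative reinterpretation), are exactly where this kind of rigid toy breaks and Copycat shines.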
Norvig and co. are like drunk men searching for their lost key under a streetlight. It might not be where it lies, but that's the only place where they think they could find something, or at least make some tangible progress. Hofstadter doesn't mind taking the long shot... feeling his way about in the dark, in the hope of inching forward and making progress towards artificial intelligence.
This comparison between complementary approaches is an apt analogy for most fields, where the focus shifts every once in a while, when one of the approaches largely hits a wall and most people switch to the other one. A while later, the trends will almost inevitably reverse and draw inspiration from other approaches. The unfortunate thing is that there's no dialogue between the two camps, which makes it that much harder to port good ideas from one context to the other.
I could provide examples from physics research, or for that matter, trends in static-vs-dynamic blogs :P Also, the more "applied" the field, the shorter these cycles are.
Douglas Hofstadter is important because most AI work right now focuses either on (1) big-data-style statistical analysis or (2) emulating brain anatomy.
DH is the most well known guy of a small, stubborn group of AI developers who still believe that "human thought" can be reasoned about and can be understood in isolation, and that we can build intelligence without simply reducing it to statistics or to brain anatomy.
I applaud his efforts, and find some of the programs he's written both creative and refreshing.
I'm nobody compared to the quoted people, but I use AI in computer vision, and while I love using Bayesian math for parts of it, I've found no substitute for pragmatically programming in knowledge about the system. Which is different from what the article is talking about; I'm aware. I'm just trying to make a system that works, not understand how the brain works. But I view those as somewhat orthogonal problems; my computer is structured differently than a human brain, perhaps my algorithms should be different as well. Certainly I see no reason to view the brain as having the best algorithm - perhaps a very non-human brain type of algorithm would be superior. Evolution worked with what it had, and it is unlikely that the result is optimal.
I think you are absolutely right. Evolution selected for a form of intelligence that was highly calibrated and biased for operating in the wilderness of Nature. Our minds are not necessarily geared correctly for abstract thinking, logic, and rational thought.
Members of what you might call that "stubborn group" have started a yearly conference: Advances in Cognitive Systems. See the top of their FAQ: http://cogsys.org/faq
Nice group but they are definitely swimming against the currently standard approach.
That said, I think one really interesting article here is: "Human-Level Artificial Intelligence Must Be an Extraordinary Science" by Nicholas L. Cassimatis
You cannot not reduce it to statistics; the vast majority of your cognitive activity is statistical in nature, in the more general sense of the word. The actual high-level structured thoughts are a very thin layer on top of a huge statistical information-processing powerhouse.
Yes, everything can theoretically be reduced to statistics. Next time I build a web app, I'll first design it from the ground up on a Turing machine's tape, since all computation can theoretically be reduced to a Turing machine. What's the point of bothering with JavaScript?
You need to use the right level of abstraction for the problem at hand. For modeling intelligence, abstracting away the probabilistic/statistical aspects of cognition greatly limits your options. Continuing the web development analogy, abstracting away statistics in cognitive science is like abstracting away JavaScript/HTML and using WYSIWYG GUI tools. With the same effect of only being applicable in toy problems and not scaling to real world scenarios :)
There's a huge middle ground between reducing cognition to statistics (as you seem to be proposing) and "abstracting away" the statistics. You seem to be accusing any approach that's not "statistics all the way down" of ignoring the statistical nature of the mind, when in fact there are many approaches to cognition which make heavy use of advances in statistical theory without reducing cognition to statistics.
What you're proposing is an overly reductionist ontology, IMO, and yet I agree that statistics are important, if not critical. (Hint: not the boring linear kind of statistics that you so often see in Big Data)
No I'm not arguing for a statistics-only scheme, just that you can't ignore it / abstract it away, and that its role is crucial in cognition. I think we agree with each other.
Well, you said "you cannot not reduce it [AI] to statistics". And I'm saying, not only is it possible to not reduce AI to statistics, it is desirable. It's the reduce part that I have a hard time with.
Possibly poor wording, my intent was to say that you cannot dismiss or underestimate statistics in modeling the mind because the damn thing is a sophisticated statistical information retrieval system (with some higher-level structure on top for doing things like debating mind modeling on HN).
What makes the right level of abstraction right? Seems like the units of abstraction have to be amenable to application (analogy), but rightness is fluid. If you have a turing machine, it might make more sense to write a JS compiler first because of the familiarity and perceived difficulty of translating your thoughts, but that's a statistical process of your own head. If you're a fresh mind, you would never create a JS compiler. The aggregated feedback of our social consciousness over time is what sets the 'rightness' of any approach by modulating the difficulty and expressiveness of each concept to each individual.
What makes it right is purely pragmatic and a function of our abilities and limitations, scientific and computational resources, time constraints, etc. We could in theory run quantum simulations for everything, given the resources, but we don't have them.
No one is arguing that they've found a domain that statistics can't be applied to or gathered from.
What is implied is that statistics should not be chosen as the preferred main engine of consciousness -- because as far as we can tell, consciousness is more than just a gestalt voting among the cells of the host it's housed in.
The issue being discussed is the choice of right level of abstraction and the right tools for modeling a given real-world phenomenon. My comment says that statistics is essential in modeling this particular phenomenon, based on my experience and intuition and that of many experts in the field. Responding with "who cares?" doesn't make sense to me. What part of my statement do you disagree with?
I don't disagree with any of it, and I think perhaps my nonchalant "Who cares?" somehow conveyed to you that I think something stated was false. I don't.
But, why waste time with this?
> You cannot not reduce it to statistics; the vast majority of your cognitive activity is statistical in nature, in the more general sense of the word.
Most here believe statistics are globally applicable, even in domains dealing with the odd, esoteric, or random. With that said...
- lumps Hofstadter into statistical AI > "but Hofstadter is a non-conformist!"
- creates a separate subgroup within which Hofstadter can exist with his experiments
- "but you can't be non-statistical!"
... and on and on. Is this the kind of recursion GEB was talking about?
A lot of us think we know statistics are universally applicable. It's a waste of time to state it, and it adds little to the discussion overall. Sorry that it came off so negatively, either before in my first post or now. I don't mean it that way.
GEB was a considerable waste of time and contributed nothing to my understanding of intelligence or AI. The time would have been better spent elsewhere.
If you want to understand Godel's proofs then I recommend the book "Godel's Proof" by Ernest Nagel and James R. Newman:
GEB's purpose wasn't to provide a comprehensive understanding of Godel's proofs. Nor was it trying to explain AI. It was a very personal book of thinking about thinking, basically. If you aren't a native English speaker then the book might have been less effective.
I own the Nagel and Newman book and probably read it every two years or so.
I also own the FARG book which summarises the work of the Fluid Analogies group. I don't think these papers are as interesting or exhilarating as GEB so I have to disagree with you there.
I don't agree with your dismissal of the work, but this is a very constructive comment on the whole with many interesting references and should not have been downvoted.
I added the references later, so that's the cause of the downvoting.
I didn't really dismiss the book: I read it attentively in its entirety and, as anyone who has read it knows, that is a big book. But in the end I found nothing new or thought-provoking. Entertaining, yes; enlightening, no. "Where's the beef?" came to mind over and over as I moved through the text.
Hofstadter is certainly bright, has a voluminous memory and can be an entertaining writer but GEB is not IMO a contribution to AI. My expectations were undoubtedly too high.
This is a bad article, especially for a technical audience. It romanticizes things a lot, as journalists have to, to keep up the readership rates, but it doesn't make for a very balanced judgement. This kind of debate is going on and on, you can read a much more reasonable account here:
I find the analogy to Einstein at the end of article especially funny. I think it's much more likely that people will look upon current defenders of "good old fashioned AI" like they now do upon people who still looked for ether after Einstein's discoveries.
I think this was an exceptionally good article, because the writer has a far better understanding of Hofstadter's work than most journalists do of science topics (also better than most commenters in this thread, of Hofstadter's work).
In particular, how often do you see a journalist with this much insight?
> When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
100% agreed. GEB is a great book, but Hofstadter hasn't been relevant in this field for a while now (AFAIK). Good old fashioned AI approaches show what a giant blind spot our minds are for our species, how little introspection goes on even within the brightest minds. Modern approaches to AI/ML have yielded tangible results and moved the field in the right direction, dismissing those results makes anyone look ridiculous in 2013.
Pure ML approaches are doomed to end up in a tar pit of AI mysticism. A statistical learning model can and will conclude that the thunder gods make the crops grow. People like Douglas Hofstadter and Eliezer Yudkowsky will have to use classical AI approaches to train our AI children in science and rationality.
A statistical learning model concluding that 'the thunder gods make the crops grow' would not make it a failure. Indeed that's what people thought for thousands of years!
I think you're confusing good AI with being 'smarter than humans'.
An AI can still pass the Turing test if it believes in Scientology or any of the other nonsense beliefs that humans have.
That's fine if you want an AI hunter-gatherer. To have an AI scientist or engineer you need a rational thinking layer on top of the ML layer, so it can create hypotheses and try to falsify them. I predict the rational thinking layer will be built with good old fashioned AI: symbol processing, theorem proving, etc. Getting this to talk to the fuzzy ML layer will be a challenge.
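A sketch of what I mean by that symbolic layer, using my own made-up predicates: a forward-chaining rule engine whose input facts could come from the fuzzy ML layer below, and which can reject the thunder-gods hypothesis the moment an intervention fails.

    # Toy forward-chaining "rational layer". Facts (which could arrive
    # from an ML layer) are strings; rules fire until a fixed point.
    rules = [
        ({"correlates(thunder, harvest)", "precedes(thunder, harvest)"},
         "candidate_cause(thunder, harvest)"),
        ({"candidate_cause(thunder, harvest)",
          "fails_intervention(thunder, harvest)"},
         "not_cause(thunder, harvest)"),
    ]
    facts = {"correlates(thunder, harvest)", "precedes(thunder, harvest)",
             "fails_intervention(thunder, harvest)"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print("not_cause(thunder, harvest)" in facts)   # True: hypothesis rejected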
My uninformed guess is that they mean a statistical AI system might come to conclusions similar to humans', such as concluding that thunder gods exist, while a non-statistical type (one with just "reason") would not come to that conclusion about thunder gods, and as such would not model humans as accurately.
Simple models of correlation and causation often make bizarre predictions, like the famous conclusion that pirates cause global warming. A statistical learning system cannot step outside its own predictions to fix such problems.
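The pirates example is easy to reproduce: any two series that merely trend over time correlate strongly, and a model that only sees correlation has no way to step outside and object. A quick demonstration with made-up numbers:

    import numpy as np

    # Declining pirate counts vs. rising temperatures: no causal link,
    # yet the shared time trend produces a near-perfect correlation.
    rng = np.random.default_rng(2)
    years = np.arange(50)
    pirates = 10000 - 150 * years + rng.normal(0, 200, 50)
    temperature = 14.0 + 0.02 * years + rng.normal(0, 0.1, 50)
    print(np.corrcoef(pirates, temperature)[0, 1])   # strongly negative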
I wouldn't call the article "bad", but I do think it loses points for furthering a "feud" which just need not exist. Norvig-style AI is incredibly useful in the here and now to solve real-world problems. Hofstadter-style AI is driven by a deep desire to exorcize one of the few remaining "ghosts" in our otherwise natural world. It is an attempt to REMOVE the "ether" of thought stuff, not to hide in it.
Perhaps a terminology change is in order? I would call what people like Norvig and Hinton and Ng do "machine learning", while I would probably use "artificial sentience" to describe the Hofstadter-Dennet camp, as that seems to more closely capture their ultimate goal.
The world needs both approaches, and hopefully the time will come when the gap is bridged.
Loved this quote: "...the trillion-dollar question: Will the approach undergirding AI today—an approach that borrows little from the mind, that’s grounded instead in big data and big engineering—get us to where we want to go? How do you make a search engine that understands if you don’t know how you understand?...AI has become too much like the man who tries to get to the moon by climbing a tree: 'One can report steady progress, all the way to the top of the tree.'"
Me too. I wonder what Hofstadter's response to it is.
Edit: Googling, it seems that Hofstadter's response is along the lines of Haugeland. That by describing the translator as a man, we are improperly being asked to identify with him, when in the actual metaphor, the man is only an implementer in a larger system. The larger system actually does understand Chinese. So the claim is that the Chinese Room thought exercise is actually a fallacy.
OK, so here's an attempt to clear up the feud. As I see it, what Hofstadter wants is an anti-gravity elevator. The "modern" (aka practical) AI approach is ladders, stairs... and eventually mechanical elevators. Now of course, progress along the "practical" approach will NEVER lead to an anti-gravity elevator, as the fundamental principles are completely different. But they get the job done.
See, that's the point: as incredibly awesome and useful as the anti-gravity elevator might be, mankind can't wait around for someone to invent it just to raise stuff or travel in the vertical dimension. And hence all our modern AI systems (including Google, Siri, robots, warehouse management systems, etc.) are powered by this approach.
So should we scrap stairs and elevators in pursuit of anti-gravity? Certainly not; we NEED them right NOW. But does this mean we should stop dreaming about, and working towards, anti-gravity? HELL NO!! We need that too.
And hence, as much as I LOVE Hofstadter (I have had the same approach to AI ever since I was a kid), I still have a very PROFOUND respect for modern approaches, because they help me create some functionally amazing software.
Considering the large and growing bank of research that highlights areas where the brain's output is flawed or plain wrong when compared to the consensus "optimal solution", I think I'm with the "AI establishment" as it's painted by this article. It doesn't seem self-evident to me that the inner workings of the human mind are the only or even optimal implementation of intelligence for every task.
If anything, the human mind seems to me to be a particular algorithm that is flexible, but trades that flexibility for capability in certain problem areas. Using a transportation metaphor, it's like walking versus air travel. Walking is incredibly flexible when it comes to where you can go, but air travel is by far the optimal route to get from coast-to-coast, although you are limited to travelling between airstrips. I feel like focusing on the human brain as the "true" intelligence is like claiming that walking is the only true transportation, instead of focusing on optimal routes for each problem.
Maybe the article doesn't get it across, but the anti-establishment crowd doesn't think that doing things the "human way" is always the best solution. In fact, they acknowledge that it's often, even usually, not. Hofstadter does not want to replace (e.g.) GPS AI with human-style intelligence, or, even more ridiculous, replace the AI that flies airplanes with one that gets bored and does crossword puzzles instead of paying attention. Instead he wants to understand human-style intelligence, because that opens doors to tackling really interesting scientific/philosophical problems like personality, self, consciousness, autonomy, etc.
In short, the camps have different goals. One is trying to solve problems optimally. The other is trying to understand what intelligence means, biologically. AI as a sub-discipline of Engineering, vs. AI as a sub-discipline of Cognitive Science, perhaps.
> replace the AI that flies airplanes with one that gets bored and does crossword puzzles
This would be a great scenario for a sci-fi novel / movie. An AI that threatens the human race not by achieving sentience and attempting world domination, but by achieving sentience and playing Sudoku instead of doing its job...
I think a big part of the problem with AI is that you are trying to map a digital model onto an analog system. There was a story on HN last year (I can't seem to find it) about using a genetic algorithm on analog circuits to evolve optimal pattern matching for certain images. The results were good, but when they went to build another one it didn't work right, because of unmeasured EM feedback and subtle differences between individual circuits; every circuit would have to run its own evolution, negating most of the usefulness of the project. Maybe an analog model would be more appropriate.
> When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
I cannot recommend "Creative Analogies" more. I have purchased no less than four copies (two for myself; two for others, including K. Barry Sharpless, who once made a remark about AI that was reminiscent of some of the ideas in CA) over the years. It's even better than "Surfaces".
I'm not convinced that Hofstadter is pursuing computers that think like humans, so much as computers that appear to think like humans. He abstracts certain observable behaviors of the human mind (e.g. analogy), but there's no guarantee that what a brain can observe about itself is what a brain is actually doing. Does it make sense to ignore the underlying behavior of human brains, and instead try to directly emulate a particular abstraction? We can't let our romantic notions of what brains "do" get in the way.
I agree with your point, but the solution isn't to instead model the underlying biological nuts and bolts (assuming this is what you mean by "underlying behavior of human brains"). Even if those approaches approximated human behavior (which they don't), it wouldn't be all that informative. Just as creating "artificial" intelligence by growing a brain in a petri dish wouldn't be informative either. (Well, it wouldn't be informative as to what the brain's job description is, although we could presumably learn plenty of other things from the exercise.)
The better solution is to come up with creative experiments (with human subjects) that can arbitrate between competing implementations. With modern technologies like virtual reality, you can create almost any counterfactual situation and use that to decide between theories that make the same predictions "in the real world".
Well you might say Wundt was the father of structuralism and James the father of functionalism - a deep divide which is still felt today. I suppose this shows the author's bias, implicit or not.