
There are more divisions than the article can get into, of course. I think the main one is between AI to solve engineering problems and AI to better understand the mind. The article is about doing the latter when everyone else's definition of AI is the former.

What you're talking about, I think, is various approaches within the latter group of researchers. I defended Hofstadter in another reply because I find his goals worthwhile in and of themselves - in a "basic research" sense. Discarding anything that's not an optimal solution - the attitude taken by a couple of responses here - ignores a whole lot of interesting science, and as a scientist it bothers me quite a bit.

That said, once we're talking about the goal of understanding the human mind, GOFAI is, to be sure, incredibly old-fashioned, and you won't find me defending Hofstadter's approach. His goal is a worthwhile one, but you're right that his approach shouldn't really be considered an 'underdog'.

Personally, I think the best hope for understanding what intelligence is, in a general sense, comes from non-equilibrium thermodynamics, as in the sort of research going on here: http://www.tandfonline.com/toc/heco20/24/1#.UmlRN_mfihM

But that's a can of worms for another post.

As an aside, regarding your last comment, I completely disagree with your view of philosophy. It may not have produced results, but it has guided science. Still, I agree with the thrust of your sentiment: an AI researcher whose goal is understanding the human mind should spend as much time studying humans (as in, doing Psych or Cognitive Science experiments) as programming AIs.
