It's fascinating stuff going on there at Numenta.
Thanks for submitting this!
People have had huge expectations of AI research for decades, and the unmet hype gave the field a bad image. But something about that is wrong, because every step toward understanding the brain has brought all of us a step forward.
We know nature is brilliant, so it's only logical that it can't be copied by a few scientists over a weekend. Call it the great AI depression; we'll get over it. I think the whole disappointment spiral led to the AI community being dismissed as "wadda wadda wadda".
To me it's like a prehistoric human finding laser technology in a crashed UFO and not knowing what to do with it, and ending up disappointed in laser technology. So instead of whining about how slowly we're learning about ourselves, the human brain, and the mind, we should invest more in education. The more people who can research this topic, the faster we'll get amazing results.
When research in a field doesn't yield promising results, it doesn't mean the researchers are incompetent; it means we don't yet have enough education, or enough researchers, to complete the puzzle.
Inventing/Discovering Strong AI will be one of the greatest achievements of all time in my opinion. It will change our world in ways unimaginable. Our standard of living will go up for everybody, even the poorest nations. I just hope that we don't manage to kill ourselves with that technology.
How could we kill ourselves with such technology? Simple: the government will probably weaponize AI and build robots with it. An agent somewhere will issue a command to "eliminate such and such", and because of a bug in its system the robot will understand it as "eliminate humanity". Through bad luck, the safeguards meant to prevent this also fail, i.e. a cascading failure like the ones that bring down airplanes.
Upon receiving its orders, the first thing the robot does is go into hiding, replicating itself as much as possible while remaining undetected. Several decades later, all of its descendants detonate the world's entire nuclear arsenal along with the new weapons they've built. Mission accomplished. (I think this is the plot for my first SciFi book!)
I was first exposed to HTMs through Jeff Hawkins' book "On Intelligence". I found the explanatory power behind HTMs comparable to what the theory of evolution could do. That got me both excited and cautious ... or perhaps "cautiously optimistic" is the right phrase. Glad to see Numenta making strides here in a way that keeps them anchored in today's problems, but the announcement page reads like a business "story" and I imagine some Numenta engineers groaning behind the scenes :)
Alan Kay said that he would be the "first and the loudest to applaud" when HTMs show significant results. It doesn't mean as much coming from me, but I would certainly compete with him for that position :)
AI, well naturally, is a very hard problem and imho will require powerful and original thinking. So it is fabulous to have someone pushing us (well, brain scientists in this case) to see the forest for the trees.
Last year Dr. Davide Maltoni published a paper, "Pattern Recognition by Hierarchical Temporal Memory" [1], benchmarking the algorithms laid out in Dileep George's original PhD thesis. The results were pretty impressive. There have been many improvements to the algorithm since then, too.
Read the paper; HTMs don't seem to do better than other object recognition algorithms at recognizing shapes, especially because there are visual properties they ignore (curvature, global topological properties, etc.). The accuracy on the picture datasets is only 60-70%. What's interesting about HTM is its generality. I can't judge whether it would be good for the Grok prediction engine, but I know more about image recognition, and you definitely don't want to use it for that.
Jeff Hawkins, who also founded Palm, is a cofounder of Numenta. He started the company after writing 'On Intelligence'. He does some great lectures that are on YouTube.
This seems really cool, I signed up for the beta. I hope I am selected.
Some ML/AI researchers I know consider Numenta to be something of an "outsider artist" effort, and they hold its research in fairly low regard.
Now, you don't have to upvote me for this, but you don't have to downvote me either, because I happen to know experts in ML/AI who disagree with Jeff Hawkins. Got that? Downvoting me doesn't change those people's opinion. I'm just relating what I've been told by others more qualified and knowledgeable in an area where I am not an expert. I have to say all this because people seem to give Numenta very dogmatic approval, and anyone who questions it on discussion boards gets flamed in an almost religious manner.
Edit: Hey, look, a downvote.
So, is Numenta genuinely really novel and new or is it snake oil? Or is it somewhere in between? Does it actually work better than what we have now?
I'd love to read a substantial critique of Hawkins.
But I'm downvoting you for a post with nothing but hearsay, appeals to authority and arguing with your downvotes...
And no, I don't care what your important friends think of Hawkins. I'd care if they wrote something substantial I could read but otherwise, hey, get off my lawn...
All I did was ask -- ASK -- whether Numenta actually works.
So does it work? Better than what we have? Is it really novel?
ps: You're failing to make a key distinction between argument from authority and the fallacious argument from authority. You can safely argue from authority when the person is a genuine expert and a consensus of experts are all saying the same thing. Someone counts as an expert if most of what they say on a topic holds up virtually all the time. A Southern Baptist can argue from authority about biblical interpretation, given that they should know Greek/Hebrew/Latin, ancient Mediterranean history, etc. They can't argue from authority about evolutionary biology if they don't know anything about it.
The reason I posted here is because I don't talk to my ML/AI friends often, I don't have many of them, and they haven't explained why they think what they do.
I don't understand why it takes so much effort to get a simple bit of proof. Surely there's a corpus of data with performance metrics that could conclusively demonstrate it one way or the other?
I don't think "snake oil" is the right paradigm here. In ML/AI, lots of honest researchers are wrong; being a scientist who's wrong doesn't make you a criminal.
That said: Hawkins' principles are very different both from what the brain does and from what the state of the art in machine learning does. My impression is that HTMs attempt to be too general and assume too little about the problem.
For vision in particular, most successful computer vision algorithms (as well as what we know about the visual cortex's mechanisms) make extensive use of information related to the fact that the image is an image. That is: edges are probably more likely to be continuous than broken; locally constant curvature is more likely than not; textures and colors usually continue over the surface of an object; objects occlude other objects; etc. Brains and effective computer vision algorithms hard-code a lot of information about the nature of the problem they're solving. Hawkins wants to bypass that, and I think it's probably too ambitious an aspiration.
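To make the "hard-coded prior" point concrete, here's a minimal sketch of my own (not anything from Numenta's or any vision library's actual code) showing how a classic vision pipeline bakes an assumption into its very structure: features are computed only from small local patches of pixels, which hard-codes the prior that nearby pixels carry related information, and an edge-detector kernel additionally encodes the prior that edges are local, oriented intensity changes.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Naive 2D 'valid' convolution: every output value is computed from a
    small local patch, hard-coding the locality assumption described above."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-like vertical-edge kernel: its shape *is* a prior about what
# kind of structure (local intensity changes) matters in images.
edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

# A toy image with a sharp vertical boundary: left half dark, right bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

response = convolve2d_valid(img, edge_kernel)
# The response is largest near the boundary and zero in the flat regions.
```

A general-purpose learner with no such built-in assumptions would have to discover locality and edge structure from data, which is part of why hard-coded priors buy so much in practice.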
Then again, if he makes it, more power to him.
I don't think we should be prejudiced against someone who comes from the tech industry and wrote a popular book. It's certainly not "snake oil" -- it seems to be a good-faith attempt to solve an important problem. I think the odds are against it working, but that's not a moral condemnation.
Hawkins' reasoning struck me as very misguided. I'm not an expert in the field and I'm not up to speed on what Numenta's current form is, but his concept of creating half an intelligence has always offended my sensibilities.
I wanted to say something useful here but I don't know enough about what Numenta's actually doing. I know much of what is not being done, and it basically amounts to willful ignorance in my opinion.
Edit: let me revise this, since I'm being voted down with prejudice anyway. Hawkins has stated very clearly that the only valuable part of the brain to ML/AI research is the neocortex. He's plainly wrong; it should be easy enough to see that the neocortex is the final touch on the evolution of our brains, the least important part in a survival sense. More directly: without a motivating force, there is no opportunity for an intelligent system.
My gut reaction is that the whole thing is a fraud, but I have up to now refrained from using such terms because I'd rather not argue from a position of ignorance (there are things I don't know about Numenta). On the other hand, it's obvious this subject brings out the ignorant, so I'd like to take the opportunity to address those people: you are deluding yourselves; just because you want to believe something doesn't make it true. (That goes double for you, Jeff.)
I don't think Hawkins is necessarily saying the neocortex is the only part of the brain that's interesting to understand.
I think the main argument is that there is reason to believe in a single "neocortical algorithm", that it is relatively simple compared to the massive complexity of the brain as a whole, and that if you can understand this algorithm there will be massive payoffs.
That argument too might be misguided, but it seems like a pretty focused path from brain to artificial intelligence (a problem where one would expect a simplifying approach to be quite necessary).
I'm more doubtful of his idea that the neocortex is primarily about prediction. I suspect that's only one aspect.
> ... people seem to give Numenta a very dogmatic approval and anyone who seems to question it on discussion boards gets flamed in a religious manner.
I wonder why that is? Is it that Numenta is doing good nerd-marketing? It appeals to the technical crowd (not AI experts, but people who love technology and programming). I think among that crowd (and I am one of them) there is the hope that maybe the AI revolution is finally here, that this one approach will take us "over the edge" into some kind of singularity. Over the years there have always been seemingly "revolutionary" approaches that made it feel like a machine passing the Turing Test was just around the corner. Wanting to have this hope, and wanting to dream, is what keeps these kinds of stories going.
I am no AI expert, but just judging from what I know, if I had to guess I would say this is mostly marketing. It seems to me AI is not a field where an "outsider" shows up, gives a TED talk, writes a book, and now we have a revolution. In a way this smells like a http://en.wikipedia.org/wiki/A_New_Kind_of_Science type of situation. Maybe I am just jaded and skeptical, and it's a good thing I am not involved in any such research (I would have given up and nothing would have been discovered), so I am glad people keep working on the problem; I just don't think this is "it" yet.
What you have to factor in here is that people who consider themselves experts will often take a dim view of something new they don't understand, particularly if the work was done outside the system they're part of.
(A quote comes to mind: "The bomb will never go off, and I speak as an expert in explosives." -- Admiral William Leahy to Harry Truman, concerning the atomic bomb.)
Numenta's technology is definitely novel. How well it will work remains to be seen, but I think it's a very interesting experiment.
It uses a different algorithm. Google Predict likely uses a battery of several types of supervised learning algorithms and lets them "vote". Grok uses their Hierarchical Temporal Memory algorithm.
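For what it's worth, here's a minimal sketch of the "battery of algorithms that vote" idea, a generic majority-vote ensemble. This is purely my illustration of that general technique; nothing here reflects Google Predict's actual internals.

```python
from collections import Counter

def majority_vote(models, x):
    """Ask each trained model for a prediction and return the most common
    answer. 'models' is any list of callables mapping input -> label."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy "models" (hypothetical spam classifiers) that can disagree:
models = [
    lambda text: "spam" if "win" in text else "ham",
    lambda text: "spam" if "$$$" in text else "ham",
    lambda text: "ham",  # a very conservative classifier
]

print(majority_vote(models, "win $$$ now"))    # two of three vote spam
print(majority_vote(models, "lunch at noon"))  # all three vote ham
```

The appeal of voting is that independent errors tend to cancel, whereas HTM bets on a single general learning algorithm instead of an ensemble of specialized ones.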
So you're saying Google is using a battery of hidden Markov models with classical or slightly novel algorithms?
I could believe that, given the quality of the Google Predict results. Plus, Google buying http://recordedfuture.com undercuts any suggestion that their prediction algorithms aren't top notch (to put it mildly).
It seems like HTMs would be prone to the same over-fitting problems that other types of neural networks have. I tried to find some comments in the Numenta material about this, but didn't see anything about Grok's strategy to avoid over-fitting. Can anyone help me figure this out?
I breezed through the PDF on the HTM Cortical Learning Algorithms and there's nothing new since the 80s -- over-fitting is still a problem. Perhaps there's an implicit assumption that because learning is online, over-fitting doesn't happen or isn't important.
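For readers unfamiliar with the term: over-fitting is when a model memorizes its training data instead of learning the underlying structure, and the standard way to detect it (in any framework, HTM or otherwise) is to hold out data the model never trains on and watch the gap between training and held-out error. A toy sketch of that diagnostic, using polynomial fits as the stand-in for "flexible model":

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy linear data: the true relationship is y = 2x plus noise,
# so a degree-1 fit already captures all the real structure.
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(0, 0.2, size=x.shape)

# Hold out every other point: the model never sees the test set.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def mse(coeffs, xs, ys):
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

train_err, test_err = {}, {}
for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err[degree] = mse(coeffs, x_train, y_train)
    test_err[degree] = mse(coeffs, x_test, y_test)

# The flexible degree-9 model always matches the training data at least
# as well as the simple one, but the held-out error tells the real story:
# a large train/test gap is the signature of over-fitting.
print(train_err, test_err)
```

Online learning changes when you measure this gap (continuously, on incoming data) but doesn't make it go away, which is why the silence on this point in the HTM material is worth noting.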
If you're interested in learning more, Jeff Hawkins will be doing a keynote this year at Strange Loop talking about it in more depth. http://thestrangeloop.com/sessions
Before consuming popular information: http://en.wikipedia.org/wiki/Harold_Lasswell
At the time the Numenta stuff got popular, the critics also got a voice: http://www.acceleratingfuture.com/michael/blog/2010/04/ben-g... and http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does...
PS: enjoy the earworm: http://www.musick8.com/mclips/34wadda.mp3