
This is probably the most important direction in modern neural network research.

Neural networks are great at pattern recognition. Things like LSTMs allow pattern recognition through time, so they can develop "memories". This is useful in things like understanding text (the meaning of one word often depends on the previous few words).
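
To make that concrete, here's a minimal sketch in PyTorch (the library choice and all the sizes are my own illustration, not anything from the article): the LSTM carries a hidden state forward through the sequence, so its representation of each word can depend on the words that came before it.

    import torch
    import torch.nn as nn

    vocab_size, embed_dim, hidden_dim = 1000, 32, 64

    embed = nn.Embedding(vocab_size, embed_dim)
    lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    tokens = torch.randint(0, vocab_size, (1, 10))  # one sentence of 10 word ids
    outputs, (h_n, c_n) = lstm(embed(tokens))

    # outputs[:, t, :] is the state after reading word t -- the network's
    # running "memory" of everything seen so far in the sentence.
    print(outputs.shape)  # torch.Size([1, 10, 64])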

But how can a neural network know "facts"?

Humans have things like books, or the ability to ask others about things they don't know. How would we build something analogous for neural-network-powered "AIs"?

There's been a strand of research mostly growing out of Jason Weston's Memory Networks work[1]. This paper builds on that with a new form of memory, and shows that it can handle some pretty difficult tasks, including graph tasks like London Underground traversal.

One good quote showing how well it works:

In this case, the best LSTM network we found in an extensive hyper-parameter search failed to complete the first level of its training curriculum of even the easiest task (traversal), reaching an average of only 37% accuracy after almost two million training examples; DNCs reached an average of 98.8% accuracy on the final lesson of the same curriculum after around one million training examples.
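
To give a rough mechanical picture of what an external memory read looks like in this family of models, here's a toy NumPy version of content-based addressing (a simplification of the general idea, not code from the paper): the controller emits a key vector, the key is compared against every memory row by cosine similarity, and a softmax-weighted blend of the rows is read back. Because the read is a soft weighted sum, it stays differentiable and can be trained end to end.

    import numpy as np

    def content_read(memory, key, beta=1.0):
        """Compare `key` against every memory row and read back a
        softmax-weighted blend of the rows (beta = key strength)."""
        sims = memory @ key / (np.linalg.norm(memory, axis=1)
                               * np.linalg.norm(key) + 1e-8)
        scores = beta * sims
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()      # a soft, differentiable "address"
        return weights @ memory       # blended read vector

    memory = np.random.randn(128, 20)  # 128 slots of 20-dim vectors
    key = np.random.randn(20)          # would come from the controller net
    print(content_read(memory, key).shape)  # (20,)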

[1] https://arxiv.org/pdf/1410.3916v11.pdf




Thank you, I think I understand this now. So now we can train a model that doesn't have to store everything it learns in its weights alone.

Would this be an apt metaphor: LSTMs were like a student who had to memorize how to take the test and how to do the problems, while a DNC can learn how to take the test but gets to look at its notes.


If it succeeds and scales, it seems very close to AGI, right?


Nowhere near it. It's so far away that it's almost completely nonsensical to talk about.

I guess it is unlikely that one could have an AGI without some kind of memory, so there is that.


What further key skills will AGI need?


In general, an AGI would be based on a reinforcement learning framework. Its main skill would be to observe the world, judge the situation, and perform actions, running these three processes in a continuous loop. It would learn behavior from a reward signal, and it would have to be embedded in a world it can move about in and act upon. If it has all these ingredients, it can become a general intelligence, as long as the reward signal leads it there.
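
Schematically (Agent and Environment here are hypothetical stand-ins, not any real library's API), that loop looks something like this:

    class Agent:
        def act(self, observation):
            return 0  # placeholder policy: always pick action 0

        def learn(self, observation, action, reward, next_observation):
            pass      # update behavior from the reward signal

    class Environment:
        def reset(self):
            return 0.0              # initial observation

        def step(self, action):
            return 0.0, 1.0, False  # next observation, reward, done

    agent, env = Agent(), Environment()
    obs = env.reset()
    for _ in range(1000):                             # the continuous loop
        action = agent.act(obs)                       # act
        next_obs, reward, done = env.step(action)     # observe
        agent.learn(obs, action, reward, next_obs)    # judge / update
        obs = env.reset() if done else next_obs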

Memorizing is just one of the actions such an agent is able to perform. Another mental action would be attention. It would also need to be able to simulate the world, people, and systems it is interacting with (to know how they behave) in order to reason and plan.
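
As a toy illustration of the attention part (every name and shape here is made up for the example): score a set of candidate vectors against a query, softmax the scores, and read back a weighted blend, much like the memory read sketched above.

    import numpy as np

    def soft_attention(query, keys, values):
        scores = keys @ query / np.sqrt(len(query))  # similarity per item
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                     # softmax over items
        return weights @ values                      # attention-weighted read

    keys = np.random.randn(8, 16)    # 8 candidate items
    values = np.random.randn(8, 16)  # what each item contributes if attended
    query = np.random.randn(16)
    print(soft_attention(query, keys, values).shape)  # (16,)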

In short, an AGI would need: sensing (deep neural nets for vision, audio and other modalities), attention, memory, estimating the desirability and effects of various actions (a kind of imagination), an extensive database of common known facts, and the ability to act (for example by speech and movement).

Many of these systems have been demonstrated. Sensing, attention, and memory are commonplace in ML papers. Creativity has been demonstrated in generative models that can write text, compose music, and paint. The ability to predict the future and reason about it was demonstrated in AlphaGo. Speech and motor control are under development. We have most of the necessary blocks, but nobody has put them together to form a functioning general AI yet.


That depends on a functional definition of AGI.

My preferred one is "An AGI is one which knows which are sensible questions to ask".

That's because it seems to me that most "AI-lite"-type goals are procedural. AGI needs to have agency.



