I'm probably totally off base here (neural networks/AI is not my wheelhouse), but is having "memory" in neural networks a new thing? Isn't this just a different application of a more typical 'feedback loop' in the network?
You're correct in a way: you can think of neural nets as "remembering" the data set they're trained on. Recurrent neural nets even explicitly have a "feedback loop" like the one you're referring to, which allows them to "remember" previous samples. An example is natural language processing, where you want to remember the previous words in a sentence to interpret the current word.
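To make that feedback loop concrete, here's a minimal vanilla-RNN step in numpy. All the names and sizes are illustrative, not from any particular library; the point is just that the hidden state feeds back into the next step:

```python
import numpy as np

# Minimal vanilla RNN cell: the hidden state h *is* the feedback loop.
rng = np.random.default_rng(0)
input_size, hidden_size = 8, 16

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the loop)
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One timestep: the new state mixes the current input with the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Feed a short "sentence" of word vectors through the loop.
h = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):  # 5 stand-in word embeddings
    h = rnn_step(x, h)
# h now summarizes everything seen so far -- that's the short-term memory.
```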
Remembering the previous words in a sentence you're currently reading is more like short-term memory, though, and this paper is talking about long-term memories stored as data structures outside the neural net itself. This graphic from the DeepMind blog post might be helpful: https://i.imgur.com/KwXXCge.png.
The "memory" in a typical recurrent neural network is akin to a human's short term working memory. It only holds a few things and forgets old things quickly as new things come in. This new memory can hold a large number of things and stores them for an unlimited amount of time, more like a human's long term memory or a computer's RAM.
An earlier form of DNC, the neural Turing machine [16], had a similar structure, but more limited memory access methods (see Methods for further discussion).