That’s what I thought; it sounds much like the great stuff coming out of Chris Buckley’s group.
The authors even refer to that work in the abstract. The “breakthrough” seems to consist of showing that the method produces the same results as vanilla backprop.
Unfortunately, that is often what it takes to win grants and get considered for the best journals. But yes, it should draw scepticism. Then again, scepticism is always part of the game in science anyway.
Predictive coding is based on a theory of brain operation grounded in the ubiquity of recurrent connections between functional brain regions, where the forward pass transmits the prediction and the backward pass transmits the error. Unlike backprop, this scheme uses only local information, and is therefore somewhat more compatible with how the brain works.
There is a huge body of work on this. The current paper seems to show that the gradients computed by predictive coding are equivalent to those computed by backprop.
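To make that equivalence concrete, here is a toy numerical sketch, not taken from the paper: a two-layer network where the output node is clamped to the target, a hidden "value node" relaxes by purely local updates, and the resulting local weight gradients are compared against ordinary backprop. It uses the "fixed prediction" simplification (predictions held at their feedforward values during inference), under which the match is exact; all shapes, step sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: hidden h = tanh(W1 x), output y = W2 h (illustrative sizes).
W1 = rng.standard_normal((5, 4)) * 0.5
W2 = rng.standard_normal((3, 5)) * 0.5
x0 = rng.standard_normal((4, 1))
target = rng.standard_normal((3, 1))

# Feedforward pass.
h_ff = np.tanh(W1 @ x0)
y_ff = W2 @ h_ff

# --- Backprop gradients of L = ||target - y||^2 / 2 ---
dy = y_ff - target                      # dL/dy
gW2_bp = dy @ h_ff.T
gW1_bp = ((W2.T @ dy) * (1 - h_ff**2)) @ x0.T

# --- Predictive coding (fixed-prediction variant) ---
# Output clamped to the target; predictions held at feedforward values,
# so the output error e2 stays fixed while the hidden value node x1 relaxes.
e2 = target - y_ff
x1 = h_ff.copy()                        # init the value node at its prediction
for _ in range(200):                    # purely local inference updates
    e1 = x1 - h_ff                      # hidden-layer prediction error
    x1 = x1 - 0.2 * (e1 - W2.T @ e2)    # descend the energy in x1

# Local Hebbian-style weight gradients: error times presynaptic activity.
e1 = x1 - h_ff
gW2_pc = -e2 @ h_ff.T
gW1_pc = -(e1 * (1 - h_ff**2)) @ x0.T

print(np.allclose(gW2_pc, gW2_bp), np.allclose(gW1_pc, gW1_bp))  # True True
```

Note that each update touches only quantities available at that layer (its own error and the error one layer up), which is the locality point made above; backprop, by contrast, needs the full chain of downstream derivatives at once.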
I think it says more about predictive coding as a way to train neural networks than about biological systems.
There's been an argument that schemes relying on backpropagation can't provide insight into biological neural systems, but the existence of predictive-coding equivalents weakens that argument.