"Inspired by biology" is typically a better way to think about it. Airplane wings were inspired by birds' wings, and they share some structural similarities, but in practice they operate on very different principles.
I would even say that this is somewhat revisionist history. From my perspective, this all started from an attempt by Kolmogorov to solve Hilbert's 13th problem:
Kolmogorov authored a paper titled "On Representation of Continuous Functions of Several Variables by Superpositions of Continuous Functions of Smaller Number of Variables," which basically solved this in 1961. This led to a nice back-and-forth series of papers between Kolmogorov and Arnold, but the one that became most important is Kolmogorov's paper, "On the Representation of Continuous Functions of Many Variables by Superposition of Continuous Functions of One Variable and Addition," in 1963. What this paper proves is that any continuous function defined on the n-dimensional unit cube can be represented as a superposition of 2n+1 sums of one-dimensional continuous functions:
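For reference, the representation the theorem guarantees looks like this (my restatement in modern notation, not a quote from the paper):

```latex
f(x_1, \ldots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

where the inner functions \(\phi_{q,p}\) are continuous and can be chosen independently of \(f\); only the outer functions \(\Phi_q\) depend on the particular \(f\) being represented.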
Now, the problem with this theorem is that it doesn't say how to find these magical functions. However, in 1989 Cybenko published the paper, "Approximation by Superpositions of a Sigmoidal Function," which both extends and weakens the above result. Basically, he gives up Kolmogorov's exact bound, but gives a way to construct these functions by using a linear projection inside a superposition of sigmoids. This led to the universal approximation theorem:
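As I recall the statement, Cybenko shows that finite sums of the form

```latex
G(x) \;=\; \sum_{j=1}^{N} \alpha_j \, \sigma\!\left( w_j^{\top} x + \theta_j \right)
```

are dense in \(C([0,1]^n)\) whenever \(\sigma\) is a continuous sigmoidal function, i.e., any continuous function on the unit cube can be uniformly approximated to arbitrary accuracy by taking \(N\) large enough. Note the trade: the exact finite representation is gone, replaced by approximation with an unbounded number of terms.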
and, I would contend, the underpinnings of modern neural net models. Now, is there any biology in there? No. It's a long series of function approximation papers. That said, I don't know what actually inspired the authors to write them. However, given that we have a documented history of dry function approximation papers that supply the mathematical power needed to justify these models, I tend to feel that the biological connections are oversold.
That timeline seems to miss that Yann LeCun was already working on ConvNets in 1988. I don't think anyone waited for the universal approximation theorem to start building neural architectures; it was just a tangentially interesting mathematical result.
Which paper are you speaking about? Certainly, I'm always interested in a more complete history. I'm currently on LeCun's page and can't figure out which paper you're speaking to:
More generally, a common trope in NN papers and books is to draw a graph of a matrix-vector multiplication and then claim that the nodes are like neurons in the brain and the edges represent their connectivity. This is an example of the kind of retrofitted biological analogy that frustrates me. Again, certainly, I don't know the motivations of everyone in the field, but I do contend that many of the more powerful theorems have nothing to do with biology and have other origins.
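To make the point concrete, the usual "layer of neurons" diagram depicts nothing more than the computation below (a minimal sketch in pure Python; the weights, shapes, and choice of sigmoid are illustrative, not taken from any particular paper):

```python
import math

def sigmoid(z):
    # A sigmoidal nonlinearity: tends to 1 as z -> +inf, to 0 as z -> -inf.
    return 1.0 / (1.0 + math.exp(-z))

def layer(W, b, x):
    # One "layer of neurons" is just a matrix-vector product plus a bias,
    # passed elementwise through a sigmoid -- no biology required.
    return [sigmoid(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# A 2-input, 3-unit "layer": every arrow in the neuron diagram
# corresponds to exactly one multiplication in the sum above.
W = [[ 0.5,  -1.0],
     [ 2.0,   0.25],
     [-0.75,  1.5]]
b = [0.0, -1.0, 0.5]
x = [1.0, 2.0]

print(layer(W, b, x))
```

Every box-and-arrow picture of a feedforward net is a composition of calls to `layer`; the biological framing adds nothing to the math.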