
Testing the analytical methods of a field against engineered artefacts is a good idea, but there is a fatal flaw here: devices that do a fetch-decode-execute-retire loop against a register file and a memory bus have perversely little in common with what neurobiology is concerned with. A more appropriate artefact would be a CPU together with the program in its memory, analysed as one system (where NOP'ing out code or flipping flags corresponds to "lesioning"), or even better, an FPGA design (where different functions work in parallel in different locations on the silicon, much like brains).
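
To make the software-level "lesioning" concrete, here's a toy sketch in Python (the three-instruction machine and the program are invented for illustration): NOP out one instruction at a time and record how the observable behaviour changes, which is roughly the experiment a lesion study performs.

    # Toy "lesion study": replace each instruction with NOP in turn
    # and observe how the machine's visible output changes.
    PROGRAM = [("LOAD", 2), ("ADD", 3), ("ADD", 5), ("PRINT", None)]

    def run(program):
        acc, output = 0, []
        for op, arg in program:
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "PRINT":
                output.append(acc)
            # "NOP" (or anything unrecognised) does nothing
        return output

    baseline = run(PROGRAM)
    for i in range(len(PROGRAM)):
        lesioned = PROGRAM[:i] + [("NOP", None)] + PROGRAM[i + 1:]
        print("lesion %d:" % i, run(lesioned), "baseline:", baseline)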

That the tools of neuroscience choke on a 6502 is as much of an indictment of the former as my inability to fly helicopters is an indictment of my fixed-wing airmanship; not coping well with notoriously perverse edge cases outside your domain of expertise isn't inherently a sign of failure (it's not a licence to stop improving, of course). Brains and 6502s are quite literally entirely different kinds of computing, much like designing for FPGA is weird and different from writing x86 assembly or C.

A far more interesting question is "could a neuroscientist understand an FPGA?".




>devices that do a fetch-decode-execute-retire loop against a register file and a memory bus have perversely little in common with what neurobiology is concerned with.

A key point of the article is that we can't really be sure this is the case, since the analytical tools used by neuroscience arguably wouldn't reveal this kind of structure even if it did exist.


The nice part of evolution is that most of the time you can see some leftover of the intermediate steps. For the eye, for example, you can find animals with eyes at every level of complexity, from a simple flat photosensitive patch to the full eye of vertebrates (or of cephalopods, which have a different but similar eye design).

So in most cases you can get some people to specialize in and understand the simple models, and to create concepts and tools for understanding the more complex models.

In electronics circles there are still a lot of small integrated circuits floating around with 20-50 transistors that are easy to understand. And you can learn to group individual transistors into small groups that do something useful (for example, limit the output current, simulate a resistor, amplify a signal, ...).

Then you can learn to decode the intermediate models with 100-1000 transistors, then the models with a few thousand transistors, and then ...

So it's very suspicious that there are no animals with a minibrain that is a finite automaton with 3 states.
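
(For scale, such a minibrain would be about this much machinery; here's a Python sketch, with the states and stimuli invented for illustration:)

    # A hypothetical 3-state "minibrain" as a finite automaton.
    TRANSITIONS = {
        ("rest", "hungry"): "seek_food",
        ("rest", "threat"): "flee",
        ("seek_food", "fed"): "rest",
        ("seek_food", "threat"): "flee",
        ("flee", "safe"): "rest",
    }

    def step(state, stimulus):
        # Stay in the current state if no transition is defined.
        return TRANSITIONS.get((state, stimulus), state)

    state = "rest"
    for stimulus in ["hungry", "threat", "safe"]:
        state = step(state, stimulus)
        print(stimulus, "->", state)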

There are also some cases where all the intermediate steps disappeared, for example the transition from prokaryotes (bacteria) to eukaryotes (animals, plants, protozoa, ...), and IIRC nobody understands the intermediate steps. But there are some clues: many structures are shared between prokaryotes and eukaryotes, and mitochondria are probably trapped bacteria (they have their own DNA, too many external membranes, ...).


>So it's very suspicious that there are no animals with a minibrain that is a finite automaton with 3 states.

What is your basis for saying that no such animals exist? Exactly how many states there are depends a great deal on the level of analysis. How many states does a 6502 have? At the physical level, an enormously large (possibly even infinite) number. At the level of analysis appropriate for programming one, considerably fewer.
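
Back-of-the-envelope, counting only the register state a programmer sees (and ignoring memory, which multiplies this enormously):

    # Architectural state of a 6502 from the programmer's view:
    # A, X, Y, S, P are 8-bit registers; PC is 16 bits.
    bits = 5 * 8 + 16
    print(2 ** bits)  # 72057594037927936 distinct register states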


We do know this already. Our brain is not a pure black box.

It is also a counterargument to the paper itself: if you know so little about two different systems, then you can't take a tool from system 1, use it on system 2, and expect any resemblance or transfer of results.


We don't know that there aren't parts of the brain that use a fetch decode execute loop. The paper points out that although the 6502 does in fact work that way, this is far from obvious when examining it at the transistor level.
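
Written down abstractly the loop is tiny; here's a bare Python sketch (the two-field instruction format is invented). The paper's point is that nothing this legible announces itself when you're staring at the transistor netlist.

    # The fetch-decode-execute loop, stripped to its skeleton.
    def cpu(memory):
        pc, acc = 0, 0
        while True:
            opcode, operand = memory[pc]   # fetch (and trivially decode)
            pc += 1
            if opcode == "ADD":            # execute
                acc += operand
            elif opcode == "JMP":
                pc = operand
            elif opcode == "HALT":
                return acc

    print(cpu([("ADD", 2), ("ADD", 40), ("HALT", None)]))  # 42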


Not knowing something doesn't mean we don't know anything.

Yes, we are not 100% sure that there is a fetch-decode-execute loop in there somewhere, but right now, with the knowledge we have, it is unrealistic.


What makes it unrealistic?


Because we do know how neural networks work, and our brain is a huge one.

Why would you want something like this to be in our brains anyway? It's already a problem to have data and processing divided.


We don't know how the neural networks in our brains work at anything like the level of detail required to draw the conclusion you're suggesting. What I want is obviously irrelevant.


But this inquiry didn't seem to care about whether the specifics of computer hardware map onto biology, or vice versa, to any interesting degree. The authors care about whether the computer is a dynamic system of such complexity that it's resistant to current causal analysis.


They took the system, excluded from their analysis the hardware where all the behavioural features they could directly observe resided (the RAM, the ROM and its contents), and then concluded that their neurobiologically-inspired technique was useless for understanding complex systems because it didn't tell them anything useful. If they hadn't done that, they'd have been in serious danger of gaining actual understanding and wouldn't have been able to use the exercise to mock the neuroscience practices they dislike.


Or, much worse but likely more accurate, an FPGA shaped by an evolutionary algorithm: https://www.damninteresting.com/on-the-origin-of-circuits/

And that's another issue: evolution is a pretty greedy algorithm, and nature doesn't care if you don't understand her architecture decisions (metaphorically speaking, ofc).
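
The algorithm itself is embarrassingly small, which is part of why its outputs owe you no explanation. A toy Python sketch (the target bitstring stands in for "desired circuit behaviour"; everything here is invented for illustration):

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for the behaviour we want

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        i = random.randrange(len(genome))
        return genome[:i] + [1 - genome[i]] + genome[i + 1:]

    genome = [random.randint(0, 1) for _ in TARGET]
    while fitness(genome) < len(TARGET):
        candidate = mutate(genome)
        if fitness(candidate) >= fitness(genome):   # greedy: keep what works
            genome = candidate
    print(genome)   # matches TARGET, but the path there explains nothing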


Er, that's a way in which the microchip problem is easier, not harder


That's my favorite horror story to tell the "autonomous driving is ready for deployment" fanatics (not that it helps).


One thing that came to mind is that computers are notoriously brittle compared to a brain. If you damage a single transistor in the processor you can bring down the whole thing; components in memory are not nearly as sensitive. We know the brain can take some damage and still function quite well, but are there single neurons, or tiny groups of them, that will basically bring down the system if damaged?

I also think the brain has important features far more detailed in function than anything in a 6502, so the comparison doesn't necessarily invalidate the methods. For example, we don't know why neurons can transfer mitochondria between themselves, we don't know what is encoded in DNA, and we don't even know how instinctive behaviors are encoded (or perhaps I just don't know).

As someone who once reverse engineered a processor from the gates up, I honestly don't know how one could do it top-down. The details at the bottom are critical, but also not critical. Decoding the microcode was critical, but determining precise timing and some other things was not needed to write a fairly functional emulator.


> We know the brain can take some damage and still function quite well, but are there single neurons or tiny groups of them that will basically bring down the system if damaged?

I like this question. I'm not a neuroscientist, but I think most neurons come in fibers that contain several of them, so even if a certain connection is critical, a single neuron dying will still leave a functional fiber. And because neurons are plastic (to a limited but significant extent), any effect from that single neuron can be partially or completely compensated for.


Well, multicore CPUs are generally manufactured with a higher core count than they are sold with. Automated testing detects defects and bins individual dies based on the maximum clock frequency at which they run without errors. So in some sense CPUs are robust to some flaws in the lithography process, by virtue of disabling broken cores/cache banks. Of course, this isn't done dynamically the way it is in a brain.


An FPGA is many orders of magnitude more closely related to an x86 processor or a C program than either of those is to a brain, especially when you start looking at the brain as the result of decoding a string of DNA in what was once a single-celled organism.

The orders of complexity involved alone dwarf any other comparison you might want to make, and that's before we get into the effects of nurture, environment, interaction, education and so on.


Why do you think it is a good idea to use analytical methods of one field for another field when you mention the fatal flaw in your next sentence?

I have real issues even accepting that anyone would try something like this and get it published. Also, the conclusion of this paper is broken.


A 6502 isn't just combinatorial logic. Nearly half the chip is memory, either decode/sequencing ROM or latches.



