For those of you who would like to learn more about the field, I highly recommend this 3-part lecture by Jeff Lichtman, one of the leaders in this field.
This post introduced me to the field of connectomics. I must admit I am extremely fascinated by this field and the technology that characterizes it. Can anyone suggest a few introductory resources, such as books or papers, that they consider reliable?
As well as the MIT Seung Lab's Eyewire game, a kind of citizen-science interface built around images of the neural structures behind the eyeball. You can assist in untangling the dense jumble of wiring that leads to image processing in the brain ;)
Yes, this is an important advance in the field. Google's been working on this for the past few years after convincing Viren Jain (https://www.janelia.org/our-research/former-labs/jain-lab) to leave Janelia and perfect this technology. Very cool stuff! Even as early as two years ago, it generally took a grad student months to years of work to manually reconstruct 50-100 neurons (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4844839/); now this same process can be done in virtually no time at all. Expect to see several more papers in the future involving reconstructions of thousands to tens of thousands of neurons, instead of the hundreds we've been seeing. Exciting times!
Thanks for the response! I think the link you provided includes the ");" at the end, leading to an error page. Removing ");" leads to the article I think you wanted to mention. Anyway, am I correct in saying this may allow researchers to analyze bigger and bigger neural networks (the biological ones)? I remember OpenWorm (http://openworm.org/science.html), which was able to virtually recreate the nematode's brain thanks to a few "maps" of its brain. Could this technology (coupled with the improvements that will come in the next few years) allow something of the OpenWorm kind with more and more complex organisms?
To really extract networks, you would need to either image synapses (very, very difficult since they are sub-micron in size) to determine how cells connect, or image something like calcium signals to infer a circuit of active neurons as a very sparse subset of a larger set of observed cells during a particular kind of neural activity.
One of the authors here: imaging synapses is really no problem at electron-microscopy resolution. In fact, you can see automatically extracted synapses (the little spheres) in the video in the blog post; this was already done a year earlier!
Oops, sorry, I have been talking to folks doing in vivo fluorescent imaging and didn't notice that this was electron microscopy!
Not only is the resolution a lot worse for optics, but I understand that the signal-to-noise ratio is much lower when avoiding destructive imaging methods.
The paper (available at https://sci-hub.tw/https://www.nature.com/articles/s41592-01...) says that the part of the zebra finch brain analyzed was imaged at a resolution of 9 x 9 x 20 nanometers. If I'm reading it correctly, shouldn't synapses have been imaged at that resolution too?
Moore's Law is merely a side effect of a larger law of accelerating returns. You can see it with solar power, CPUs, disk storage, etc.
Every aspect of human technology is growing exponentially and the rate of growth itself is growing exponentially.
Population, the overall education level, scientific interconnectivity, and the mere fact that entire new scientific disciplines can appear overnight all pretty much indicate that we will see enormous advancement everywhere.
For a semester project in grad school, I once interviewed Ken Hayworth while he was at the Lichtman Lab at Harvard working on FIB-SEM tissue slicing and imaging technology to create extremely high-res imagery of slices of mouse brains.
The most interesting thing he discussed with me was the idea that getting to the point of pragmatic whole-brain imaging for purposes of connectomics-style neural reconstruction (and perhaps actual brain emulation) looked so hard that he expected a funding model similar to the way astronomy labs handle expensive telescope time.
With telescope time, it's not economical for any single lab to own an entire observatory, since the equipment is really expensive to build and maintain, and any given project may only use it for a tiny fraction of the time. So instead you get a model similar to cloud computing: some consortium builds and operates the infrastructure, and different labs bid on actual scope time to dedicate some devices to their specific research needs. The more urgent or promising the research project, the more it might be willing to pay for priority scope time, and this drives what types of astronomical discoveries get made.
With connectomics / neural reconstruction, it could be similar. Someone might propose mapping out a certain section of the brain because of a promising connection to a certain disease, a development in cognitive science, or an understanding of behavioral patterns. And over time we would get some piecemeal, patchwork "planetarium" map of an imaged brain, where we have highly resolved detail about some regions but almost no information about others.
Incidentally, the project for which I conducted this interview was a semester project to try to pin down tight estimates on how long it would take using known technology to fully image a whole brain. There are several different physical techniques, but the FIB-SEM and ultramicrotome stuff was by far the most promising and most efficient.
To give some idea of the time scale involved in imaging a human brain, putting aside data storage and retrieval costs and the setup and preprocessing time to cut the tissue into columns small enough for the device (FIB-SEM) to operate on: a single 20 x 20 x 20 micron tissue cube can be imaged into 5 x 5 x 10 nm voxels in roughly 2 hours, assuming a 10 MHz optimized FIB-SEM device (which is a reasonable extrapolation from current technology).
It would take 30 years for one such FIB-SEM device to image a single cubic millimeter volume of tissue. The human brain is roughly 10^6 mm^3 in volume (it's between 10^5 and 10^6, just using an upper bound here), and so even if we parallelized a set of 100 such FIB-SEM devices and set them running continuously for 10 years, we would only have imaged 0.003% of the human brain at this resolution!
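Since the arithmetic is easy to lose track of, here is a minimal back-of-the-envelope sketch in Python reproducing these figures (all inputs come from the estimates above; the ~2 h/cube number is treated as including overheads, since the raw 10 MHz rate alone gives closer to 0.9 h per cube):

    # Back-of-the-envelope check of the FIB-SEM imaging-time estimates above.
    voxel_nm3 = 5 * 5 * 10                    # one 5 x 5 x 10 nm voxel, in nm^3
    cube_nm3 = 20_000 ** 3                    # a 20 x 20 x 20 micron cube, in nm^3
    raw_hours = cube_nm3 / voxel_nm3 / 10e6 / 3600
    print(f"raw imaging time per cube: {raw_hours:.1f} h")       # ~0.9 h at 10 MHz
    hours_per_cube = 2.0                      # the ~2 h figure, overheads included

    cubes_per_mm3 = 1e9 / 20 ** 3             # 1 mm^3 = 10^9 um^3 -> 125,000 cubes
    years_per_mm3 = cubes_per_mm3 * hours_per_cube / 24 / 365
    print(f"{years_per_mm3:.0f} years per mm^3 per device")      # ~29, i.e. "30 years"

    brain_mm3 = 1e6                           # upper-bound human brain volume
    imaged_mm3 = 100 * 10 / years_per_mm3     # 100 devices running for 10 years
    print(f"{imaged_mm3 / brain_mm3:.4%} of the brain imaged")   # ~0.0035%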
I'm sure other technological advances can speed this up, but it is really hard to predict by how much.
In my thinking, this is one of the main remaining reasons why someone might feel that strong AI is unlikely to be developed via first emulating human brains in software (e.g. like Robin Hanson's primary argument), compared with being developed through algorithmic research in machine learning and AI (the stuff that Friendly AI researchers are often more directly concerned about).
The Allen Institute for Brain Science is in the process of imaging 1 cubic mm of mouse visual cortex using TEM at a resolution of 4nm per pixel. The goal is to complete this in about 4 months running in parallel on 5 scopes.
https://www.youtube.com/watch?v=LO8xCLBv6j0&t=70s
That’s very cool. It looks like this was enabled by extremely recent advances in the FIB-SEM devices [0].
I’d say 20 total device-months is a bit optimistic, but maybe they will hit it, and even if they are anywhere close it will be impressive.
20 device-months for 1 mm^3 compared with 360 device-months with the devices I studied in 2012-14 is impressive. I hope they do it!
FWIW, my belief is that this line of research is probably more promising for strong AI than straight development of AI from e.g. meta-reinforcement learning, although in the end it probably will be a mix of these things.
Just following up on this, it's still staggering how long it would take to image the entire 10^6 cubic mm of the whole brain. If it took 20 device-months to image 1 cubic mm, we would need 20 million device-months to image the whole brain.
With 10,000 devices running in parallel, and assuming no failure rate (though with a device count this high, failures would happen constantly), that would still require 2000 months (or about 167 years) to image a whole brain.
Let's imagine the technology can undergo some type of Moore's Law scaling (I don't know enough about the underlying SEM physics to know whether there is a clear ceiling on achievable speedups) and that the time to image 1 cubic mm halves every two years. This would predict that in ~20 years we could image a whole brain in about 4 months' time (still requiring 10,000 parallel devices).
If you keep going past 20 years and continue the trend, but halve the device count while keeping the 4-month timeline fixed, then it would be about 42 years before a set of 5 devices could image the whole brain in 4 months' time, going on ~50 years until 5 devices could do it in less than one month, and ~60 years before one device could do it in less than one month. Obviously, hugely gigantic error bars around such estimates.
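To make the compounding explicit, here is a small sketch of that projection (the 20-device-months-per-mm^3 starting point and the 2-year halving are the assumptions stated above; the continuous-compounding version lands within a factor of ~2 of the stepwise numbers, well inside those error bars):

    # Projection of whole-brain imaging time under a 2-year halving assumption.
    DEVICE_MONTHS_TODAY = 20 * 1e6   # 20 device-months/mm^3 * 10^6 mm^3 per brain

    def months_to_image_brain(years_from_now, devices):
        """Months needed by `devices` parallel machines after `years_from_now`
        years of imaging-time halvings every two years; ignores failures."""
        return DEVICE_MONTHS_TODAY / 2 ** (years_from_now / 2) / devices

    for years, devices in [(0, 10_000), (20, 10_000), (42, 5), (50, 5), (60, 1)]:
        months = months_to_image_brain(years, devices)
        print(f"+{years:>2} yrs, {devices:>6} devices: {months:10.2f} months")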
I guess that was the case; it all looks unflagged now. I'm really impressed with your work. One question, though: what do you believe are the biggest bottlenecks to speeding up your work? How soon could we see this applied to, say, millimeter-sized brains?
The Allen Institute (see e.g. https://twitter.com/danbumbarger?lang=de) and also Jeff Lichtman (https://lichtmanlab.fas.harvard.edu/) are very close to having solved the data-acquisition problem for cubic-mm-sized volumes (using very fast TEMs or multi-beam SEM), and we (http://www.neuro.mpg.de/denk) are also working hard on it. On the analysis side (i.e. automatic reconstruction), I am actually optimistic that it is now mainly a software engineering problem (scalability to petabyte-sized volumes, tailored machine learning for the remaining problems, e.g. identifying reconstructions that make no sense) and not so much a fundamental algorithmic limitation anymore. So 2 years from now, we should see the first cubic-mm reconstructions.
Just today they came out with the full image set of a fruit fly brain (http://temca2data.org/). Is the output of this something that could be fed through your algorithm, and if so, how long would that take?
Wow, I knew connectomics was starting from behind, but I didn't realize the state-of-the-art was so bad they can't even reliably find neurons in pictures of neurons.
It's rather a squiggly mess of lines... I can understand why it's hard!
Personally, I think we need even higher-resolution imaging; we need to be able to do 5 nm-thick layers.
Obviously, at that kind of resolution we won't be able to store all the output data, so we're going to need to run this kind of segmentation and connector analysis in real time as the data is generated.
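For a sense of why storing the raw output is implausible, a quick sketch (isotropic 5 nm voxels and one byte per voxel are my simplifying assumptions, not figures from the thread):

    # Rough storage arithmetic behind the real-time-processing argument.
    voxel_nm = 5
    voxels_per_mm3 = (1e6 / voxel_nm) ** 3    # 1 mm = 10^6 nm -> 8e15 voxels/mm^3
    brain_mm3 = 1e6                           # whole-brain volume, upper bound
    total_bytes = brain_mm3 * voxels_per_mm3  # ~8e21 bytes at 1 byte per voxel
    print(f"~{total_bytes / 1e21:.0f} zettabytes of raw data")   # ~8 ZB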
Part 1 is about the history of brain imaging and general info about how the brain is wired: https://www.youtube.com/watch?v=MtTOg0mzRJc
Part 2 is about how the brain is connected to muscles through the central nervous system: https://www.youtube.com/watch?v=r1qwQ3Qrzhs
Part 3 is pretty mind-blowing: it talks about the imaging technique Google discusses here, and about the machinery and microscopes that have to be developed to image a brain and the enormous challenge that poses. https://www.youtube.com/watch?v=2QVy0n_rdBI&t=10s