I guess that was the case; it all looks unflagged now. I'm really impressed with your work. One question, though: what do you believe are the biggest bottlenecks to speeding up your work? How soon could we see this applied to, say, millimeter-sized brains?
The Allen Institute (see e.g. https://twitter.com/danbumbarger?lang=de) and Jeff Lichtman's lab (https://lichtmanlab.fas.harvard.edu/) are very close to solving the data acquisition problem for cubic-millimeter-sized volumes (using very fast TEMs or multi-beam SEM), and we (http://www.neuro.mpg.de/denk) are also working hard on it. On the analysis side (i.e. automatic reconstruction), I am actually optimistic that it is now mainly a software engineering problem (scalability to petabyte-sized volumes, plus tailored machine learning for the remaining issues, e.g. identifying reconstructions that make no sense) rather than a fundamental algorithmic limitation. So two years from now, we should see the first cubic-millimeter reconstructions.
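To give a concrete, greatly simplified sense of what "identifying reconstructions that make no sense" could look like in practice, here is a minimal Python sketch that flags segments with implausible total volumes. The function name, voxel size, and thresholds are illustrative assumptions only, not part of any actual pipeline described above.

```python
# Hypothetical sketch: flag reconstructed segments whose basic morphology is
# implausible (e.g. far too small or too large to be a real neurite fragment).
# Voxel size and volume thresholds are made up for illustration.
import numpy as np

def flag_implausible_segments(segmentation, voxel_volume_um3=0.016**3,
                              min_volume_um3=0.05, max_volume_um3=5e5):
    """Return segment IDs whose total volume falls outside a plausible range."""
    ids, counts = np.unique(segmentation, return_counts=True)
    flagged = []
    for seg_id, n_voxels in zip(ids, counts):
        if seg_id == 0:  # treat 0 as background / unlabelled
            continue
        volume = n_voxels * voxel_volume_um3
        if volume < min_volume_um3 or volume > max_volume_um3:
            flagged.append(int(seg_id))
    return flagged

# Toy usage on a small random "segmentation" block
rng = np.random.default_rng(0)
toy_seg = rng.integers(0, 5, size=(64, 64, 64))
print(flag_implausible_segments(toy_seg))
```

In a real petabyte-scale setting, such checks would presumably run block-wise over the volume and feed a learned classifier rather than fixed thresholds; the sketch only illustrates the general idea of an automated sanity check on reconstructed segments.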
Just today they released the full image set of a fruit fly brain (http://temca2data.org/). Is this data something that could be fed through your algorithm, and if so, how long would that take?