Brain Simulation Promised a Decade Ago Hasn't Succeeded (theatlantic.com)
161 points by Anon84 on July 25, 2019 | 112 comments



I work on parts of this project. No one outside of Markram's small inner circle thought HBP would do what he promised, but it was rather seen as a way to fund ambitious computational neuroscience projects otherwise unimaginable. For example, neuromorphic chips have been ever-present within HBP. We are building new multiscale simulation tools to tackle neurodegenerative diseases. The list goes on. Simulating a human brain itself was just the tagline, technically already accomplished by e.g. Izhikevich, TheVirtualBrain project, and Eliasmith's SPAUN, among others.

Edit: despite working within the project, I definitely think a critical view of such large funding is essential, so articles like this are welcome.


Simulating a human brain itself was just the tagline, technically already accomplished by e.g. Izhikevich, TheVirtualBrain project, and Eliasmith's SPAUN, among others.

No. No one has simulated a human brain. All those projects have simulated something that might vaguely resemble some of the brain's structures or operations, but we know so little about the whole thing that it was pretty much a pointless exercise.

HBP has been different from all of them because it attempted a true bottom-up approach, which is not feasible without massive resources. As the article states, it turned out that even given the massive resources, the task is just too difficult currently.


"every model is wrong, some are useful", models of whole brain exist and reproduce aspects of experimental data.

My day job is building and evaluating such models; we have one on seizure propagation that is entering clinical trial.


Would you call a thermal model of a brain (3d heatmap) a "model of a whole brain"?

The only "model of a whole brain" we have currently is the one of C. elegans, and even that's debatable because it does not model chemical processes which modulate neural activities, and therefore does not provide an accurate input/output mapping (in general).


> thermal model of a brain

No, it doesn't predict activity from parameters.

> C. elegans, and even that's debatable because it does not model chemical processes

"the map is not the territory". This sort of criticism isn't even scientific


A thermal model sure does predict brain activity from parameters. I can say with high confidence that the brain will have very little activity at 0 degrees C.


That's a narrow range of identifiability with no relevance to the data we collect on a daily basis or the scientific questions being asked (beyond of course that it is a pedantic example)


the map is not the territory

Not sure what you mean.


https://en.m.wikipedia.org/wiki/Map%E2%80%93territory_relati...

For example, we have a whole-brain model of resting-state dynamics that can represent aspects of data (e.g., graph metrics such as degree) from multiple sclerosis patients, distinctly from healthy controls. Here, the map is the model parameters, and the territory is the patient's actual brain. The map is deemed useful if its structure represents that of the territory.
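
To make "graph metrics such as degree" concrete, here is a minimal Python sketch of that kind of comparison; the random matrices, region count, and threshold are illustrative stand-ins, not our actual pipeline:

    import numpy as np

    # Compare node degrees between a simulated and an empirical
    # functional-connectivity matrix. Both are random stand-ins here.
    rng = np.random.default_rng(0)
    n_regions, threshold = 68, 0.15   # e.g. one node per atlas region

    def degrees(fc, thr):
        # threshold the matrix into an adjacency graph, drop self-loops
        adj = (np.abs(fc) > thr) & ~np.eye(len(fc), dtype=bool)
        return adj.sum(axis=1)

    fc_model = np.corrcoef(rng.normal(size=(n_regions, 200)))  # "model" BOLD
    fc_data = np.corrcoef(rng.normal(size=(n_regions, 200)))   # "patient" BOLD
    print("mean degree, model vs data:",
          degrees(fc_model, threshold).mean(),
          degrees(fc_data, threshold).mean())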


Whether you like it or not, when you say "whole brain simulation" most people will interpret it as a fully functioning model of an entire human brain [1]. If you mean something other than that, you should be more specific (e.g. "a whole brain model of rest state dynamics").

[1] https://en.wikipedia.org/wiki/Brain_simulation


I don't have an opinion on the subject, but this is what we call it in this domain (and HBP, the subject of the original article)


As an example, do they include the glial cells?

http://www.scientificamerican.com/article/the-root-of-though...


No, not yet


I think you're ignoring the difference between a simulation (or model) and an emulation. A bottom-up approach produces an emulation.

AFAIK, nobody has ever claimed to be trying to produce a human whole-brain emulation in the style of https://neurokernel.github.io/. It's a problem that is obviously intractable right now.


I doubt most people reading the phrase "simulating a human brain" will make the distinction between simulating and emulating. The distinction is very technical and simply does not exist in the everyday use of the word "simulate".


As a layman, I imagine a simulation would have to account for every one of the thousands of different compounds and chemicals in the brain, whereas emulating it might be easier but a lot less useful, like testing drugs on mice instead of humans: you never know which real difference will turn out to be an important one.


And yet the distinction is important as we are discussing a project on the very topic.

I don't think anyone playing SimCity thinks there's a whole real city running on their CPU. And certainly no one has a heart attack if you say SimCity is a simulation of a whole city.


If any model counts, then I can "simulate a human brain" as a sphere on a stick figure given five minutes in a physics sim.

We need to have some standard, and I would say the bare minimum is modeling each neuron in an ultra-simple way along with the connections between them.
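
For what it's worth, that bare minimum is only a few lines of numpy. A minimal sketch (all parameters illustrative, nothing fitted to biology):

    import numpy as np

    # N leaky integrate-and-fire neurons with random all-to-all weights.
    rng = np.random.default_rng(0)
    N, T, dt = 100, 1000, 1e-3                 # neurons, steps, step size (s)
    tau, v_thresh, v_reset = 20e-3, 1.0, 0.0
    W = rng.normal(0, 0.1, (N, N))             # synaptic weight matrix
    v = np.zeros(N)                            # membrane potentials
    spikes = np.zeros((T, N), dtype=bool)

    for t in range(T):
        i_ext = rng.normal(1.5, 0.5, N)                 # noisy external drive
        i_syn = W @ spikes[t - 1] if t > 0 else 0.0     # last step's spikes
        v += dt / tau * (-v + i_ext + i_syn)            # leaky integration
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = v_reset                              # reset after spiking

    print("mean firing rate (Hz):", spikes.mean() / dt)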


Depending on what you're doing, modeling individual nerve cells may be pointless. For example, a stroke simulation may care a lot more about blood flow than communication between nerves.


It's certainly a useful project, but don't call your blood-flow simulation or vague activity map "simulating a human brain".


What if it turned out that neurons are highly redundant, and you can very effectively model network activity at a statistical level well above that of the individual neuron?


I've spent a few years studying biological modelling and simulation (of plants) but I've never heard of this distinction. Is it specific to neuroscience? What exactly is the "bottom" in a bottom-up approach?


> What exactly is the "bottom" in a bottom-up approach?

Great question (to which I have no idea what the answer is). It seems to me that the various attempts are trying to simulate sets of emergent behaviours at different levels of the brain's "stack". I guess, one way of determining the success of a "lower level" simulation/emulation, is the emergence of known "higher level" behaviours?


It's a distinction that physicists have brought into the field; the bottom refers to the lowest level of detail, usually cellular but sometimes molecular.


I don't think what you are saying is correct. Isn't an emulator something that takes the same inputs and produces the same outputs? I don't think "bottom-up" has anything to do with it.


You should take another look at Eliasmith's work. The neural engineering framework aims to be biologically plausible, with some assumptions made to simplify the math. Cellular processes are not the focus of the model; however, it attempts to replicate their dynamics with simplified "components". It's one of the only, if not the only, neural models that is consistently supported by experimental data.

Their website is https://nengo.ai and I am not affiliated with them, just an alum who took his course.
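
For anyone curious what that looks like in practice, here's a minimal sketch using Nengo's public API (assuming a recent install; all parameters arbitrary): a population of spiking LIF neurons represents a signal, and the connection weights are solved so a second population decodes a nonlinear function of it.

    import numpy as np
    import nengo

    with nengo.Network() as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # 1 Hz sine input
        a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking LIF pool
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)   # decode x^2
        probe = nengo.Probe(b, synapse=0.01)                # filtered output

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    # sim.data[probe] now approximates sin(2*pi*t)**2, computed by spikes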


I'm not against modeling aspects of the brain based on what we know, just don't call that "whole brain" simulation/model.


What would you call SimCity if not a city simulator?


It's a city simulator, not a simulator of any specific city, or even close to one. It is not accurate either, as accuracy was not the main goal; fun was.

The human brain is a very specific and complex brain which we understand vastly less than we understand even cities, at any scale. And to produce any useful answers, the model has to be accurate to some degree, and we don't know how much. HBP indeed had no question it was trying to answer; it's more like climbing a mountain because it exists.


I have no problem with calling SimCity a "whole city" simulator. Why? Because we have a good functional model of a city. We understand its operation at every level. Not the case with the brain.


As someone who works in computational neuroscience, frankly I don't see how Markram is any different from a charlatan.

Pretty much every researcher wants money to fund ambitious projects, but you can't just go around making all kinds of nonsense claims without being able to back them up. The HBP has produced some good research, but for the amount of funding they've received I would have expected much, much more.

Ultimately I think the fault lies within the funding structure. So much of scientific funding incentivizes bold and brash statements and plans, but very little consideration is given towards having the technical ability to execute those plans.

For a project of this size and scope, I would have wanted to see significant investments in computational infrastructure and software/hardware development very early on. At least the investment would have yielded something to build upon. Imagine if the Apollo mission tried to go to the moon with parts built from a Yugo. It wouldn't matter how many smart mathematicians you hire, or how skilled your navigators are, you just aren't going to get to the moon. Unfortunately computational infrastructure just isn't sexy enough to invest in, and I would say as a whole (there are some exceptions), neuroscience leadership just does not have the technical expertise to handle these kinds of projects yet.


I think the focus on Markram is detrimental to the EU community making good use of the people and projects left in the wake of the anti-Markram movement.

Still, I agree infrastructure is a really hard problem, one that is not going to get solved anytime soon. Just as an example, federated identity across HPC resources isn't even on the table, even though they are working on the next funding iteration. This means you need 3+ accounts to get anything done in the HBP way.


it was rather seen as a way to fund ambitious computational neuroscience projects otherwise unimaginable.

I don't mean to be petty, but I read that with a very intellectually dishonest tone: by misrepresenting the goals, we secured funds to do what we really intended, which wasn't sexy enough on its own.

That's grounds to call in a research audit and close things down.


I'm more curious why the "European Commission" is even giving extremely ambitious (aka extremely risky) research projects $1 billion. That seems like a lot of money for any single project.

> In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time).


It's essentially a way of outsourcing management structures. Instead of having to handle and process 1,000 $1M projects (and probably 20,000 proposals), it awards a $1B project that is then in charge of managing that money, awarding subprojects, etc. It's also a good way of getting press and making sure that a fair amount of $ ends up in the pockets of your buddies working on a specific topic.


This is why the US NIH gives out a lot of $3-5M R01 grants.


Why the scare quotes around European Commission?


The scare-quote type of usage for quotation marks is common in English but not universal across all languages. As many Hacker News commenters are not native English speakers, it makes sense to assume quotes are not always used for that meaning.


I assume because this is the first time they've heard of it.


It would be far better to give one thousand risky cutting edge projects one million dollars.


The various European governments already do that kind of thing through their national agencies. The thinking behind these giant EU projects is to fund larger-scale ambitions over a decade or more. (Not saying it's been the best use of money so far, but that was the aim anyway.)


Reminds me of all of the "innovation" grants and credit schemes the Canadian government keeps trying to use to "foster innovative companies and entrepreneurship" and "create jobs", but on a much, much bigger scale. In Canada almost all of it gets siphoned off by useless projects started by ex-executives from big-name mega-companies (e.g., IBM) or by people who are experts at navigating government grant processes. Meanwhile, the actual startups, which lack access to angel/venture capital in Canada, rarely see any of it and continue to flock to America.

10-year runways and massive amounts of capital sound like the perfect recipe for a black hole of a money pit.

I'd rather leave 'innovation' to academia and private industry. If governments want to help, they should make those two things easier, not try to do it themselves (like trying to pick the winners).


And the European Commission also does it, see ERC grants. Giving risky cutting edge programs 1 million dollars is almost their definition (except that it's 1.5 to 2M).


I'd say that in this respect most researchers operate very similarly to startups that are often featured on HN. Seems a bit unfair to say the research should be closed down without saying the same thing, e.g., about Uber/Lyft, Doordash, or even Amazon (insofar as they are all promising to revolutionize something and make money sustainably)?

That is not to say that the HBP wasn't widely perceived as ridiculous from the outset by people with relevant technical knowledge. I think the bottom line is that we should respect scientists that are able to cast an audacious vision, raise crazy amounts of money, and then actually deliver, e.g., Christoph Koch/Allen Institute.


Publicly funded research must be held to a higher standard. Investors can throw their own money at whatever, but when you're throwing someone else's money, there's a duty to actually communicate to the public what you're spending it on.


Investors being wasteful with money is a good argument for them not having it in the first place (and having a more progressive taxation system distribute that money). There is still a cost to society of investors throwing away "private" money.


My current favorite evidence for this cost is Gates' horrendous attempt at reshaping education

https://www.washingtonpost.com/news/answer-sheet/wp/2018/06/...


It's pretty standard in granting to overpromise like this, and if you don't do it you will be at a disadvantage relative to your peers.


I recently saw (1) and this brought it to mind - as someone who has written a few funded $250-500k grants, it's a very familiar feeling, and I assume it only gets worse when you get to R01 and "infinite euromoney" levels.

1) https://www.reddit.com/r/chemistry/comments/cgy6uz/comment/e...


My first R01 was returned unscored (basically not even reviewed). It was a clever idea that was ahead of its time. The next year I noticed that the same section had several proposals which were effectively copies of mine, written by more experienced PIs, and at least one of them was funded even though they had no expertise in the area.

I left academia shortly after that (no interest in that competition) for FAANG and got more and better research done in my 20% time than I ever did in academia. I've spoken to many program managers who would love to fund clever new ideas from creative young PIs, but are under pressure to produce a reliable stream of papers from more experienced PIs who train postdocs.


If you read the article, this audit and restructuring of the project has occurred and is ongoing.


> Neuromorphic chips have been ever present within HBP

What kind of neuronal/plasticity models do these chips implement? If they are anything like what others are doing with integrate-and-fire neurons, they do not seem to align with the goals of the project. You need detailed, compartmental biophysical simulations, not abstracted, approximate models of neurons. Otherwise, we are already able to run those kinds of models on a large cluster.


Disclaimer: Working on neuromorphic chips.

Several options are available, but the following are superior to the others in terms of capturing real neuron/synapse behavior with a simple and compact VLSI implementation.

For neurons: adaptive exponential integrate-and-fire (AdEx) models @ real-time.

For synapses: STD/STP + LTP/LTD circuits @ real-time.
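
For reference, the AdEx model is just two coupled ODEs plus a reset rule. A minimal software sketch (Euler integration, textbook regular-spiking parameters from Brette & Gerstner 2005; the chips implement the same dynamics in compact VLSI rather than in code):

    import numpy as np

    # Adaptive exponential integrate-and-fire (AdEx) neuron, SI units.
    C, g_L, E_L = 281e-12, 30e-9, -70.6e-3   # capacitance, leak, rest
    V_T, D_T = -50.4e-3, 2e-3                # threshold, slope factor
    tau_w, a, b = 144e-3, 4e-9, 0.0805e-9    # adaptation time/coupling/jump
    V_reset, V_peak = -70.6e-3, 0.0
    dt, T, I = 1e-5, 0.5, 0.8e-9             # step (s), duration (s), input (A)

    V, w, spikes = E_L, 0.0, []
    for step in range(int(T / dt)):
        dV = (-g_L * (V - E_L) + g_L * D_T * np.exp((V - V_T) / D_T) - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V >= V_peak:                      # spike: reset V, bump adaptation
            V, w = V_reset, w + b
            spikes.append(step * dt)
    print(len(spikes), "spikes in", T, "s")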


Yes, that's what I mean. I&F models are approximations. I thought the HBP was implementing compartmental simulations. Also, the models we have for LTP are at best speculative.


Every model is an approximation :) I don't work at HBP, so I can't speak for their implementations. These are the state of the art of neuromorphic engineering. What do you mean by compartmental simulation?


Multi-compartment models of neurons simulate the local voltage dynamics of the membrane in detail by treating the neuron as a graph of connected cylinders, using the cable equation along with Hodgkin-Huxley-like models of the membrane's ionic channels. It's the most detailed way to simulate neuronal dynamics and the closest to reality. Most neuromorphic chips simulate much more abstracted integrate-and-fire neurons. In both cases, plasticity is the big unknown, because we know so little about it (and it's so complex).

https://en.wikipedia.org/wiki/Multi-compartment_model
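
To illustrate just the spatial structure (not any particular simulator), here is a minimal sketch of a discretized passive cable: a chain of compartments with leak and axial coupling, all units arbitrary. Real compartmental models add HH-style channel currents to each compartment and branch the chain into a tree:

    import numpy as np

    # Passive cable as a chain of N coupled compartments (arbitrary units).
    N, dt, steps = 50, 0.01, 10000
    C_m, g_leak, g_axial, E_L = 1.0, 0.1, 5.0, -65.0
    V = np.full(N, E_L)
    I_ext = np.zeros(N)
    I_ext[0] = 2.0                         # inject current at one end

    for _ in range(steps):
        I_ax = np.zeros(N)                 # axial currents between neighbours
        I_ax[:-1] += g_axial * (V[1:] - V[:-1])
        I_ax[1:] += g_axial * (V[:-1] - V[1:])
        V += dt / C_m * (-g_leak * (V - E_L) + I_ax + I_ext)

    # voltage decays with distance from the injection site
    print(np.round(V[::10], 2))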


> It's the most detailed and closest to reality way to simulate neuronal dynamics

Nope, you can keep going with the same logic: HH is just an approximation of the molecular kinetics, and what you really need are FEM models in 3D with all the protein pathways, etc.

But this leads to an entirely intractable project. The science lies not in reproducing the exact neurons bug for bug, but in keeping only the details necessary, within technical possibility, to explain some observed phenomenon. LIF and AdEx neurons seem like a good compromise for neuromorphic hardware.


The HH model is good enough to reproduce almost everything that is recorded from neurons (that's why Hodgkin and Huxley got the Nobel Prize, after all). Even dendritic regenerative spikes are reproduced by such models. I think it's generally accepted that one does not need to do molecular dynamics to recreate membrane voltages. LIF and AdEx are very crude approximations, though (i.e., no dendritic spikes, no plateaus, and impossible to use for approximating the compartmentalized calcium levels that induce plasticity). And if you go that route, you have to justify why they are a better choice than, e.g., Izhikevich neurons or indeed just sigmoid units.
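
Since it comes up: the Izhikevich model is famously tiny, two equations plus a reset, yet it reproduces many firing patterns depending on (a, b, c, d). A sketch using his published "regular spiking" parameters:

    spike_times = []
    a, b, c, d = 0.02, 0.2, -65.0, 8.0            # "regular spiking" preset
    v, u, I, dt = -65.0, 0.2 * -65.0, 10.0, 0.5   # mV, recovery, input, ms
    for step in range(2000):                      # 1000 ms of simulated time
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                             # spike cutoff, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
    print(len(spike_times), "spikes; first few at", spike_times[:3], "ms")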


> I think it's generally accepted that one does not need to do molecular dynamics to recreate membrane voltages

Everyone draws the line somewhere; this is yours. Even if we take this statement to be true, you still have metabolic networks changing transmitter concentrations, dendritic arbors evolving in entirely unidentifiable ways, etc. The goal of modeling is not to include every detail, but the ones relevant to account for a specific feature of the data.


While there is more detail that can be simulated, there is a vast, very successful literature using compartmental models. Markram's work was the simulation of a cortical column with compartmental models, anyway.


The problem with this, of course, is that the necessary level of detail depends on the phenomenon: simulating seizure propagation is probably different from simulating emotions. For more complex tasks we have no clue what that level is.


Somewhat off topic... but do you happen to know if the dendritic tree and its functional subunits in multi-compartment models can be treated mathematically as a multilayer network of simpler neurons?

I've been wondering if dendritic arborization means that current deep learning ANNs are hopelessly far away from biological reality, or if perhaps with deep networks simple ANNs could indeed learn to compute in a similar way to complex biological neurons, just over many layers of artificial neurons.


Absolutely; dendritic trees are generally considered active and can elicit dendritic spikes. This 2-layer model is well studied in hippocampal CA1 neurons, for example [1]. ANNs are far from reality, but they do validate connectionism as probably the correct abstract model of learning. The thing that's harder to crack is plasticity, i.e., the learning rules. Plasticity in real neurons is a very complex process, and there is no indication that anything like backpropagation takes place.

1 https://www.sciencedirect.com/science/article/pii/S089662730...

2 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4279447/

3 https://www.ncbi.nlm.nih.gov/pubmed/18270515
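
A minimal numpy sketch of that 2-layer abstraction (the shapes, weights, and sigmoids are illustrative assumptions, not fitted to [1]): each dendritic subunit applies its own nonlinearity to its synaptic input, and the soma sums the subunit outputs.

    import numpy as np

    rng = np.random.default_rng(1)
    n_branches, syn_per_branch = 10, 40
    x = rng.random((n_branches, syn_per_branch))   # presynaptic activity
    w = rng.random((n_branches, syn_per_branch))   # synaptic weights

    def sigmoid(s):
        return 1.0 / (1.0 + np.exp(-s))

    drive = (w * x).sum(axis=1)                        # layer 1: branch sums
    branch_out = sigmoid(drive - drive.mean())         # branch nonlinearity
    soma = sigmoid(branch_out.sum() - n_branches / 2)  # layer 2: somatic sum
    print("somatic output:", round(float(soma), 3))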


Humans have a terrible tendency to underestimate the complexity of reality and, in the same stroke, diminish its beauty. The fact that we can look at a tree, or into the sky, with boredom is a great testament to this. With this same mentality, we've looked at the human brain, oversimplified it, and underestimated it. We're continually finding that reality is more complex than we thought.


Gunther Stent made an interesting comment about this: "I believe that science is, by nature, reductionist, but I also believe that reductionism will not carry us all the way. One of the reasons why I think science will eventually peter out is because you must always explain some higher level in terms of some lower level - that's what scientists have to do. But I think that when finally we get to sufficiently complex things, this will not be possible."

That's one perspective. I do wonder how deep we'll get with understanding and simulating the brain.


If by "sufficiently complex things" Stent means "things like consciousness," I found Max Tegmark's hypothesis in "Life 3.0" compelling: that just as information can arise from an arrangement of matter (data on a magnetic disk), consciousness might arise from an arrangement of information.


What would replace science? Some sort of enterprise of systemic analysis?

"We see all the dots science has shone a light on. Let's now figure out how they're connected and how they affect each other."

That would be cool.


Statistical models fed by massive data that no single human can fully understand are still “science”, but it’s definitely a very different paradigm for science than what we’ve had for the past few centuries, and it seems to be what we’re heading towards.

E.g., a few decades from now, 24/7 biometric monitoring at massive scale + machine learning might let us predict complex conditions like Alzheimer's, Parkinson's, various types of cancer, etc. very accurately and very early, yet we might not be any closer to understanding what causes them at a fundamental level in the first place.


> Humans have a terrible tendency to underestimate the complexity of reality and, in the same stroke, diminish its beauty.

True, but this is like optimism: with the motivation it gives to go through life and do things, an optimist can afford unrealistic goals. Sometimes, that is what pays off.


True.. old folks like to mention that kids take things for granted.. if they didn’t they would just be staring at cars go by all day going “wooooowwwww.... you guys seein this shit??”


I'm constantly amazed by the ability of TED organizers to convince people to pay to listen to nonsense. How do they do it? Sure, there are a few gems, but overall the content quality is equal to TV infomercials selling snake-oil nutritional supplements and get-rich-quick schemes. Is everyone afraid to say that the emperor is naked?


If people paying for TED talks wanted to learn, they'd enroll in an evening class at their community college.

The purpose of TED talks is not learning. It's getting all the dopamine hits that come with learning, without actually putting in the work.


That's a weird criticism. Even when your goal is learning, unless there are no better options, it's unusual to take an entire course in something you have only a passing interest in. A community college only has so many courses; not very many will match up with the passions of the average person.

(This is ignoring people that take a course for social reasons.)


Are you lumping TedX into that assessment? Those are independently organized and are mostly terrible.


You're absolutely right about TEDx, but I'd apply GP's comment to TED proper as well. They're inspirational and sometimes aspirational, even if it costs them rigor. They give you the feel-good talking points without ever delving into any challenges or complications involved with the topic of the talk.


It's an excellent place for hucksters to thrive on pseudo-intellectual optimism. Like clickbait, it's always a mix of some highly accessible topic with something surprising, controversial, or in some way challenging the norms.

They're not all bad, obviously, but it's not a good place for reasoned, moderate, and calculated predictions, which is the plane where most science and businesses live and die: when your big ideas hit reality.

Theranos probably had the best TED pitch around, if they ever did one.


Ambitious projects and optimistic predictions have been common among big-name AI people since the start.

Marvin Minsky predicted emergent behavior and recursive effects much sooner. (I think this was in all honesty: not realizing how complex the human brain is, and not knowing how hardware would develop.)

Douglas Lenat apparently thought commonsense knowledge and reasoning would be easier.

Ray Kurzweil was often making predictions about hard-AI accomplishments just around the corner, which we're starting to see much later.


An interesting note about Ray Kurzweil: his predictions don't have humanity achieving human brain simulation until the next decade (2020-2030).


Which is still way too early.

Everything we thought was just around the corner is not: flying cars, self-driving cars, AGI, cold fusion, the singularity...

Hype cycles are a thing. I for one don't think we'll see any of these in our lifetime.


Good self-driving cars, maybe, but if you don't mind an iffy one, here you go: https://www.youtube.com/watch?v=aaOB-ErYq6Y


Kurzweil is an interesting case. His predictions have been impressively wrong for the most part, yet he seems to refuse to admit it.


While Kurzweil is cranky in many areas, like his 100 pills a day, his extrapolations of Moore's-law-type growth patterns seem quite sensible: here's the log graph, let's get a ruler and extrapolate the pattern continuing. It's not really rocket science, and you don't need Kurzweil to do it, but he's perhaps the leading popularizer of the approach.


I was in this field. In 2012, it was already seen as a crazy accomplishment in showmanship and fundraising.

Everybody in the field thought this was irresponsible and foolhardy, but good for the field overall.

Long story short, I left for machine learning, as did everybody else from my lab without wet-lab experience.


I think a lot of it has to do with tribal politics from the outset of the project that were never reconciled. The goal of the project was ambitious and inspiring, but such a project requires contributions from everyone in the neuro community. Sadly, it has not delivered usable output as a project. It has not produced landmark datasets, and I have not found any important software from it to use. I still believe the vision is valid, but it needs an Apollo-level focus in execution.


After the community pushback in 2015, this is no longer true. Most of the software is being open-sourced, and the data from the BigBrain project is already open and will complement similarly sized data from the Allen Institute and the HCP.

I work for a work-package leader for the next round of funding, and the European Commission seems to be demanding an Apollo-level focus nowadays.


How can they get Apollo-level focus if they can't get early NASA's complete buy-in from academia and private industry (across multiple NATO countries)?

It seems it's having trouble even getting credibility within neuroscience, which is absolutely necessary if you want the level of talent that Apollo required.


The neuro community correctly viewed this project as a massive scam. Blaming them for not going along with it seems shortsighted to me.


I think the criticism was overblown as well. The many thousands of small electrophysiology projects testing a single hypothesis also cost a ton of money and mouse lives, and do not have ambitious goals. A concerted, centralized effort often breeds spectacular results.


I love the quote at the end:

> In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. You’re going to swallow a pill and know English. You’re going to swallow a pill and know Shakespeare. And the way to do it is through the bloodstream. So once it’s in your bloodstream, it basically goes through it and gets into the brain, and when it knows that it’s in the brain, in the different pieces, it deposits it in the right places.”

I'm on the fence about whether people like this are delusional and actually believe that someday we will swallow a pill that can teach us language or whether they're deliberately pumping hype for funding. I lean toward the latter given that this idea is so deeply and thoroughly absurd I can't believe anyone with any knowledge at all of learning, neuroscience, or information theory could possibly believe it. It's much more ridiculous than the "water memory" claim around homeopathy given that the amount of information we'd be talking about here is orders of magnitude beyond what one might imagine a homeopathic cure would need to convey (assuming homeopathy worked).


I highly recommend Kurzweil's "The Age of Spiritual Machines" for many predictions like these, with a detailed timeline. According to him, being able to simulate human-level intelligence on a computer is a simple matter of having enough processing power, which he estimates at 20 petaflops. He also predicts we'd be able to buy 20 petaflops for US$1k around 2009, IIRC. (Edit: had to check; he actually wrote 2020. Looking forward to it.)

(It's a fun book though, I like it. Except for the creepy bits on cybersex.)


> According to him, being able to simulate human-level intelligence on a computer is a simple matter of having enough processing power

I've never understood this argument. It seems about as logical as claiming that the human brain uses about 20 watts of power and so if we can make a computer use 20 watts, it'll become conscious.


That sounds like a classic example of assuming a sigmoid curve is purely exponential.


> is a simple matter of having enough processing power

Isn't this provably false, though? I don't think the OpenWorm project has gotten anywhere near simulating C. elegans.


One thing that hasn't been mentioned in the article is how well the smaller brain models performed compared to the real thing, which is kind of crucial to our understanding of many of the underlying questions and assumptions.

Even something relatively simple like "We managed to convert the optic nerve to a signal and strobed the poor mouse into a seizure - would the model mouse brain also show similar brain reactions?"

If you have the neurons mapped but not the biochemistry, the divergence could be very telling, even if countless "emulation bugs" occur from the implicit assumptions that led to those states.


Not even close. There are so many pieces and parameters missing that large-scale simulations are useless for anything more than "we saw some oscillations that vaguely resemble some recorded oscillations". It was the hope that HBP would fill in those blanks.


I think I've been reading the same article about how AI and the singularity are just around the "five-year corner" for at least 20 years now.

I worked very briefly with some folks who do ML engineering and they were disillusioned to the point of nihilism. They claimed that most of their day was just spent fiddling with weights until they got the magic number their bosses wanted to see, and they were all looking to get out of the field.


Was this a sub-form of the "emergent behaviour" school of intelligence? Connectionists?


Yes, it hasn't succeeded, but progress in this field of science is incredibly fast. Neuromorphic computers are evolving quickly. Of course, the human brain is very complex and capable, but I think I will be alive to see the singularity.


People have been promising artificial intelligence since 1956: https://en.m.wikipedia.org/wiki/Dartmouth_workshop


Peter Thiel discusses this often, and recently on The Portal with Eric Weinstein: how all these sorts of breakthroughs are perpetually right around the corner.

previous hn discussion https://news.ycombinator.com/item?id=20465053

youtube interview: https://www.youtube.com/watch?v=nM9f0W2KD5s&t=8941s


Is the proposal for the billion-euro HBP funding public? It would be interesting to see.


I couldn't find the original proposal text, but you can read the final partnership agreement text on their website: https://www.humanbrainproject.eu/en/about/governance/framewo...


What if it's impossible to simulate a brain without ending up creating exactly another one? The brain uses a tiny amount of space and mass, and in that confined space it very efficiently does a lot of calculation that we are not even aware of, using the least amount of energy and dissipating the least amount of heat possible.

Have you ever felt that your brain is suffering from overheating?


Perhaps the brain is an antenna. No matter how much you simulate an antenna your simulation will never give you a nice TV signal.


Not until you simulate a second antenna and electromagnetism.


While it's unsubstantiated, I love the analogy; I hadn't thought of that possibility.


I've wondered about this but the fact that brain damage can alter personality so drastically makes it seem implausible.


Running into a tree can dramatically change how my car drives, but that doesn't mean the car is driving itself.


In the meantime, business and PR geniuses went the other way and were successful in lowering the standards far enough that a machine that adds 2+2 and sometimes arrives at 4 is an AI.


HBP has not much to do with AI/ML though the latter are used for model building within HBP


Are you sure you don't mean "a quantum computer"?


I'm not sure that would be an accurate portrait of quantum computers, at least when applied to a deterministic problem. I was referring to the inflationary use of the “AI” moniker.



