Scott Aaronson's "Ghost in the Quantum Turing Machine" tackled this question in a way I found enlightening. It's a bit of a dense read (I needed a plane trip to sit down and focus on it), but it presents a compelling case for the possible reversal of time's arrow when it comes to the effects of quantum indeterminism on macro-scale interactions. In short: what if humans, somehow, someway, have the ability to "influence" the resolution of unobserved universal start-state quantum noise in order to effect our "will" upon the universe? Furthermore, what experiments could be done to validate or invalidate aspects of this hypothesis?
That seems crazy - that my decision to have a ham sandwich for lunch had to be propagated all the way back to the start of the universe in order to happen ;)
And yes, provability is a major problem with all these kinds of theories. We don't have a parallel universe handy as a control, and just saying "I am not going to have a ham sandwich tomorrow" isn't a viable experiment.
I interpret it as a sort of entropic antenna... each person (brain? qbit?) is tuned into what might be considered an allocated "entropy bandwidth". Some decisions, like whether you eat a ham sandwich tomorrow, utilize* practically none of that bandwidth. Others, what you might consider "major life decisions", might utilize* a ton of bandwidth. Some people might be gifted with a "larger" antenna, allowing them to utilize* a higher amount of entropy per second, or per lifetime. We might call these great artists, or generally any genius-level intellectuals. Alternatively, bat-shit crazy people.
*: On "utilize". A central debate (the central debate?) seems to be what sort of "state access mechanism" this maps to in our lexicon: "{reading|writing} {global|local|shared} {bandwidth|memory}"
>had to be propagated all the way back to the start of the universe in order to happen ;)
Technically, causality required that anyway. My understanding is that it's infinity in space, not time, that matters here. i.e. predicting your decision to have a ham sandwich may require predicting the entire universe, and since we're part of the universe and can't look from 'outside', that decision would be unpredictable from inside the universe.
>And yes, provability is a major problem with all these kinds of theories.
I think showing that quantum effects are negligible for human brains would do, at least regarding your decision to have a ham sandwich?
Mmm, not really unpredictable. It is easy for thought experiments to avoid the dirty real world, but in the real world, Large Language Models have recently shown that many quite difficult abstract prediction problems, when taken to the real world and given training, can be successfully solved well beyond our wildest expectations of precision, speed and accuracy.
So yes, the prediction of the decision to have a ham sandwich tomorrow, for any given person, is now, by the state of the art of applied mathematics and information science, a relatively doable - if not plain easy - feat.
And according to some hypotheses, the brain could be doing exactly this kind of prediction, even better than LLMs, with more accuracy/speed using way less energy.
> the prediction of the decision to have a ham sandwich tomorrow, for any given person, is now, by the state of the art of applied mathematics and information science, a relatively doable - if not plain easy - feat.
Sure, the LLM may be able to guess by reasoning (e.g. 'He always eats ham sandwiches for breakfast and he probably wouldn't break his routine'), the same way we could guess at another person's behaviour, but we're talking about reducing the brain into a deterministic input/output machine, which is a far larger ask.
Now if you said 'doable', that may be right in the long term, but 'easy'? Absolutely not. There's no current way to feed 'me' or 'you' into an equivalent LLM. The human brain has more synapses than there are stars in the galaxy, and we are nowhere close to even mapping these connections.
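As a back-of-envelope sanity check on that comparison (using commonly cited approximate figures, not measurements from this thread: ~86 billion neurons and ~100 trillion synapses in a human brain, versus roughly 100-400 billion stars in the Milky Way):

```python
# Rough orders of magnitude; all figures are commonly cited estimates.
NEURONS = 8.6e10          # ~86 billion neurons in the human brain
SYNAPSES = 1.0e14         # ~100 trillion synaptic connections
STARS_MILKY_WAY = 2.0e11  # Milky Way estimates range ~1e11 to 4e11 stars

# The neuron count is actually comparable to (or below) the star count;
# it's the synapse (connection) count that dwarfs it.
print(f"neurons / stars  ~ {NEURONS / STARS_MILKY_WAY:.2f}")
print(f"synapses / stars ~ {SYNAPSES / STARS_MILKY_WAY:.0f}")
```

So the "more than stars in the galaxy" claim holds for connections rather than for the cells themselves, which only strengthens the point about how far we are from mapping them.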
> the brain could be doing exactly this kind of prediction, even better than LLMs, with more accuracy/speed using way less energy.
That's definitely an option. The fact that many of the brain's operations could be done by LLMs is a strike against the original thesis.
https://arxiv.org/abs/1306.0159