Michael Edward Johnson has an interesting theory called vasocomputation, whose core hypothesis is:
> vasomuscular tension stabilizes local neural patterns. A sustained thought is a pattern of vascular clenching that reduces dynamic range in nearby neurons. The thought (congealed pattern) persists until the muscle relaxes.
This person is part of a group that seemingly posts to twitter and cites twitter as a statement against academic institutions and traditional scholarship. Some members of the group are academics themselves. Maybe some things can only be posted there, and so your intuition is correct. But I think you misunderstand why you are seeing twitter links used so prominently. My sense is that it's an attack on the politics of citation.
If that's true, that might also be part of why dreams exist. If the glymphatic cleanout during sleep is a rhythmic constriction wave across the brain, then whatever the brain is doing at the time the wave crosses it becomes a sustained thought as a side effect.
Maybe that's why sleep, in the sense of becoming immobile, is a thing. The brain disconnects a lot of the motor control so that the hallucinations caused by sustained randomness from cleanout waves don't make us flail all over the place and hurt ourselves.
You could, but in many cases you wouldn't want to. You will get superior results with a fixed compute budget by relying on external tool use (where "tool" is defined liberally, and can include smaller narrow neural nets like GraphCast & AlphaGo) rather than stuffing all tools into a monolithic model.
Isn't that what the original resnet project disproved? Rather than trying to hand-engineer what the NN should look for, just make it deep enough and give it enough training data, and it'll figure things out on its own, even better than if we told it what to look out for.
Of course, cost-wise and training-time-wise, we're probably a long way off from being able to replicate that in a general-purpose NN. But in theory, given enough money and time, presumably it's possible, and conceivably would produce better results.
I'm not proposing hand-engineering anything, though. I'm proposing giving the AI tools, like a calculator API, a code interpreter, search, and perhaps a suite of narrow AIs that are superhuman in niche domains. The AI with tool use should outperform a competitor AI that doesn't have access to these tools, all else equal. The reason should be intuitive: the AI with tool use can dedicate more of its compute to the reasoning that is not addressed by the available tools. I don't think my views here are inconsistent with The Bitter Lesson.
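To make the dispatch idea concrete, here's a minimal, self-contained sketch of a tool-routing loop. The "model" is just a stub (`fake_model`) standing in for a real LLM call, and the single `calculator` tool is a placeholder for the richer suite described above; only the routing pattern is the point.

```python
import re

def calculator(expression: str) -> str:
    # Exact arithmetic: trivial for a tool, wasteful to burn model compute on.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def fake_model(prompt: str) -> dict:
    # Placeholder for an LLM: requests the calculator when it spots
    # arithmetic, otherwise "answers" directly.
    expr = re.search(r"\d[\d\s+\-*/().]*\d", prompt)
    if expr:
        return {"tool": "calculator", "args": expr.group()}
    return {"answer": f"(model answers directly: {prompt!r})"}

def answer(prompt: str) -> str:
    # Route to a tool when the model asks for one; otherwise pass the
    # model's answer through.
    step = fake_model(prompt)
    if "tool" in step:
        return TOOLS[step["tool"]](step["args"])
    return step["answer"]

print(answer("What is 1234 * 5678?"))  # -> 7006652
```

The compute argument falls out of the structure: anything handled in `TOOLS` never touches the model's forward pass.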
yep, also i think while they can have issues with dataset sizes below 2^k, it's interesting to note their use in accelerating clustering algos like dbscan. they do make neat visualizations though: https://marimo.app/?slug=x5fa0x
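For what it's worth, here's a short sketch of that acceleration, on the assumption that "they" refers to k-d trees (the 2^k caveat matches the usual rule of thumb that a k-d tree needs well over 2^k points in k dimensions to beat brute force). scikit-learn's DBSCAN can back its neighbor queries with a k-d tree:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two well-separated 2D blobs: the low-dimensional regime where a
# k-d tree index pays off.
X = np.vstack([rng.normal(0.0, 0.3, (200, 2)),
               rng.normal(3.0, 0.3, (200, 2))])

# algorithm="kd_tree" forces the k-d-tree-backed neighbor search.
labels = DBSCAN(eps=0.5, min_samples=5, algorithm="kd_tree").fit_predict(X)
print(np.unique(labels))  # expect two clusters (0 and 1), maybe a few -1 noise points
```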
They don't use bags, though it's always possible that that's an adaptation of the original story. It seems more likely that there is cultural variation in traditional stories.
There is no need to make it impossible for a monster to target adults. You can just say that they target children.
I disagree with this. If you give GPT information that was not part of its training data and ask it to generate question-and-answer pairs from that information, you are adding higher-quality breadth to the training corpus.
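As a rough sketch of what that augmentation loop could look like (using the OpenAI Python SDK; the model name and prompt wording are placeholder choices, not a claim about any lab's actual pipeline):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def make_qa_pairs(document: str, n: int = 5) -> str:
    # Ask the model for Q&A pairs grounded in text it has never seen.
    prompt = (
        f"Read the following text and write {n} question/answer pairs "
        f"answerable only from it, one per line as 'Q: ... A: ...':\n\n"
        f"{document}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Each returned pair becomes one new (question, answer) training example,
# adding breadth the base corpus lacked.
```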