
hopefully nuclear power plants


Michael Edward Johnson has an interesting theory called vasocomputation with the core hypothesis:

> vasomuscular tension stabilizes local neural patterns. A sustained thought is a pattern of vascular clenching that reduces dynamic range in nearby neurons. The thought (congealed pattern) persists until the muscle relaxes.

https://opentheory.net/2023/07/principles-of-vasocomputation...

https://x.com/johnsonmxe/status/1863603206649208983


The first words of this article are (EDIT: The article in the parent comment, not the HN link)

> A unification of Buddhist phenomenology,

The citations include Buddhist writings and an entry that just says "Collected Twitter threads".

This is not serious science. This is Twitter-era new age mysticism with a dash of scientific words.


This person is part of a group that seemingly posts to Twitter and cites Twitter as a statement against academic institutions and traditional scholarship. Some members of the group are academics themselves. Maybe some things can only be posted there, and so your intuition is correct. But I think you misunderstand why specifically you are seeing Twitter links so prominently used. My sense is that it's attacking the politics of citation.

The politics of citation, to be clear, are actually quite fucked up and elitist. E.g., Feminist Approaches to Citation https://cmagazine.com/articles/feminist-approaches-to-citati...


edit: my mistake


> You must be confused. The words Buddhist or Twitter do not appear anywhere in this paper.

I'm referring to the link in the parent comment: https://opentheory.net/2023/07/principles-of-vasocomputation...

It has "Buddhist" in the URL, the title, and the first sentence.


If that's true, that might also be part of why dreams exist. If the glymphatic cleanout during sleep is a rhythmic constriction wave across the brain, then whatever the brain is doing at the time the wave crosses it becomes a sustained thought as a side effect.

Maybe that's why sleep, in the sense of becoming immobile, is a thing. The brain disconnects a lot of the motor control so that the hallucinations caused by sustained randomness from cleanout waves don't make us flail all over the place and hurt ourselves.


Although, not without fail: https://en.wikipedia.org/wiki/Hypnic_jerk


What. This is personally mind-blowing if true.


Holy shit, what a load of new age crackpottery.


I don't see why a mixture of experts couldn't be distilled into a single model and unified latent space
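
For what it's worth, the standard recipe here would be ordinary knowledge distillation: run the MoE as a teacher and train a dense student to match its softened output distribution. A rough sketch of the loss (PyTorch, purely illustrative):

    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits, T=2.0):
        # KL divergence between temperature-softened distributions;
        # the T*T factor keeps gradients comparable across temperatures
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T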


You could, but in many cases you wouldn't want to. You will get superior results with a fixed compute budget by relying on external tool use (where "tool" is defined liberally, and can include smaller narrow neural nets like GraphCast and AlphaGo) rather than stuffing all tools into a monolithic model.


Isn't that what the original ResNet project disproved? Rather than trying to hand-engineer what the NN should look for, just make it deep enough and give it enough training data, and it'll figure things out on its own, even better than if we told it what to look out for.

Of course, cost-wise and training-time-wise, we're probably a long way off from being able to replicate that in a general-purpose NN. But in theory, given enough money and time, it's presumably possible, and conceivably would produce better results.


I'm not proposing hand-engineering anything, though. I'm proposing giving the AI tools, like a calculator API, a code interpreter, search, and perhaps a suite of narrow AIs that are superhuman in niche domains. The AI with tool use should outperform a competitor AI that doesn't have access to these tools, all else equal. The reason should be intuitive: the AI with tool use can dedicate more of its compute to the reasoning that is not addressed by the available tools. I don't think my views here are inconsistent with The Bitter Lesson.
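
To make that concrete, the loop is roughly the following (the names model, safe_eval, and web_search are hypothetical stand-ins, not a real API):

    TOOLS = {
        "calculator": lambda expr: str(safe_eval(expr)),  # stand-in helper
        "search": lambda query: web_search(query),        # stand-in helper
    }

    def answer(prompt):
        while True:
            step = model.generate(prompt)             # model may emit a tool call
            if step.tool is None:
                return step.text                      # no tool needed: final answer
            result = TOOLS[step.tool](step.args)      # offload work to the tool
            prompt += f"\n[{step.tool} -> {result}]"  # feed the result back in

Every token of reasoning the tools absorb is capacity the model can spend on everything else.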


Exactly what DeepSeek-V3 is doing.


reading this thread was a good reminder that being intelligent but closed off to alternative hypotheses is the same thing as being ignorant


This could make for an interesting UI for exploring clusters in data. I only wish k-d trees could handle higher dimensions


You'll love this site :)

https://treevis.net/


What do you mean? A k-d tree handles k dimensions. Generating a useful 2-D representation (= projection) of more dimensions is the hard part.


I remember reading that for k-d trees to be able to split on all k dimensions, the dataset needs more than 2^k points, which becomes unwieldy pretty quickly


… yes to the 2^k, but only because if that threshold isn't met, performance devolves to a linear search. By themselves, k-d trees can handle any number of records.
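
The arithmetic behind that rule of thumb: the split axis cycles through the dimensions by depth, so every axis gets used at least once only when the tree is at least k levels deep, and a balanced tree of depth k holds about 2^k points. Below that, queries are still correct, just not faster than brute force. A quick illustration (assuming scipy is installed):

    import numpy as np
    from scipy.spatial import cKDTree

    k = 20
    print(2 ** k)                        # 1048576 points "needed" for depth k
    pts = np.random.rand(1000, k)        # far fewer than 2^k
    tree = cKDTree(pts)
    dist, idx = tree.query(pts[0], k=5)  # still correct, just ~linear-time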


Yep. Also, while they can have issues with dataset sizes below 2^k, it's interesting to note their use in accelerating clustering algos like DBSCAN. They do make neat visualizations, though: https://marimo.app/?slug=x5fa0x
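
scikit-learn will even let you request the k-d tree index for DBSCAN's neighborhood queries explicitly, e.g.:

    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=2000, centers=4, n_features=3, random_state=0)
    labels = DBSCAN(eps=0.5, min_samples=5, algorithm="kd_tree").fit_predict(X)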


For low-dimensional (such as 2-D) projections of high-dimensional data, especially useful for visualization, take a look at UMAP: https://umap-learn.readthedocs.io/en/latest/
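
Typical usage is just a couple of lines (pip install umap-learn; the random data here is purely for shape):

    import numpy as np
    import umap

    X = np.random.rand(500, 64)                       # 500 points in 64-D
    emb = umap.UMAP(n_components=2).fit_transform(X)  # project to 2-D
    print(emb.shape)                                  # (500, 2)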


Interatomic potentials are a promising scale, and there have been some interesting recent developments using equivariant graph neural networks: https://www.nature.com/articles/s41467-022-29939-5 and https://arxiv.org/pdf/2206.07697


When is a good time to buy AAPL on the dip?


You should use a throwaway account without your name so as not to impact your company


The story distinctly says the sea monster uses its pouch for small kids, so if that part is explained, they wouldn't have to fear it as they get older


There's a good children's book about Eskimo undersea child-snatching monsters: https://www.annickpress.com/Books/A/A-Promise-Is-a-Promise

They don't use bags, though it's always possible that that's an adaptation of the original story. It seems more likely that there is cultural variation in traditional stories.

There is no need to make it impossible for a monster to target adults. You can just say that they target children.

https://en.wikipedia.org/wiki/Qallupilluit


Models trained on GPT output might be more distilled and specialized, but that wouldn't improve generalization



I disagree with this. If you give GPT information that was not part of its training data and ask it to make question-and-answer pairs from that information, you are adding higher-quality breadth to the training corpus.

Phi-2 seems like pretty good proof of that.
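
A sketch of what that pipeline might look like with the openai client (the prompt and model name are placeholders):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def make_qa_pairs(document: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": "Write 5 question/answer pairs grounded only "
                           "in the following text:\n\n" + document,
            }],
        )
        return resp.choices[0].message.content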


That's the point: they get less good at everything, but really good at one or a few things.

The real benefits here are:

1. It's much cheaper and faster to train a bunch of specialized models once you have a single good LLM

2. You probably can't get the same capabilities from a specialized model by training it directly.

