Instagram tip: if you tap the "Instagram" wordmark at the top (in the mobile app), you can select "Following" and get a feed of only posts from accounts you follow, with no suggested posts and no reels.
I end up going through that feed in a few minutes and it insulates me from the endless scrolling.
Facebook mobile tip: if you click on the burger menu and select "Feeds" you will be taken to a page with a list of different feeds at the top. If you then select the "Friends" tab you will see only posts from your friends. Doesn't get rid of ads, unfortunately, but it does get rid of all the crap from recommended pages, etc...
You can't, and I've watched as they've added/removed UI to indicate that you can even press it. I'm glad the feature is there, but it's clear Meta doesn't want you finding it.
You're saying roughly "you can't trust the first answer from an LLM but if you run it through enough times, the results will converge on something good". This, plus all the hoo-hah about prompt engineering, seem like clear signals that the "AI" in LLMs is not actually very intelligent (yet). It confirms the criticism.
Not exactly. Let's say, you-the-human are trying to fix a crash in the program knowing just the source location. You would look at the code and start hypothesizing:
* Maybe, it's because this pointer is garbage.
* Maybe, it's because that function doesn't work as the name suggests.
* HANG ON! This code doesn't check the input size, that's very fishy. It's probably the cause.
So, once you get that "Hang on" moment, here comes the boring part of setting breakpoints, verifying values, rechecking observations and finally fixing that thing.
LLMs won't get the "hang on" part right, but once you put it right in their face, they will cut through the boring routine like there's no tomorrow. And you can also spin up 3 instances to investigate 3 hypotheses and give you some readings on a silver platter. But you-the-human need to be calling the shots.
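A rough sketch of what that "3 instances, 3 hypotheses" fan-out could look like; ask_llm(), CRASH_CONTEXT and the hypothesis strings are placeholders for illustration, not any particular vendor's API:

```python
from concurrent.futures import ThreadPoolExecutor

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion call for your provider.
    return f"(model output for: {prompt[:60]}...)"

CRASH_CONTEXT = "stack trace + the suspicious function, pasted here"

HYPOTHESES = [
    "The pointer passed into the parser is garbage by the time it's dereferenced.",
    "That helper function doesn't do what its name suggests.",
    "The missing input-size check lets an oversized buffer through.",
]

def investigate(hypothesis: str) -> str:
    prompt = (
        f"Context:\n{CRASH_CONTEXT}\n\n"
        f"Hypothesis: {hypothesis}\n"
        "List the breakpoints and value checks that would confirm or refute it, "
        "and what the fix would look like if confirmed."
    )
    return ask_llm(prompt)

# Fan out one instance per hypothesis; the human reads the three reports
# and decides which one actually holds up.
with ThreadPoolExecutor(max_workers=3) as pool:
    reports = list(pool.map(investigate, HYPOTHESES))
```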
You can make a better tool by training the service (some of which involves training the model, some of which involves iterating on the prompt(s) behind the scenes) to get a lot of the iteration out of the way. Instead of users having to fill in a detailed prompt, we now have "reasoning" models which, as their first step, dump out a bunch of probably-relevant background info to try to push the next tokens in the right direction. A logical next step, if enough people run into the OP's issue here, is to have it run that "criticize this and adjust" loop internally.
But it all makes it very hard to tell how much of the underlying "intelligence" is improving vs how much of the human scaffolding around it is improving.
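For what it's worth, that "criticize this and adjust" loop is easy to picture as scaffolding. A minimal sketch, assuming a generic ask_llm() stand-in rather than any real product's pipeline:

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion call for your provider.
    return f"(model output for: {prompt[:60]}...)"

def answer_with_self_critique(question: str, rounds: int = 2) -> str:
    draft = ask_llm(question)
    for _ in range(rounds):
        critique = ask_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List concrete errors, omissions, or unsupported claims in the draft."
        )
        draft = ask_llm(
            f"Question: {question}\nDraft answer: {draft}\nCritique: {critique}\n"
            "Rewrite the answer so every point in the critique is addressed."
        )
    return draft  # the user only ever sees this final pass
```

From the outside you just see a better first answer, which is exactly why it gets hard to attribute improvements to the model itself.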
Yeah, given the stochastic nature of LLM outputs, this approach and the whole field of prompt engineering feel like a classic case of cargo cult science.
If you find this interesting, I strongly recommend the book _The Light Eaters_ by Zoë Schlanger [0]. She discusses this finding as well as other sense-abilities of plants. Recent science has found pretty amazing things.
If I recall correctly: flowers are often shaped like dish antennas to collect sound vibrations, and plants can distinguish the wing-beat frequency of their preferred pollinator from the frequencies of other insects, and will act only for their pollinators.
I listened to that book and enjoyed it. But that said, I'm torn between friendliness to the general concept, and skepticism based in part on the bias of proponents to deeply desire plants to display something like intelligence (a bias I share).
For example the most amazing claims in the book were around the ability of Boquila trifoliolata to dynamically mimic other plants.
I definitely agree that it would've been nice to have images in the book, as it was hard to get a sense of exactly how well Boquila was mimicking neighbouring plants!
But in reference to the linked article, I will say that the researchers interviewed in the book (and I got that sense for Zoë as well) were in agreement with you that the research didn't support a vision-based mechanism. But everyone agrees that the imitation is going on. The researchers in the book suggest a gene-transfer-based mechanism instead! (Mentioned briefly in your linked article.)
You seem to be on some campaign against atheists today, which needs to stop now. It breaks multiple guidelines and destroys what HN is for. It's great to ponder the big existential questions; I do plenty of it myself. But when discussing these topics here we need to find a way to do so without being derisive towards others.
That comment is actually pretty cool because it illustrates a thinking process that a lot of the people we share the world with have.
They split the world into two realms, the physical and the spiritual, and mental processes like hearing are the domain of the spiritual.
This can be dated back to Descartes, who believed that body and soul are separate substances, and even further back to antiquity.
In today’s world there’s mounting evidence for explaining all cognitive phenomena based on the physical world, so these people feel under assault all the time.
OP’s statement about "atheists" reveals his feelings towards them, and his way of trying to understand them.
It’s fascinating how these old ways of thinking manage to stick around despite all our advancements in science. And it’s crucial to be aware that they exist.
Do you think that if we simulated a bee, molecule for molecule, that this simulation would behave differently from the real thing because the simulation fails to replicate its soul?
What kinds of behaviors in animals/humans does this soul affect? How do you believe it interfaces with nervous systems in general?
I thought embeddings were the internal representation? Does reasoning and thinking get expanded back out into tokens and fed back in as the next prompt for reasoning? Or does the model internally churn on chains of embeddings?
I'd direct you to the 3Blue1Brown presentation on this topic, but in a nutshell: the semantic space for an embedding can become much richer than the initial token mapping due to previous context, but only during the course of predicting the next token.
Once that's done, all the rich nuance achieved during the last token-prediction step is lost, and then rebuilt from scratch again on the next token-prediction step (oftentimes taking a new direction due to the new token, and, often more powerfully, due to any changes at the tail of the context window such as lost tokens, dropped messages, re-arrangement due to summarizing, etc.).
So if you say "red ball" somewhere in the context window, then during each prediction step that will expand into a semantic embedding that matches neither "red" nor "ball"; but that richer information is not "remembered" between steps, it's rebuilt from scratch every time.
There's a certain one-to-oneness between tokens and embeddings. A token expands into a large amount of state, and processing happens on that state and nothing else.
The point is that there isn't any additional state or reasoning. You have a bunch of things equivalent to tokens, and the only trained operations deal with sequences of those things. Calling them "tokens" is a reasonable linguistic choice, since the exact representation of a token isn't core to the argument being made.
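A toy decode loop makes the "no additional state between steps" point concrete; transformer_forward() here is a dummy stand-in for a real model (and real inference adds a KV cache, but only as an optimization of the same computation, not as extra writable state):

```python
from typing import List

VOCAB_SIZE = 50_000

def transformer_forward(tokens: List[int]) -> List[float]:
    # Stand-in for the real model: maps the whole token sequence to a
    # distribution over the next token. All the rich contextual embeddings
    # live only inside this call.
    return [1.0 / VOCAB_SIZE] * VOCAB_SIZE  # dummy uniform distribution

def argmax(probs: List[float]) -> int:
    return max(range(len(probs)), key=probs.__getitem__)

def generate(prompt_tokens: List[int], n_steps: int) -> List[int]:
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        probs = transformer_forward(tokens)  # contextual state built here...
        tokens.append(argmax(probs))         # ...then discarded; only tokens persist
    return tokens
```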
> Eink mode is not a closed-file format reader but rather a form of responsive web design (RWD) integrated into the website itself.
suggests to me that it's a display mode you have to enable in the CSS design of the site, like a "print" layout. I.e., this particular version is not software you can use on any site; it has to be baked into the site.
Reportedly [0], SoftBank is looking to invest $40B for a share of OpenAI.com that would value it at $260B.
Given that OpenAI.org currently wholly owns OpenAI.com (doesn't it? even Msft didn't get a stake?), I have trouble seeing (a) how the $40B valuation would stand up in court if anyone could challenge it (and I believe Musk is trying to do that?), but also (b) how Musk's $100B pushes up the valuation more than a SoftBank investment would do.
Wikipedia says his great-grandfather’s name was also Elon. Compelling as the other theory is, this seems much more plausible as the reason for his name.
> Their first child, Elon Reeve Musk, was born in 1971, named after Maye's grandfather J. Elon Haldeman, with the name Reeve after her maternal grandmother's maiden name.