
Would you please do me (and others) a favor:

What value does an LLM hold intrinsically?

Let's say you "brought an LLM from today."

Does that mean just a multi-gig file? What is INSIDE the LLM that would be of value? How would one speak to an LLM with 80s tech, and what could one glean from it?

ELI5 an LLM:

BARD: https://i.imgur.com/ahRVECz.png

OpenAI: https://i.imgur.com/Rbk5BD6.png

Bing: https://i.imgur.com/zVJ1tu6.png

--

So, how would one explain to 80s folks what an LLM even is, when we can't even ELI5 it in 2024?



I don’t think they’d have any trouble with the math; it’s just a bunch of regressions and matrix-vector products (matvecs), right?
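
For flavour, here is a toy sketch of that math in pure Python. The matrix, bias, and sizes are made up for illustration; a real model stacks billions of such numbers across many layers, but nothing here would have puzzled a numerical programmer in the 80s:

    import math

    def matvec(W, x):
        # y[i] = sum_j W[i][j] * x[j] -- plain linear algebra
        return [sum(w * xj for w, xj in zip(row, x)) for row in W]

    def layer(W, b, x):
        # An affine map plus a tanh squash: structurally a regression.
        y = matvec(W, x)
        return [math.tanh(yi + bi) for yi, bi in zip(y, b)]

    # Toy 3x3 weights and bias, made up for illustration; a real LLM
    # is billions of such numbers stacked into many layers.
    W = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [0.4, 0.1, 0.2]]
    b = [0.0, 0.1, -0.1]
    print(layer(W, b, [1.0, 0.5, -0.5]))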

I think the process of collecting and storing all the data would be more mind blowing to them—of course they were at the beginning of Moore’s law, so they could see the trajectory if they looked for it, but it is one thing to stand on the coast with waves lapping at your ankles and imagine how the ocean gets deeper as you keep going and another to get chucked out of a helicopter in the middle of the Pacific.


In a way that someone in the 80s could understand?

An LLM is a very highly compressed store of knowledge combined with an advanced parser that understands questions in plain English. A consequence of the compression is that sometimes the answers lose some accuracy, which is a deliberate trade-off to make it work at all.


Neural networks were known about in the 80s; they were theorised about in the 1800s, ffs, and the first computer-based NNs were built in the 1950s.


Now paint me a picture of a cat. Good LLM.


LLMs don't paint.


My story plot would surely include the LLM (a coefficients file, same as today) plus the code to run it, so 80s humans could run it on a Cray, ask it questions, and get answers (after some time :D).
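
Something like this loop, roughly. Note that next_token_logits here is a dummy stand-in (deterministic noise) just so the sketch runs end to end; the real forward pass would be the big stack of matvecs over the coefficients file:

    import random

    VOCAB = 256  # pretend byte-level vocabulary

    def next_token_logits(weights, tokens):
        # Dummy stand-in for the real forward pass (the stack of matvecs),
        # returning deterministic noise so the sketch actually runs.
        rng = random.Random(sum(tokens))
        return [rng.random() for _ in range(VOCAB)]

    def generate(weights, prompt_tokens, steps):
        # Greedy decoding: append the highest-scoring next token each step.
        tokens = list(prompt_tokens)
        for _ in range(steps):
            logits = next_token_logits(weights, tokens)
            tokens.append(max(range(VOCAB), key=logits.__getitem__))
        return tokens

    # weights would come from the multi-gig coefficients file; None here.
    print(generate(weights=None, prompt_tokens=[72, 105], steps=5))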

The LLM could explain what it is itself... (if there weren't more important questions to ask; contention would ensue).


An LLM is a lossy compression of the internet. We could provide it in a form that is directly executable on 80s computers, though GPT-4 tries to convince me that it is practically impossible and that the reduced model would be much weaker (somebody doesn't want to be sent to the 80s ;)
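
Back-of-envelope on the compression framing (all figures are rough, assumed ballpark numbers, not measurements):

    params = 70e9                 # a 70B-parameter model
    bytes_per_param = 2           # 16-bit weights
    model_bytes = params * bytes_per_param   # ~140 GB
    corpus_bytes = 10e12          # training text, order of magnitude ~10 TB
    print(f"model: {model_bytes / 1e9:.0f} GB")
    print(f"corpus: {corpus_bytes / 1e12:.0f} TB")
    print(f"ratio: ~{corpus_bytes / model_bytes:.0f}x, and lossy at that")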


Yes, that means just a multi-gig file.

The hard part of LLMs (and current AI in general) is training, which is orders of magnitude harder than inference.
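
The usual rule of thumb makes the gap concrete: training costs roughly 6*N*D FLOPs for N parameters and D training tokens, versus about 2*N FLOPs per generated token. Plugging in illustrative (assumed) numbers:

    # Rule of thumb: training ~ 6*N*D FLOPs, one generated token ~ 2*N FLOPs.
    # N and D below are assumed, illustrative values.
    N = 70e9    # parameters
    D = 2e12    # training tokens
    train_flops = 6 * N * D              # ~8.4e23 FLOPs
    flops_per_token = 2 * N              # ~1.4e11 FLOPs
    print(f"training: {train_flops:.1e} FLOPs")
    print(f"per generated token: {flops_per_token:.1e} FLOPs")
    print(f"training equals ~{train_flops / flops_per_token:.0e} tokens of inference")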

If somehow we had a way to travel to the future in the 70s, train the models, and then come back, we would be in Star Trek right now.



