
> imagined realities.

Imagined realities are a real part of reality.

> so LLMs are an approximation of an approximate model of reality?

Yes, and we as humans have a mental model that is just an approximation of reality. And we read books that are just an approximation of another human's approximation of reality. Does that mean that we are bullshit because we rely on approximations of approximations?

You're being way too pedantic and dismissive. Models are models, regardless of how limited and imperfect they are.



> Models are models

Random aside -- I have a feeling, dunno why, that you might enjoy this type of thing. Maybe not. But maybe. https://www.reddit.com/r/Buddhism/comments/29j08o/zen_mounta...

> Imagined realities are a real part of reality.

Now we're deeper into it -- I actually agree, somewhat. See above for deeper insight.

These LLM systems output "stuff" within our reality, based on other things in our reality. They are part of our reality, outputting stuff as part of reality about the reality they are in. But that doesn't mean the statistical model at the heart of an LLM is designed to estimate reality -- it estimates the probability distribution of human language given a set of conditions.

LLMs are modelling reality, in the same way that my animal pictures image classifier is modelling reality. But neither is explicitly designed with that goal in mind. An LLM is designed to output the next most likely word, given conditions. My animal pictures classifier is designed to output a label representative of the input image. There's a difference between being designed to have a model of reality, and being a model of reality because the thing being modelled is part of reality anyway. I believe it's an important distinction to make, considering the amount of bullshit marketing hype cycle stuff we've had about these systems.
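To make that concrete, here's a toy Python sketch of the contract an LLM actually fulfils -- a probability distribution over the next token, given a context. The vocab and scoring here are entirely made up for illustration; a real model computes the scores from trained weights. The point is that nothing in the interface mentions reality.

    import math, random

    # Toy stand-in for a trained network -- arbitrary fixed scores,
    # purely for illustration. The interface is the point: score every
    # token in the vocabulary, conditioned on the context so far.
    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def next_token_logits(context):
        return [0.5 if tok in context else 1.0 for tok in VOCAB]

    def softmax(logits):
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def generate(context, steps=5):
        out = list(context)
        for _ in range(steps):
            probs = softmax(next_token_logits(out))
            out.append(random.choices(VOCAB, weights=probs)[0])
        return out

    print(generate(["the", "cat"]))

Any "model of reality" leaks in only through the data that shaped the scores; the objective itself is just "match the distribution of the text".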

edit -- my personal software project translating binary data files models reality too: data shown on a screen on some device, modelled as YAML files and back again. Most software is approximation-of-reality soup, which is why I kind of don't see that as some special property of machine learning models. The shape of the problem is sketched below.
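For illustration only -- the actual formats in my project are different, and the record layout here is hypothetical (PyYAML assumed). But the shape is the same: a fixed binary layout on one side, a human-editable YAML model on the other, with a lossless round trip between them.

    import struct
    import yaml  # PyYAML

    # Hypothetical layout: uint16 id, uint32 value, little-endian.
    RECORD = struct.Struct("<HI")

    def binary_to_yaml(blob):
        records = [{"id": rid, "value": val}
                   for rid, val in RECORD.iter_unpack(blob)]
        return yaml.safe_dump(records)

    def yaml_to_binary(text):
        return b"".join(RECORD.pack(r["id"], r["value"])
                        for r in yaml.safe_load(text))

    blob = RECORD.pack(1, 42) + RECORD.pack(2, 7)
    assert yaml_to_binary(binary_to_yaml(blob)) == blob  # round trip holds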

> Does that mean that we are bullshit because we rely on approximations of approximations?

The pessimist in me says yes. We are pretty rubbish as a species if you look at it objectively. I am a human being with different experiences and mental models from yours. That doesn't mean I'm right! Which is why I said "I think". It's just my opinion that they are bullshit machines. It's a strongly held opinion. But you're totally free to have a different one.

Of course, there's nuance involved.

Running with the average of averages thing -- I'm pretty good at writing code. I don't feel like I need to use an LLM because (I would say with no real evidence to back it up) I'm better than average. So, a tool which outputs an average of averages is not useful to me. It outputs what I would call "bullshit" because, relative to my understanding of the domain, it's often outputting something "more average" than what I would write. Sometimes it's wrong, and confident about being wrong.

I'd probably be pretty terrible at writing corporate marketing emails. I am definitely below average. So having a tool which outputs stuff which is closer to average is an improvement for me. The problem is -- I know these models are confidently wrong a lot of the time because I am a relative expert in a domain compared to the average of all humans.

Why would I trust an LLM system, especially with something where I don't feel I can audit/verify/evaluate the response? I know it can output bullshit -- so everything it outputs is now suspect, possible bullshit. It is a question of integrity.

On the flip side -- I can actually see an argument for these things being considered so-called Oracles too, just not in the common sense of the word. They are a statistical representation of how we as a species use language to communicate ideas and concepts. They are reflecting back part of us. They are a mirror. We use mirrors to inspect our appearance and, sometimes, to change our appearance as a result. But we're the ones who have to derive the insights from the mirror. The Oracle is us. These systems are just mirrors.

> You're being way too pedantic and dismissive.

I am definitely pedantic. Apologies that you felt I was being dismissive. I'm not trying to be. The averages of averages thing was meant to be a playful joke, as was the finite/infinite thing. I am very assertive, direct and kind of hardcore on certain specific topics sometimes.

I am an expression of the reality I am part of.

I am also wrong a lot of the time.



