Apple Researchers Show Critical Flaw in AI (latimes.com)



LLMs turn traditional computing upside down.

Instead of very accurate results at low cost, they produce inaccurate results at high cost.

Generalized intelligence and reasoning are not achievable by brute force statistical simulation --- regardless of the amount of money and hope invested/wasted.


Finally, the highly reputable science publication LA Times provides proof that LLMs are in fact large language models, rather than large math solvers or large fact models.


> ...large language models, rather than large math solvers or large fact models.

And why not?

Math is the most logical and precise language ever invented.

If LLMs can truly think and reason and understand, I would expect them to excel at math problems. Or at least admit that they can't do math and logic.


>If LLMs can truly think and reason and understand

But they can't, which was kind of my point. They are clever token predictors. They know language, which makes them really good text generators ("stochastic parrots"), but even a trivial task like counting the letters in a word is hit-or-miss, especially if the solution is not found in their training data.
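For contrast with that kind of probabilistic token prediction, here is what the letter-counting task looks like as an exact computation. The word is just an arbitrary example I picked, not one from the thread or the article:

    # Counting letters is a trivial, deterministic operation for ordinary code,
    # whereas an LLM may or may not get it right depending on what it has seen.
    word = "strawberry"          # example word; assumption, not from the source
    print(word.count("r"))       # prints 3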

I don't understand why people find this surprising. It's remarkable that LLMs can solve some problems at all, not the other way around.


I posed this exact same test problem to a selected LLM.

It produced the correct answer --- including pointing out the irrelevance of the "smaller" kiwis.

Then I changed the person's name (John instead of Oliver) and jiggled the numbers a bit. It confidently produced an answer including an explanation with the irrelevance noted --- but it still did the simple addition wrong.
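For readers who haven't seen it, the test problem being referenced is the kiwi question from the Apple paper. The sketch below gives the arithmetic roughly from memory, so treat the exact figures as an assumption; the point is that the sum is trivial and the "smaller kiwis" clause changes nothing:

    # GSM-NoOp-style kiwi problem, approximately as it appears in the paper
    # (figures are my recollection; the structure is what matters):
    # "Oliver picks 44 kiwis on Friday, 58 on Saturday, and on Sunday twice as
    #  many as on Friday, but 5 of Sunday's kiwis were a bit smaller than average.
    #  How many kiwis does Oliver have?"
    friday, saturday = 44, 58
    sunday = 2 * friday                      # 88
    total = friday + saturday + sunday
    print(total)                             # 190 -- the "smaller" kiwis are irrelevant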

It obviously doesn't *understand* anything. It just regurgitates what it finds posted somewhere on the internet.

Frankly, I wouldn't trust anything it says. It's hard to imagine what this would be truly useful for --- propaganda maybe?

Billions and billions of dollars are being wasted on this.


I wonder if AI being genuinely useful for programming caused some to miscalculate its usefulness in general.


I wonder if watching too much science fiction caused some to miscalculate its usefulness in general.

Expecting real intelligence to "emerge" from a binary logic playback device (aka a computer as we know it) is just a variation on the Infinite Monkey Theorem in my opinion. In other words, the odds are not quite zero --- but they are very near it.

https://www.sciencealert.com/scientists-confirm-monkeys-do-n...


I think a lot of people overestimate their usefulness there, too, tbh, possibly because they’re new and shiny. In actual use the AI things feel more like having a massively over-confident intern; trouble is, interns learn (that’s kind of the whole point). The magic robot does not. One could question how useful having an eternally overconfident yet incompetent intern is.





