> 1. There's this huge misconception that LLMs are literally just memorizing stuff and repeating patterns from their training data

So then, what are they doing?

I'm seeing people creating full apps with GPT-5-pro, but nothing is novel.

Just discussed the "impressiveness" of it creating a gameboy emulator from scratch.

(There are over 3,500 Game Boy emulators on GitHub. I would be surprised if it failed to produce a solution with that much training data.)

Where are the novel breakthroughs?

As it stands today, I'm sure it can produce a new SSL implementation or whatever else it has been trained on, but to what benefit?
