Hacker News

I don’t know about the human part, but we absolutely understand how LLMs do what they do. They’re not magic.



We understand the architecture but we don't understand the weights.
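A toy illustration of that distinction, sketched in NumPy (the shapes, names, and single-head setup here are illustrative assumptions, not any real model's internals): the forward pass, i.e. the architecture, is a few transparent lines of math, while the weight matrices are just arrays of learned numbers whose inspection reveals almost nothing about what computation they implement.

```python
import numpy as np

# Architecture: one self-attention head -- fully transparent math.
def attention(x, W_q, W_k, W_v):
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over the last axis, computed in a numerically stable way.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 16
# "Weights": in a real model these are billions of trained numbers.
# Printing them gives you the values, not the meaning.
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

x = rng.standard_normal((4, d))    # 4 token embeddings
out = attention(x, W_q, W_k, W_v)
print(out.shape)                   # shape is knowable; what W_q "does" is not
```

The point of the sketch: everything about the computation graph is written down and inspectable, yet the question "what does this particular W_q encode?" is exactly the open interpretability problem the thread is arguing about.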


We also understand, down to a very very very microscopic level, how neurons work. We also know a helluva lot about the architecture of the brain. Does that mean we can explain our own intelligence, how our minds actually work? Nope.


No, we don't. No, it's not "magic". No, we don't understand what the black box is doing.


For some values of “we”


For every value of "we". "I understand the internals of GPT" is the fastest way to demonstrate you have no idea what you're talking about.



