
> Altman decided to let GPT-5 take a stab at a question he didn’t understand. “I put it in the model, this is GPT-5, and it answered it perfectly,” Altman said.

If he didn't understand the question how could he know the model answered it perfectly?



Pay close attention to these demos. Often the AI's answer is OK but not amazing, but because it's shaped like the right thing, the people presenting don't look any deeper.

It makes selling improvements fairly hard, actually. If the last model already wrote an amazing poem about hot dogs, the English language doesn't have superlatives left for what the next model creates.


Usually, less perfect is a better sign of integrity.


This statement is definitely just marketing hype, but if we're being pedantic, there are tons of questions that are hard to answer but have easy-to-verify solutions, e.g., all NP-complete problems.
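A toy sketch of that asymmetry in Python (the function and the numbers are made up for illustration, not from the article): Subset Sum is NP-complete, so no polynomial-time algorithm for *finding* a solution is known, yet *checking* a claimed solution is a one-line sum.

    # Verifying a Subset Sum certificate takes linear time,
    # even though finding one is NP-hard in general.
    def verify_subset_sum(numbers, target, certificate):
        """numbers: list of ints; target: desired sum;
        certificate: indices of the claimed subset."""
        if len(set(certificate)) != len(certificate):
            return False  # indices must be distinct
        if any(i < 0 or i >= len(numbers) for i in certificate):
            return False  # indices must be in range
        return sum(numbers[i] for i in certificate) == target

    # 3 + 8 + 4 == 15, so this claimed solution checks out.
    print(verify_subset_sum([3, 9, 8, 4, 5, 7], 15, [0, 2, 3]))  # True

That's the charitable reading: checking an answer you didn't derive yourself only works when the question comes with a cheap verification procedure, which most questions don't.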


No, that is being generous towards marketing hype.

If we are being pedantic, we could never accept "question we don't know how to answer" as a possible interpretation of "question we don't understand".


There's also nothing here that wasn't true of GPT-4, so why is bragging about it for GPT-5 notable?


>If he didn't understand the question how could he know the model answered it perfectly?

Also, 'thing that I don't know about but is broadly and uncontroversially known and documented by others' is sort of dead center of the value proposition for current-generation LLMs and also doesn't make very impressive marketing copy.

Unless he's saying that he fed it a question unknown to experts in the field and it figured it out, in which case I am very skeptical.


There's plenty of things to roast Altman about, but this isn't really one of them. A specialized problem might not be understood by someone unversed in that field even if the solution is simple and knowable. "What is the Euler characteristic of a torus?" for a rudimentary off-the-cuff example. Altman could easily know or check the answer ("0") without ever understanding the meaning of the question.
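(For reference, a quick way to check that answer: by the genus formula for closed orientable surfaces, χ = 2 − 2g; the torus has genus g = 1, so χ = 2 − 2·1 = 0.)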


There's nothing new in your claim (or his) that wasn't 100% the case with GPT-4. This is Altman's brag about GPT-5, not generative AI in general, so it's gotta say something that couldn't be said about GPT-4, or it's just bullshit filler.


Maybe, after thinking for a really long time, GPT-5 said "42". The answer might have been so shocking to Altman that now he'll have to build GPT-6.

But more seriously: it's ridiculous to think you understand the answer when you don't understand the question in the first place.


It takes a really special kind of self-delusion to recognize that you don't understand the question and also think you are qualified to evaluate the answer.


I can only assume that the GPT's answer included a thorough explanation of the question, so that dear Sam first came to understand his own question and could then read on to see that the answer was good. One can dream, or at least that's what he wants us to do.


Couldn't someone who does understand it verify it for you?


World-class grifter, grifting hard.



