
The next step will be for OpenAI to number their releases based on year (à la what Windows did once innovation ran out).


Windows 95 was a big step from the previous release, wasn't it?

And later, Windows reverted to version numbers, but I'm not sure they regained much innovation?


People at OpenAI are at the top of their field. This is not a sloppy crowd.


People at the top of their field can be deeply sloppy at times.


I mean it in the kindest way, but scientists might be the sloppiest group I've worked with (on average, at least). They do amazing work, but they're willing to hack it together in the craziest ways sometimes. Which is great in a way. They're very resourceful and focused on the science, not necessarily the presentation or housekeeping. That's fine.


Communication is a big part of science, so it's not great that scientists fail in this area.


This was a big COVID-era lesson: places like the CDC and NIH really need a well-trained PR wing for things like Presidential press conferences, to communicate to the public.


This isn't scientist sloppy, this is salesperson sloppy. Very different.


The engineers, sure. The product team... well, we've seen over the past 2-3 years that AI success isn't necessarily based on quality and accuracy. They are also at the top of their game in terms of how to optimize revenue.


Just because you're the best doesn't mean you're any good.


I don't think the PR people at OpenAI are at the top of their field.


Honestly? They might be.


Their field is pretty much selling sloppiness-as-a-service, tho.

I'm genuinely a bit concerned that LLM true believers are beginning to, at some level, adopt the attitude that correctness _simply does not matter_, not only in the output that spews from their robot gods, but _in general_.


It's kinda crazy to witness. You can see in the main GPT-5 release thread that there are people excusing things like the bot being blatantly wrong about Bernoulli's principle in regard to airplane flight. I wish I could find it again, but it's thousands of comments; one of the comments is literally "It doesn't matter that it's wrong, it's still impressive". Keep in mind we're discussing a situation where a student asks the AI how planes fly! It's literally teaching people a disproven myth!


I'm going to put a vote in for Scam Altman on this one


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and we've asked you many times to stop.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


There is a smell of desperation around OpenAI, so I wouldn't be surprised if this level of hypevibing came from the top.


Don’t worry, they will be able to get plenty of grants for content promoting Trump’s businesses


Trump sucks


So true. There is no reason to join a Series B or later startup in terms of compensation.


Rust is an amazing language once you get over the initial mental hurdle. An important thing to go in with: 99% of programs should not require you to manage lifetimes ('a notation). If you find yourself doing this and aren't writing an inner-loop high-performance library, back up and find another way. Usually this entails using a Mutex or Arc (or other alternatives depending on the scenario) to provide interior mutability or multiple references. This statement might not make sense now, but write it down for when it will.

I use Rust now for everything from CLIs to APIs and feel more productive in it end to end than even Python.


So if you incorporate you can do whatever you want without criminal charges?


Especially since we don't make any assumption that roads are profitable. It is just insane. Cars can lose as much money as it takes but god forbid a train lose 5%.

