I mean it in the kindest way, but scientists might be the sloppiest group I've worked with (on average, at least). They do amazing work, but they're willing to hack it together in the craziest ways sometimes. Which is great, in a way: they're very resourceful and focused on the science, not necessarily the presentation or housekeeping. That's fine.
This was a big COVID-era lesson: places like the CDC and NIH and whatnot really need a well-trained PR wing for things like presidential press conferences, so they can communicate with the public.
The engineers, sure. The product team... well, we've seen over the past 2-3 years that AI isn't necessarily sold on quality and accuracy. They are, however, at the top of their game when it comes to optimizing revenue.
Their field is pretty much selling sloppiness-as-a-service, tho.
I'm genuinely a bit concerned that LLM true believers are beginning to, at some level, adopt the attitude that correctness _simply does not matter_, not only in the output that spews from their robot gods, but _in general_.
It's kinda crazy to witness. In the main GPT-5 release thread you can see people excusing things like the bot being blatantly wrong about Bernoulli's Principle as it relates to airplane flight. I wish I could find the comment again, but the thread runs to thousands of comments; one of them is literally "It doesn't matter that it's wrong, it's still impressive". Keep in mind we're discussing a situation where a student asks the AI how planes fly. It's literally teaching people a long-debunked misconception!