
If we want to be pedantic about language, they aren't bullshitting. Bullshitting implies an intent to deceive, whereas LLMs are simply trying their best to predict text. Nobody gains anything from applying terms tied to human agency and intention.


Plenty of human bullshitters have no intent to deceive. They just state conjecture with confidence.


The authors of this website have published one of the best-known books on the topic[0] (along with a course), and their definition is as follows:

"Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence."

It does not imply an intent to deceive, just a disregard for whether the BS is true or not. In this case, I can see how the definition applies to LLMs, in the sense that they are just doing their best to predict the most likely response.

If you provide them with training data where the majority of inputs agree on a common misconception, they will output similar content as well.
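As a toy sketch (not how a real LLM works, just an illustration of "most likely response" prediction over a made-up corpus): a predictor that returns the most frequent continuation seen in its training data will faithfully repeat whatever the majority of that data says, misconception or not.

    from collections import Counter

    # Hypothetical toy corpus: (prompt, continuation) pairs where the
    # majority of entries repeat a common misconception.
    corpus = [
        ("we use only", "10% of our brains"),
        ("we use only", "10% of our brains"),
        ("we use only", "10% of our brains"),
        ("we use only", "whatever brain regions the task demands"),
    ]

    def predict(prompt):
        # "Most likely response" here is literally the most frequent
        # continuation of this prompt in the training data.
        counts = Counter(c for p, c in corpus if p == prompt)
        return counts.most_common(1)[0][0]

    print(predict("we use only"))  # -> "10% of our brains"

A real model generalizes over far more context than this, but the failure mode is the same: the output tracks the distribution of the training data, not the truth.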

[0]: https://www.callingbullshit.org/


The authors have a specific definition of bullshit that they contrast with lying. In their definition, lying involves intent to deceive; bullshitting involves not caring if you’re deceiving.

Lesson 2, The Nature of Bullshit: “BULLSHIT involves language or other forms of communication intended to appear authoritative or persuasive without regard to its actual truth or logical consistency.”


> implies an intent to deceive

Not necessarily; see H. G. Frankfurt's "On Bullshit".



