Intent does matter if you want to classify things as lies.
If someone told you it's Thursday when it's really Wednesday, we would not necessarily say they lied. We would say they were mistaken, if their intent was to tell you the correct day of the week. If they intended to mislead you, then we would say they lied.
So intent does matter. AI isn't lying; it intends to provide you with accurate information.
The AI doesn't intend anything. It produces, without intent, something that would be called lies if it came from a human. It produces the industrial-scale mass-produced equivalent of lies – it's effectively an automated lying machine.
Maybe we should call the output "synthetic lies" to distinguish it from the natural lies produced by humans?
> statements produced without particular concern for truth, clarity, or meaning, distinguishing "bullshit" from a deliberate, manipulative lie intended to subvert the truth
It's a perfect fit for how LLMs treat "truth": they don't know, so they can't care.
So you're saying deliberate deception, mistaken statements, and negligent falsehoods should all be considered the same thing, regardless of intent?
Personally, I'd be scared if LLMs were proven to be deliberately deceptive, but I think they currently fall into the latter two camps, if we're doing human analogies.