
In general, LLMs have made many areas worse. People now write content using LLMs without understanding it themselves. It becomes really annoying, especially when you don't realize this at first, ask "did you perhaps write this using an LLM?", and get a "yes".

In programming circles it's also annoying when you try to help someone and get fed garbage output by an LLM.

I believe models for generating visuals (image, video, and sound generation) are much more interesting, since those are areas where errors matter less. Though the ethics of how these models were trained is another matter.



The equivalent trope as recently as five years ago would have been the lazy junior engineer copying code from Stack Overflow without fully grokking it.

I feel humans should be held accountable for the work they produce, irrespective of the tools they used to produce it.

The junior engineer who copied code they didn't understand from Stack Overflow should face the same consequences as the engineer who shipped LLM-generated code without understanding it.



