
Currently revising for my master's exams. Conversations with ChatGPT have been a game changer for my learning.


But how much of what it said was nonsense? And did you spot the nonsense or accept it?


Seems like great training for hard sciences, where spotting nonsense or mistakes is a desirable skill.

May also be useful for "bullshit" disciplines? The Sokal affair showed that some disciplines are perhaps just people doing "GPT" in their heads: https://en.m.wikipedia.org/wiki/Sokal_affair Edit: this one is hilarious: https://www.skeptic.com/reading_room/conceptual-penis-social...


Yeah, it's a mixed bag. Like others have mentioned, because it doesn't say when it's unsure of something, I wouldn't trust it as my sole tutor. But for a subject you already know, it can help you connect the dots and consolidate your learning.


The percentage of nonsense keeps going down as these models get better, though. Even if what you describe is a problem now, it won't be one for long.


That's not necessarily true. As the percentage of nonsense goes down, there is a critical region where people will start to trust it implicitly, without further verification. This can - and likely will - lead to serious problems downstream from where these unverified errors were injected into the set of 'facts' that underpin decisions. As long as the percentage of nonsense is high enough, an effort will be made to ensure that what comes out of the system as a whole is accurate. But once it drops below a certain threshold, the verification step will be seen as useless and will likely be optimized away. If the decision is a critical one, that may have serious consequences.

You see something similar with self-driving vehicles, and for much the same reasons.
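
A minimal sketch of the arithmetic behind this (the error and catch rates here are made up purely for illustration): once the review step gets optimized away, a model that's wrong 1% of the time can leak as many bad facts downstream as a 20% model did while everything was still being checked.

    # Hypothetical numbers: expected count of wrong "facts" that reach a
    # decision unreviewed, with and without a human verification step.

    def undetected_errors(n_facts, error_rate, verify, catch_rate=0.95):
        """Expected number of wrong facts that slip into downstream decisions."""
        wrong = n_facts * error_rate
        # verification catches most errors; skipping it lets everything through
        return wrong * (1 - catch_rate) if verify else wrong

    for error_rate in (0.20, 0.05, 0.01):
        # assume people stop verifying once the error rate "feels" low enough
        verify = error_rate >= 0.05
        slipped = undetected_errors(10_000, error_rate, verify)
        print(f"error rate {error_rate:>4.0%}  verify={verify!s:<5}  "
              f"undetected errors per 10k facts: {slipped:.0f}")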


Does avoiding AI allow one to avoid nonsense?




