
Eventually AI will also be able to reliably audit papers and report on fraud.

There may be newer AI methods of fraud, but they will only buy you time. As both sides progress, a fraud committed to the record with today's technology will almost certainly be detectable by a later technology.

I would guess that we're within 10 years of being able to automatically audit the majority of papers currently published. That thought must give the authors of fraudulent papers the heebie-jeebies.



The problem is that detecting fraud is fundamentally harder than generating plausible fraud: ultimately, a sufficiently good fraud producer can simply produce output that is identically distributed to genuine data.

For the same reason, tools that try to detect AI-generated text are ultimately going to lose the arms race.
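
A toy illustration of the "identically distributed" point (using made-up Gaussian "measurements" and a two-sample KS test, neither of which is anyone's actual fraud or audit method): if the fabricated numbers are drawn from the same distribution as the genuine ones, a statistical check has nothing to latch onto, whereas sloppier fabrication gets flagged.

    # Sketch only: illustrative distributions, not a real audit pipeline.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # "Genuine" measurements from some experiment.
    real = rng.normal(loc=5.0, scale=1.2, size=500)

    # Careful fabrication: drawn from the same distribution as the real data.
    fake_good = rng.normal(loc=5.0, scale=1.2, size=500)

    # Sloppy fabrication: right mean, unrealistically small spread.
    fake_sloppy = rng.normal(loc=5.0, scale=0.3, size=500)

    print(ks_2samp(real, fake_good).pvalue)    # large p-value: looks genuine
    print(ks_2samp(real, fake_sloppy).pvalue)  # tiny p-value: flagged

No amount of cleverness in the test helps against the first case; only independently re-running the experiment would.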


It's not a race, though. Once the fraud is committed to the record, it can no longer advance in sophistication, while mechanisms for detection will continue to advance.


I think the argument is that if you draw your fraudulent data from the appropriate probability distribution, any "detection" method short of independently verifying the results is snake oil.



