
This was already possible long before LLMs came along. I also doubt that an LLM is the best tool for this at scale: if you're talking about sifting through billions of messages, it gets too expensive very fast.
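Back-of-envelope on the cost point, with made-up numbers (the per-token price and average message length below are purely illustrative assumptions, not quotes from any provider):

    # Rough cost sketch for running every message through a large model.
    # All figures are hypothetical placeholders.
    messages = 1_000_000_000          # one billion messages
    tokens_per_message = 200          # assumed average length
    price_per_million_tokens = 1.00   # assumed $/1M input tokens for a big model

    total_tokens = messages * tokens_per_message
    cost = total_tokens / 1_000_000 * price_per_million_tokens
    print(f"~${cost:,.0f} per pass")  # ~$200,000 at these assumptions

Run that every day across every channel you're monitoring and it compounds quickly.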


It's only expensive if you throw all the data directly at the largest models you have. But the usual way to apply LMs to such large amounts of data is by staggering them: very small, fast classifiers run first to flag anything even vaguely suspicious (and you train them to be aggressive - false positives are okay, false negatives are not). Whatever gets flagged is then reviewed by a more advanced model. Repeat the loop with as many tiers as needed for the best throughput.
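A minimal sketch of that staggered setup in Python, with stand-in stage implementations (a keyword screen and two dummy "models" - the watchwords and placeholder logic are invented for illustration); the point is the shape, not the specific classifiers:

    WATCHWORDS = {"attack", "protest", "leak"}  # assumed trigger terms, illustration only

    def cheap_filter(msg: str) -> bool:
        # Stage 1: tiny, fast screen tuned for recall.
        # False positives are fine; false negatives are not.
        return any(w in msg.lower() for w in WATCHWORDS)

    def mid_model(msg: str) -> bool:
        # Stage 2: stand-in for a small classifier re-checking flagged items.
        return len(msg) > 20  # placeholder logic, not a real model

    def large_model_review(msg: str) -> bool:
        # Stage 3: stand-in for the expensive model that only sees the residue.
        return True  # placeholder: pretend the big model confirms what it sees

    def triage(messages):
        stage1 = [m for m in messages if cheap_filter(m)]    # bulk of traffic dropped here
        stage2 = [m for m in stage1 if mid_model(m)]         # much smaller batch
        return [m for m in stage2 if large_model_review(m)]  # tiny fraction pays full price

    print(triage(["lunch at noon?", "the protest starts at five", "cat video lol"]))
    # -> only the flagged message ever reaches the large model

Each stage only pays for what the previous one let through, so the expensive model sees a tiny fraction of the original volume.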

No, OP is right. We are truly at the dystopian point where a sufficiently rich government can track the loyalty of its citizens in real time by monitoring all electronic communications.

Also, "expensive" is relative. When you consider how much the US has historically been willing to spend on such things...


LLMs can do more than anything we had before. Sentiment analysis and keyword searches only went so far; LLMs understand meaning and intent. Cost and scale won't be bottlenecks for long.
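A toy example of the gap being described, assuming Python; the phrases and keyword list are invented for illustration:

    keywords = {"overthrow", "riot"}

    messages = [
        "we should overthrow the council",                 # literal keyword hit
        "time to replace those in charge, by any means",   # same intent, no keyword hit
    ]

    for m in messages:
        hit = any(k in m.lower() for k in keywords)
        print(f"keyword match: {hit} | {m}")

A keyword search flags only the first message; an intent-level classifier would be expected to flag both.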


> if you're talking about sifting through billions of messages it gets too expensive very fast.

Who's paying for that, though? The same dumbasses who get spied on. I don't see cost as a reason why it wouldn't happen; cash is unlimited.


But now, instead of a human going "yes, yes, after a few hours of work I have chosen the target," they can go "we ran more processing on who best to blow away, and it chose 100 more names than any human ever could! Efficiency!"




