
> then the problem is not an LLM abusing it but anyone abusing it

I think that's exactly right, but the point isn't that LLMs are going to go rogue (OK, maybe that's someone's point, but I don't think it's particularly likely just yet) so much as that they will enable humans to go rogue at much higher rates. Presumably in a few years your grandma could get ChatGPT to start executing trades on the market.



With great power comes great responsibility? Today there's nothing stopping grandmas from driving, so whatever could go wrong is already going wrong.


It’s a problem of scale. If grandma could autonomously pilot a fleet of 500 cars we might be worried. Same thing if Joe Shmoe can spin up hundreds of instances of stock trading bots.


You're better off placing your bets on Russian and Chinese hackers or crypto scammers than on Joe Shmoe. But read https://aisnakeoil.substack.com/p/the-llama-is-out-of-the-ba... - there's been no noticeable rise in misinformation.


You don't understand the alignment problem.


Oh, I'm aware of it. I just don't think it holds any merit right now when we're talking about coding assistants.



