I feel like security for CLI users is generally falling behind other systems.
On Android and iOS, any application is supposedly running in its own sandbox, where it can't just randomly access files used by the rest of the system (including other applications).
The web is obviously strongly based around a security model where a website ultimately can't do much in terms of information access (e.g., a properly designed website's information is protected from other websites). As I understand it, this was not always the case (we're not using ActiveX anymore, right?).
In the desktop space, this pattern is supposedly supported by Flatpak, and I think the macOS store (though I'm not sure about this—I haven't actually used either).
Meanwhile, when a random developer decides to run `make` or `npm install` or `mvn install` or even `vim foo.txt`, they're essentially trusting their entire computer to whoever created the content in their working directory.
Will there be some revolution at some point that encourages people to somehow isolate their commandline activities to different realms, thus limiting the scope of any attack like this?
> any application is supposedly running in its own sandbox,
But that fails for developers (which is more or less synonymous with "CLI users"). Development is all about data and fine grained tools, not apps.
You check out a file with git, then edit it with vim, and build it with gcc, which pulls in headers generated with a python script, itself having been configured with, I dunno, cmake gadgetry. Where do you draw the boundaries here?
I mean, you can draw a big circle around all your development activities. Products have been invented that do that. They're called "IDEs", and are sort of the metaphorical opposite of the command-line tools we're discussing, so they're not really a solution for people who choose to use this environment.
Alternatively, you can view the whole development system as a sandbox. Do your work in a separate VM or docker instance, for example. Some people actually do that (in particular folks with windows desktops who need linux tooling will recognize this), and while it's not generally considered a security technique, it certainly could be.
I see what you're saying, but I think we can do better.
I would really like some simple way to limit what build scripts can do. For example, they shouldn't be able to read and write arbitrary files in my home directory, even though they should be able to read /usr/lib and the like. They shouldn't be able to contact the Internet (or even the LAN) without explicit permission. They shouldn't be able to do anything with my display unless I permit it.
Is there some command available for Linux that lets me set up a quick sandbox so I can detect and stop trojans in build scripts? Virtual machines, chroots, containers, and Docker are all good but they don't solve this problem. SELinux has the potential to solve this but it's extremely complex. Instead of "./configure && make", I want to type something like "sandbox ./configure && sandbox make"; sandboxed build scripts should work the same as if they're outside the sandbox.
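Something close to this already exists today with bubblewrap (`bwrap`), the unprivileged sandboxing tool underneath Flatpak. Below is a hypothetical wrapper along those lines; the bind mounts and flags are illustrative, not a vetted security policy:

```shell
# Write a hypothetical "sandbox" wrapper around bwrap: system dirs
# read-only, only the current directory writable, no network.
cat > sandbox <<'EOF'
#!/bin/sh
exec bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --proc /proc \
  --dev /dev \
  --bind "$PWD" "$PWD" \
  --chdir "$PWD" \
  --unshare-net \
  --unshare-pid \
  --die-with-parent \
  "$@"
EOF
chmod +x sandbox
# Usage: ./sandbox ./configure && ./sandbox make
```

Inside the sandbox the build sees a read-only /usr and a writable current directory, and nothing else from your home directory.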
What you describe is called Mandatory Access Control, which contrasts with and supplements standard UNIX Discretionary Access Control (user/file permissions). It is implemented on Linux by AppArmor in addition to SELinux.
I would guess the biggest hurdle in using such systems is the complexity and expertise required in drafting the specific rules that should apply in your threat model.
If you craft rules for each command-line tool, you will run into problems pretty quickly despite all that complexity. Better to use a wrapper that is confined. See e.g. SELinux's sandbox(8), which is more or less a "multiwrapper" in that it sets the context explicitly rather than implicitly by being confined.
The nix package manager comes close to what you’re describing. Each package build is sandboxed so that it can only read files from explicitly defined dependencies. It also confines writes to a subdirectory in /nix/store, assigned to each package.
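For reference, this behaviour is controlled by a single option in Nix's configuration (and it is enabled by default on Linux in current releases):

```
# /etc/nix/nix.conf
sandbox = true
```

With this set, each builder runs in its own namespace, sees only the /nix/store paths of its declared inputs, and can only write to its designated output path.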
Where I work, we by now have all build steps of server-side code occurring in containers, and all local dependencies for that are `docker`, `git`, and `make`.
Though docker does not completely isolate the build process, it definitely limits writable local directories to those explicitly mounted into a container.
An extra bonus is that docker also installs on Windows and on MacOS and is controlled seamlessly, so our developers are free to choose whatever OS is needed for native development.
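As a sketch of what this looks like in practice (the image name and mount layout here are placeholders, not our actual setup), the whole trick is that the container can only write to the bind-mounted project directory:

```shell
# Hypothetical wrapper: run a build command in a throwaway container
# where only the current project directory is mounted and writable.
cat > cbuild <<'EOF'
#!/bin/sh
exec docker run --rm \
  --network none \
  -v "$PWD":/src \
  -w /src \
  "${BUILD_IMAGE:-gcc:13}" "$@"
EOF
chmod +x cbuild
# Usage: ./cbuild make
# (drop --network none for steps that must fetch dependencies)
```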
If you're running docker with the "userns-remap" option, it is a form of sandbox insofar as any userspace isolation provided by Linux is a sandbox; you would probably need a root privilege exploit to break out of it. If there are root privilege exploits in Linux, they should obviously be fixed.
Of course, we should be aware that exploits are quite likely to exist, since much of Linux was designed without a notion of namespaces.
For me, the main issue with docker in terms of security is that it's not clear what security you're giving up when running containers (eg, various projects expect you to do `docker run -v /var/run/docker.sock:/var/run/docker.sock ...`, and things like `docker-compose` essentially specify arbitrary invocations of containers). When ways of escaping are discovered, they get fixed, since those escaping mechanisms are not really considered part of the execution model for containers.
I'm running podman (a docker replacement) on Fedora for many things. It can run as a user without root permissions, does not need a daemon, and containers get their own SELinux policy from the Fedora devs. Far from perfect, but definitely better than nothing.
The discussion is a bit dated, docker now supports user namespaces such that apps can run as root inside a container, but are not root outside of the container.
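For anyone who wants to try it, the remapping is a one-line daemon option in /etc/docker/daemon.json (the `default` value makes Docker create a `dockremap` user and allocate subordinate UID/GID ranges from /etc/subuid and /etc/subgid):

```json
{
  "userns-remap": "default"
}
```

After a daemon restart, root (UID 0) inside containers maps to an unprivileged UID range on the host.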
> You check out a file with git, then edit it with vim, and build it with gcc, which pulls in headers generated with a python script, itself having been configured with, I dunno, cmake gadgetry. Where do you draw the boundaries here?
While Plash itself is outdated, and its design is fundamentally limited by the goal of compatibility with existing Unix programs, I think it's a good proof of concept of how you could design a sandboxed command line, especially if you were building everything from scratch.
Basically, any filename argument you pass to a command is no longer just a string; it becomes a capability granting that command access to that file.
So:
- Check out with git: `git clone https://url foo` includes `foo`, so you're giving it permission to create a directory named `foo`. Other git commands don't take the repo root as an argument. If you're redesigning the world, perhaps they should. Alternately, git could have a custom sandbox that identifies the directory containing .git and grants access to the whole thing.
Note: In Plash you have to stick in `=>` to grant read-write access, but I'll ignore syntax details since I'm thinking more in terms of broad strokes of a hypothetical design.
- Edit with Vim: `vim foo.c` gives permission to `foo.c`. (Vim also supports opening new files within the editor; in theory that could be redirected to some sort of powerbox, though it's harder than with GUIs.)
- Configure with cmake: `cmake .` would grant permission to read/write the current directory.
- Run make: You'd want syntax like `make .` rather than implying the current directory. (Again, Plash has a solution for this but I don't care about the specifics.)
- Generate headers with a Python script: make would invoke the script, passing it the input/output paths as capabilities.
- Build with GCC: One of the Plash examples is `gcc -c foo.c => -o foo.o`.
> - Edit with Vim: `vim foo.c` gives permission to `foo.c`. (Vim also supports opening new files within the editor; in theory that could be redirected to some sort of powerbox, though it's harder than with GUIs.)
Will Vim be allowed to read `~/.vimrc`?
If not, how should I configure Vim system-wide?
(If the answer is "with a daemon", then: is Vim allowed to connect to the daemon and read its configuration?)
If yes, why?
(If the answer is "because you configured the system so", then: how is this different from SELinux?)
Just kidding. :)
The thing is that access to shared state is hard. Any access is a potential security threat.
Finding a balance between functionality and security is one of those areas that I fear are in the NP-hard-equivalent area of system design.
Your IDE comment made me think of this. IDEs store each project completely self-contained in its own directory. How about allowing some directories to be marked as development dirs? Blacklisted commands, when run inside such a directory, would be prohibited from accessing anything outside it, except for read-only access to /usr etc. and read-write access to whitelisted dirs.
For example: I mark ~/school/operating-systems as a development directory. I place git, vim, wget, curl, and make on the blacklist (potentially globally configured). Additionally, I whitelist ~/.config/git for git and ~/.vim for vim. This allows core commands that we consider 'safe', like cd and mv, to be run without restrictions; presumably they are only being used manually or from sandboxed tools like make. But when I execute git from anywhere inside ~/school/operating-systems, it will only be able to read/write inside that directory and within ~/.config/git. Meanwhile, read access to system folders lets make etc. work properly.
With a single global configuration plus directory-specific overrides, it remains pretty straightforward and doesn't require a huge time investment like AppArmor/SELinux do. Meanwhile, all non-dev work remains unaffected. If you need to download and build a random program, just mark its directory as a dev directory and you're pretty safe. It's not perfect, but it's better than nothing.
Well, you can do it right now with firejail. Start with the default config, it prevents access to many sensitive files; then either add more sensitive locations to blacklist, or blacklist your whole home directory and whitelist whatever is necessary.
I don't recall the exact configuration format, but it's really simple.
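From memory, a hypothetical profile along the lines described (the directives are real firejail ones, but the paths are made up) might look like:

```shell
# Write an illustrative firejail profile: home read-only except one
# project tree, a couple of sensitive dirs blacklisted, no network.
cat > build.profile <<'EOF'
blacklist ${HOME}/.ssh
blacklist ${HOME}/.gnupg
read-only ${HOME}
read-write ${HOME}/src/untrusted-project
net none
EOF
# Usage: firejail --profile=./build.profile make
```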
Is this problem not solved by virtual environments or chrooting? Potentially a simple wrapper script could be created around chroot that only installs the programs you need.
Chrooting loses the global configuration files, libraries, and tools that you have installed. This approach has a similar effect; it's not as strong, but it's simpler.
> You check out a file with git, then edit it with vim, and build it with gcc, which pulls in headers generated with a python script, itself having been configured with, I dunno, cmake gadgetry. Where do you draw the boundaries here?
None of this should require access to things like my web browser, my ~/.config directory, or some directory like ~/build/go containing an unrelated project.
> They're called "IDE's"
IDEs don't exist for security reasons, but I think you can at least use the existence of these things as evidence that there is a sensible way of scoping development activities. I imagine it wouldn't be a major hindrance if IDEs did provide this sandbox functionality, where you could still launch a shell within the IDE, and that shell would be in a mount/pid namespace controlled by the IDE.
Personally, I want something that applies not just to development, but to general computer use. If you go back 15 years, it was probably fairly typical for someone to expect to download and run an "EXE" file. Now the expectation is that a website or phone app is used instead, which has fewer security implications.
Surely there must be some path for moving to something more secure when it comes to commandline usage.
> None of this should not require access to things like my web browser, my ~/.config directory
It does! I want to configure git and vim globally, I want to start a development server and use my browser to access it. I want to netcat a file because IT policies restrict ssh...
> I mean, you can draw a big circle around all your development activities. Products have been invented that do that. They're called "IDE's", and are sort of the metaphorical opposite of the command line tools we're discussing, and not really a solution to the people choosing to use this environment.
Maybe? I don't really think it's an opposite thing anymore. Most of my projects are written/browsed/debugged in an IDE but not built/configured/etc. in the IDE, and the testing, git, docker, code review, etc. work just happen in a console instance or web browser next to the IDE.
This is what AppArmor and SELinux are used for. AppArmor has the ability to apply a profile whenever a particular binary is run, and so you can confine the process. SELinux has labels and all that jazz so you could probably configure it in a similar manner (though I think it doesn't support restricting all callers of a binary).
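As a sketch of the AppArmor approach (the profile name and paths here are hypothetical, and a real profile would pull in more abstractions than this), confining build tools might look like:

```shell
# Write a hypothetical AppArmor profile: system paths read-only,
# writes confined to a build tree, network denied.
cat > sandbox.build <<'EOF'
include <tunables/global>

profile sandbox.build {
  include <abstractions/base>

  /usr/** r,
  /etc/** r,
  owner @{HOME}/build/** rw,

  deny network,
}
EOF
# Load it and run a confined command (requires root):
#   apparmor_parser -r sandbox.build
#   aa-exec -p sandbox.build -- make
```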
To a lesser degree, seccomp is similar but you don't have an easy way of applying a profile to a binary (instead the binary would need to self-apply the profile or you'd need to come up with a separate way of running programs so that a profile would be auto-applied).
The problem is that users want to be able to edit any file on the filesystem with their text editor -- if you don't allow vim to edit files in /etc/ then admins won't be able to do their job (though, please use "sudo -e" instead of "sudo vi").
> though I think [selinux] doesn't support restricting all callers of a binary
It can, in two ways: the current context (say, user_dev_t) needs execute permissions on the declared type of the executable (say, git_exec_t), and additionally, you can declare a type transition to be performed per calling context.
To keep with the git example (I admit it's a bit contrived), you could label your .git directory as git_repo_t and deny access for both user_t (normal user context) and user_dev_t (user dev context). Then, you can define a type transition as above (user_dev_t -> git_exec_t => git_t), and allow only the context git_t access to git_repo_t files. With this setup in place:
- git, as called from within user_dev_t works as normally
- git, as called from user_t, has no defined type transition so will try to access git_repo_t under the user_t context, which will fail.
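In (simplified, hypothetical) policy language, the setup above would look roughly like this; the type names follow the example and are not taken from the reference policy:

```
type git_repo_t;
type git_exec_t;
type git_t;

# Running a git_exec_t binary from user_dev_t transitions to git_t
type_transition user_dev_t git_exec_t : process git_t;
allow user_dev_t git_exec_t : file { getattr read execute };
allow user_dev_t git_t : process transition;

# Only git_t may touch the repo's files; user_t gets no rules at all,
# so its access attempts are denied by default
allow git_t git_repo_t : dir { search read write add_name remove_name };
allow git_t git_repo_t : file { create read write unlink };
```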
I don't think SELinux would help much here, since users usually run as unconfined_t. I'd guess typical users also run `sudo <whatever>` without bothering to switch role/type.
Personally, I don't use mountains of binary CLI software from random people I've never heard of, and I don't particularly want to deal with a sandbox. Furthermore, if I really wanted one, I know how to set one up.
Usually I check over things like makefiles, and I'm immediately concerned if I'm told to run a "build.sh" (especially as root).
On Android the security is kind of an illusion anyway; it's pretty common for apps to either find ways to escape it or trick the user into granting permissions that the apps abuse. The worst offenders are often preinstalled by the hardware vendor and, since you don't usually control the OS, are impossible to remove.
> it's pretty common for apps to either find ways to escape it
These sound like security bugs, which should obviously be addressed, but "there are bugs" shouldn't be a reason to dismiss the entire goal of isolation-based security. When it comes to the browser, there are obviously occasional issues to do with colouring of visited links, sandbox escapes, information exposure due to speculative CPU execution. Overall though, what's important is that there is a security model and that the implementation issues are addressed.
> Will there be some revolution at some point that encourages people to somehow isolate their commandline activities to different realms, thus limiting the scope of any attack like this?
I hope we can at least put it off until it's a real problem, and not turn the world upside down for a largely theoretical one. I also doubt any sort of sandboxing will be both effective and practical for running software from untrusted sources.
Sure, you can create containers or VMs for compilation. It’s just that developers and vim users are a minority so the attacks and consequently the defenses have concentrated on the stuff the majority uses: Microsoft Office, general acquisition and installation of software packages, browsers.
What's the gaping security hole there? My interpretation of the documentation is that -e PROG execs PROG with stdin/stdout set to the accepted socket. (I mean, it's possible to write a program that, if it receives any input on stdin, would rm -rf everything. But the error seems to lie with the combination of the program and -e, not with -e itself? That is, -e isn't inherently dangerous? Or do we just not trust ourselves that much?)
-DGAPING_SECURITY_HOLE is how you have to compile nc in order to enable "-e" support. The gaping security hole is that it is literally RCE-as-a-feature -- yes, it's not as bad as "pass any text you get over this socket to a shell session" but it's still pretty bad.
Another use case was specifying the indentation scheme for the file for weird conventions like "2 space indentation with 4-column tabs" (first level is two spaces, second level is one tab, third level is one tab plus two spaces...)
Fortunately now everything's UTF-8 and weird indentation schemes are uncommon.
I use them in Emacs occasionally. It's handy for stuff with ambiguous names. Makefiles (which dialect?), whatever.pl (Perl? Prolog?), whatever.asm/whatever.s (which architecture? Which assembler?), etc.
Also comes in handy for working with files that have an incorrect extension, for whatever reason, or different formatting from all the other files of that type that you work with. (Per-project formatting stuff can often be solved with .dir-locals.el, but not always.)
Modelines are pretty much only for single-use scripts or non important files but are still very useful (the security implications are concerning though).
Usually just to `set ft=zsh` to set a filetype on a file with no (or an incorrect) file extension, which syntax highlighting and other plugins rely on to activate.
For example, I use them on a TODO.md file to set it as a `task` filetype which is a markdown-friendly todo list: https://github.com/irrationalistic/vim-tasks whereas in Github I want it to render as markdown only so I keep the file extension `.md`.
Another use case is system/software user config files, which have all sorts of file extensions (e.g., `.rc` or `.conf`) but use a variety of internal formats (e.g., JSON). These files never leave your personal machines, or arrive that way, in my experience at least.
I'd imagine they'd also stick out like a sore thumb at the bottom of most files if they were maliciously inserted, but I'm sure there are some exceptions (targets who don't know what Vim modelines are).
I don't see how that helps either of my examples (TODO.md, config files).
Even when it's actually part of a zsh script, I only use `ft=zsh` when it's a file I'm sourcing, not something intended as a runnable script by itself. For example: `~/.exports`, which includes my PATH and env variables, as I also import it into `.bashrc` in addition to `.zshrc`.
Being fairly unobtrusive, I find modelines convenient for making explicit the proper indenting mode for source files. This is useful in distributed development, where the project doesn't specify a preferred code style. Even if other developers don't use vim, they should be able to understand how they are expected to indent their PR.
Moreover, they're super handy for loading the proper syntax highlighting when working with template engines (e.g. Jinja2) to produce anything other than HTML.
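Concretely, a modeline is just a specially formatted comment in the first or last few lines of a file (how many lines are scanned is controlled by the 'modelines' option, 5 by default). For a C file, for instance, something like:

```
/* vim: set ts=4 sw=4 noexpandtab: */
```

Other editors ignore it, but it documents the expected indentation, and vim users pick it up automatically if they have 'modeline' enabled.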
An interpreter directive is far more common than a piece of configuration specific to an editor. It also serves a practical purpose on *nix style systems whereas editor configuration is a matter of preference.
For the normal use case, yes, this is only stuff I work on. When collaborating, you must assume heterogeneous development environments, which makes this kind of thing useless.
They're really useful for specifying indentation settings, or overriding the filetype if the extension is auto-detected as a different filetype than desired.
This should be put into perspective: The default vimrc configuration file disables the modelines option and contains a warning that it is a security risk.
I.e., this likely only affects a small number of users, and the devs have done as much as they can about a risky feature: off by default, and whoever changes the option should know about the risks.
Of course it's still a valid vulnerability and it's good that it's been found and fixed. But it's likely not a big deal.
I remember reading ~10 years ago that modelines were considered to be insecure (you are parsing arbitrary data as configuration options) and have always had them disabled in my .vimrc. I'm surprised they haven't been disabled by default (even in Neovim).
EDIT: I was probably reading about CVE-2007-2438, another modeline-based RCE attack.
I wonder if it is because neovim's testing is setup differently and they wanted to port the patch immediately rather than spend time which they might not have had at the moment to rewrite the test. I suppose the patch was tested upstream, though you would want the test in neovim to prevent a future regression.
Well, crap, now I have yet another reason to lose sleep tonight. I always thought Vim and text files were safe.
I do find something endlessly interesting in these kinds of vulnerabilities, for no other reason than the fact that I can appreciate the hunt for them. I wouldn't really think to look at Vim as an attack vector, and even if I did, I wouldn't even know where to start.
Given that most administrators use vim to edit everything in the system and often open random files, I would think vim is the most obvious thing to attack (just like printers in corporate networks).
Just do "set nomodeline" and move on. I've had it disabled for the past ~10 years because of previous CVEs like this (such as CVE-2007-2438).
Very interesting exploit. It makes me feel good that I disable modelines anyway, because my company always sets columns=96; I usually open multiple files with :vsp and have to manually change the columns back to what my terminal actually is.
I would be interested to hear the historical reason for having such generic modeline execution in the first place. It seems a little out of place in text file editing.
They don't "enforce" it, but all the code written by the two largest contributors always has it. It's a startup, so those two contributors wrote the majority of the codebase.
I've never seen anyone do this. There are so many better solutions to this problem and it's a heavy-handed/messy solution that is difficult to change/maintain.
I'd push back on this hard if I was told to do it.
I'm not told to do it, but I hardly write any of the code in the repo. So when I have to go inspect one of their source files, which I do many times per day, it gets me. Usually it's when I'm on some VM that doesn't have my own .vimrc on it. It is what it is, just a minor annoyance and a few keystrokes to fix (":set columns=xxx", then "C-w =").
I understand the security concerns (arbitrary code execution just by loading an arbitrary file into vim).
My question was: this issue was caught, diagnosed, widely publicised and the configuration fix for it was widely deployed - nearly 15 (fifteen!) years ago.
So why is it cropping up as a "new" security issue now?
I'd like to note here that while the vim versions of OS distros get updated every now and then, Bram does a fantastic job of releasing multiple times per week and vim is easy to compile.
I compile it every week (I use it as my IDE -- term buffers and all).
Just in case someone here is a heavy user and is worried about this.
The default behavior in Emacs is to warn you that file-local variables can be unsafe and prompt you before executing them. However, I've developed the habit of mostly ignoring the prompt, since they're usually in my own files.
The scary part for me (albeit I'm an Emacs user, not really a Vim user) is that the modeline string is hidden from the victim in their vim window, so not only have they enabled the RCE, they aren't aware of it. I'm not sure if Emacs file-local variables can be exploited in the same way (they probably can, but I'm just unaware of it).
Before I go completely grey here, I'll clarify as a decades long user of vim/neovim: no software should be so near and dear to your heart that you get complacent about the risks it poses, most especially security risks.
Heartbreak at this announcement means you were assuming it wasn't a vector for security weaknesses.