Interesting approach! I work on the itch.io app (functionality overlaps the Steam client somewhat, but with a different content offering / different way of running things) and we do address both concerns:
* app isn't tied to / doesn't assume a Debian-ish distribution (we ship .deb, .rpm, a PKGBUILD, and a simple binary .tar.xz)
* app uses firejail on Linux (sandbox-exec on macOS, different user on Windows) to "set up more fences around" games you download from the internet.
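For the curious, the Linux side is conceptually something like this (a simplified sketch with made-up paths, not our actual policy):

    # run a downloaded game as a fenced-off process: throwaway home
    # directory, no supplementary groups, syscall filtering
    firejail --noprofile --private="$HOME/games/some-game" \
             --nogroups --seccomp ./launch-game.sh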
There's a bunch more features we want to add to the app (live video capture, see itchio/capsule on github, synced collections, etc.) — but isolating "downloaded apps" from the rest of the system seemed like a sensible prerequisite on the road to doing that.
I don't want to spam links, but if you're interested in our approach, you can probably search "itch.io sandbox" with your favorite search engine and stumble upon it :)
If I'm not mistaken, containers come with their own userland: if you want a useful graphical container, you're in for a few hundred megabytes of dependencies. Sandboxing approaches (firejail, or projectatomic/bubblewrap, which flatpak uses) just try to limit what a process in the same user space has access to.
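For example, here's roughly what a bubblewrap invocation looks like (a sketch: the binary and paths are placeholders, and a real program will likely need a few more binds). Note that it reuses the host's /usr read-only instead of shipping a userland:

    # reuse the host's userland read-only, give the process its own
    # home, and unshare every namespace (including the network one)
    bwrap --ro-bind /usr /usr \
          --symlink usr/bin /bin \
          --symlink usr/lib /lib \
          --symlink usr/lib64 /lib64 \
          --proc /proc \
          --dev /dev \
          --bind "$HOME/sandbox-home" /home/user \
          --unshare-all \
          ./some-binary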
I wanted a solution that was low-overhead enough that it was a no-brainer for users to turn it on. However, it's not perfect: our sandbox policy could use tightening (as long as it doesn't break too much stuff), and having an additional SUID binary around is definitely something to look out for.
I'm hoping that more interest gathers around sandboxes and that they become more mainstream in Linux ecosystems. "Trusting package maintainers" only goes so far, and doesn't really account for third parties shipping binary packages!
Interesting. How do you handle dependencies? That's one issue I've had with Steam: the libraries on my machine do not match the ones the app presumes. Shipping binaries in different package formats does not really solve that part of the "assumes a Debian-ish distro" problem.
The important part is to make sure the ABI hasn't changed. Since Steam wants a Debian-based distro, it should be enough to keep that ABI the same. It's not a perfect way to go, but so far it works. :-)
I think it should be pretty obvious why people put things into containers :-)
A few main points, though, pushed me to make this Docker container:
1. I want to set up more fences when running code I don't/can't trust;
2. I don't want to spend time figuring out how to install Steam (and its deps) on a non-Debian (or non-SteamOS) based distro;
3. I like cleanliness: I can erase Steam and all its dependencies in a matter of seconds;
4. Like you said, it was an interesting exercise and it still needs some polishing :-)
And a few pros from my PoV:
- I can have Steam on my Ubuntu/openSUSE/[put any other distro I will want to use] in the short time it takes Docker to download this Steam container;
- Since Steam is meant to run on a Debian-based (SteamOS) distro, that is not a problem anymore: it is in a container now.
There is very little reason to believe that escalated privileges are not possible within a given virtualized environment, at least with current technologies.
Containers are great for development and production on your own infrastructure, or on shared infrastructure like GCE or AWS. Security can be had from inspected builds, self-signing, etc.
For consumers, however, it's a completely different ballgame.
By the time serious hacking is an issue through Steam, I'd expect containers will be that much better anyhow. For all they can be criticized, and regardless of whether you think some other approach would have been better, they're getting the "trial by fire" treatment. By hook or by crook, in another year or two I expect they'll be as secure as you could ask for.
I'm primarily concerned with escalated privileges in what will become standardized gear for VR and AR viewing... if we start down the containerization path, as I expect we will.
With goggles worn most of the time a typical user is awake, goggle hacking will represent a very large target with widely varying rewards.
All those Docker commands usually run as root or something equivalent to root, so a container breakout could lead to root on the host system.
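To illustrate with the classic example: anyone who can talk to the Docker daemon can do the equivalent of

    # mount the host's root filesystem and chroot into it:
    # an instant root shell on the host
    docker run --rm -it -v /:/host debian chroot /host /bin/sh

which is why access to the Docker socket is generally treated as root-equivalent.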
I think kordless is claiming that using Docker here could increase the severity of an attack; otherwise it doesn't seem like putting up another barrier could hurt security, even if it is later broken.
My claim would be applied to all virtualized environments, including containers and VMs - not just Docker. Microkernels have a decent shot at keeping the security issues at bay, but even they can't keep attackers out forever.
Everything falls to hacking eventually. That's the nature of it, at least till now.
I would note that Docker is primarily a tool for developers and operations folk who are also the authors of the software being run. Docker itself is not the risk here, but using it for some use cases may very well be.
Isolated or separated (from the host system): either word works when the context is clear, but it can confuse when the context isn't specified, which was the case here.
What I actually meant is that I'd rather run a process isolated (separated) from other processes and from the file-system space (along with the other isolation features cgroups give us), not isolated from the host system.
With cgroups/namespaces it's process isolation (or separation, whichever wording you prefer). The Linux kernel documentation uses the "isolation" wording too. ;-)
There's a Steam overlay in Gentoo, and it mostly works, but lately I've had a lot of issues with ATI's open-source drivers and Steam, to the point where I just set up a Windows machine for games.
This looks really promising though, and shows a practical use case for Docker.
I made a separate user for running Steam (and other games), which involved a little bit of routing when it comes to X11 and PulseAudio. My reason for doing so was primarily that games create many dotfiles, and I wanted my home folder clean.
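The routing amounted to something like this (a rough sketch: the user name and UID are from my setup, and PulseAudio may also want a shared cookie or its TCP module):

    # let the dedicated user onto my X server
    xhost +SI:localuser:games
    # run Steam as that user, pointed at my display and my
    # session's PulseAudio socket
    sudo -u games env DISPLAY=:0 \
        PULSE_SERVER=unix:/run/user/1000/pulse/native steam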
No, this is the CUDA toolkit; it doesn't depend on the driver version. You can compile CUDA code without having a GPU (which is the case during a "docker build").
Edit: in other words, your Docker image doesn't depend on a specific driver version and can be run on any machine with sufficient drivers. Driver files are mounted as a volume when starting the container.
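For example, with nvidia-docker 1.x the actual invocation ends up roughly like this (the driver version in the volume name is just an example, and my-cuda-image is a placeholder):

    # pass the NVIDIA device nodes through and mount the host's
    # user-space driver libraries from a named volume
    docker run --rm \
        --device /dev/nvidiactl \
        --device /dev/nvidia-uvm \
        --device /dev/nvidia0 \
        -v nvidia_driver_367.57:/usr/local/nvidia:ro \
        my-cuda-image nvidia-smi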
I'm running Steam in a systemd container. The main issues were sound and notifications: I don't understand how PulseAudio works, so I had to share some system directories with the guest system to get sound working. Notifications were solved by sharing D-Bus. The GPU was shared by exposing a single directory in /dev with the correct permissions.
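For reference, the invocation is conceptually something like this (a sketch: paths, UID and names are from my setup, and /dev/dri may additionally need nspawn's device policy loosened):

    # container with my PulseAudio socket, session D-Bus and GPU
    # device nodes bound in, running Steam as an unprivileged user
    sudo systemd-nspawn -D /var/lib/machines/steam \
        --bind=/run/user/1000/pulse \
        --bind=/run/user/1000/bus \
        --bind=/dev/dri \
        -u steam steam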
At a guess it's to sandbox the Steam ecosystem? There was a Steam bug in the past where it would delete all contents of the user's home folder. I guess also Steam and its games are all closed source and people might not trust them as much.
I'm interested because Linux distros not named "Ubuntu" have varying difficulties running Steam. The distributions with smaller communities often figure out how to do it, and then a Steam update shortly breaks it. Or 32-bit games work but not 64-bit, or vice versa. Or ...
Yeah, that's one of the points which pushed me to create this image!
I have tested HL-engine-based games and CS:GO, which has a 64-bit csgo_linux64 binary; lots of people complained they could not run it, since Steam itself is 32-bit.
With this image that is not a problem, since I am preloading all the necessary libraries. It still needs some polishing, though... @all: I appreciate your advice / PRs!
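For those wondering how a Debian-based image can preload both 32-bit and 64-bit libraries, the usual multiarch recipe looks roughly like this (a sketch; the image's real package list is longer):

    # Dockerfile fragment: enable i386 alongside amd64 and install
    # both flavors of the GL libraries
    RUN dpkg --add-architecture i386 \
     && apt-get update \
     && apt-get install -y libgl1-mesa-glx libgl1-mesa-glx:i386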
I don't know much about Docker, but I know Steam for Linux is built for Ubuntu. Maybe Docker helps on distros like Fedora? The benefit is probably highest on NixOS, because Steam changes stuff behind the scenes and so isn't really compatible with declarative package managers.
NixOS supports Steam wonderfully - it creates a half-container (alternate filesystem root, shared networking) which looks like what Steam expects, has full access to graphics drivers and your home directory, and can be executed like any other app (run "steam"). The full Docker infrastructure isn't really necessary.
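For anyone curious, enabling it is a few lines in configuration.nix (a sketch; option names as of current NixOS, and Steam needs 32-bit GL):

    # configuration.nix fragment
    nixpkgs.config.allowUnfree = true;        # Steam is unfree
    hardware.opengl.driSupport32Bit = true;   # 32-bit GL for Steam
    environment.systemPackages = [ pkgs.steam ];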
@tormeh: think of Docker as a system that fetches images (as "tar" archives) from the Docker Hub repo (or builds them yourself) and runs them by leveraging cgroups (a Linux kernel abstraction that limits and isolates the resource usage of a process). You can basically look at Docker as "chroot on steroids".
And yes, it helps to run the images wherever you have Docker installed. There are a few limitations, though, especially when you need to pass something from the host into a container (e.g. driver libraries, devices, special system paths or environment variables), since those may differ from distro to distro and from platform to platform.
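Concretely, passing host things in is done with the usual -e / -v / --device flags; a graphical run looks roughly like this (the image name is a placeholder):

    docker run -it \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        --device /dev/dri \
        some-steam-image
    # -e/-v share the host's X display and its socket;
    # --device passes the GPU nodes through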
holy crap; that's cool! I had no idea it existed (maybe I came across it when it was still called xdg-apps... I definitely agree with the decision to change the name).
Personally I'd love a Steam flatpak, but since the current images are based on Fedora AFAICT (no surprise there, GNOME/FreeDesktop project), it would take a lot more work since you'd have to create a Debian/Ubuntu SDK to base it on.
I use this on an otherwise free-software-only system (Parabola/Trisquel): the only proprietary software I ever use is games, so I try to isolate proprietary code as much as possible (without the performance loss you get with VMs). This sort of goes with point 1: "I want to set up more fences when running code I don't/can't trust."
They still don't, since they ask their Linux users to install tons of libraries on their own. Not that they really care about Linux anyway... (still no GOG Galaxy client...)
Libraries are OK to install; you can do the same in the Docker container. What Docker adds is better isolation from the rest of the system. You can do it yourself with cgroups / LXC, but Docker gives you higher-level management.
I bet there are dozens or maybe even hundreds of people who run CS in a container. What makes the difference, I believe, is sharing reproducible results. :-)
How does persistent state work with this? For example, what happens to my saved games? What if I update the image (for example to pick up a Steam update)? What will happen to those saved games?
Currently it is stored in a Docker volume (the data is available under the /var/lib/docker/volumes path).
But by changing just one line in the docker-compose.yml file, you can mount it wherever you want on your host; see the "data:/home" entry there.
You can change it to "/home/your-user/mysteamdata:/home", so that all Steam games (caches, saves, ...) will be available in the "mysteamdata" directory under your home directory ;-)
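i.e. the relevant fragment of the docker-compose.yml would become something like this (a sketch; only the left side of the volume mapping changes):

    services:
      steam:
        # ...image and the rest as in the repo's compose file...
        volumes:
          - /home/your-user/mysteamdata:/home   # was: data:/home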
If you don't want to do something smarter: by the nature of the file system Steam rests on, there must necessarily exist some folder such that all save locations live in a (recursive) subfolder of it. You'll use extra space, but you can definitely do that easily.
Alternatively, you can keep an environment variable GAME_SAVE_MOUNTS to which you append a -v flag for each desired save mount. You can then start your container with $GAME_SAVE_MOUNTS and have everything work out of the box.
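A hypothetical sketch of that approach (paths and image name are made up):

    # one -v flag appended per save location you care about
    GAME_SAVE_MOUNTS="-v $HOME/saves/game-a:/home/steam/.game-a"
    GAME_SAVE_MOUNTS="$GAME_SAVE_MOUNTS -v $HOME/saves/game-b:/home/steam/.game-b"
    # left unquoted on purpose so each flag splits into its own argument
    docker run $GAME_SAVE_MOUNTS some-steam-image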
It's already been discussed in this thread. The performance impact should be negligible: I'm getting ~110-150 FPS in CS:GO at 1920x1080 with my nVidia 560 Ti.
No precise testing has been done to explicitly measure any increase/decrease, but feel free to test it. ;)
It's absolutely irrelevant, since Docker (cgroups) is just a Linux kernel abstraction that helps isolate the resources of processes. The general idea of this image is that it comes as a package, with a "just take & run" approach, eliminating the dependency on a specific Debian-based Linux distro (which Steam requires and which this Docker image provides).
Then, as discussed in this thread, it gives a few security and control benefits, since those isolated resources are controllable.