Steam in Docker (docker.com)
194 points by arno1 on Aug 7, 2016 | 76 comments



Interesting approach! I work on the itch.io app (functionality overlaps the Steam client somewhat, but with a different content offering / different way of running things) and we do address both concerns:

  * app isn't tied to / doesn't assume a Debian-ish distribution (we ship .deb, .rpm, a PKGBUILD, and a simple binary .tar.xz)
  * app uses firejail on Linux (sandbox-exec on macOS, different user on Windows) to "set up more fences around" games you download from the internet.
There are a bunch more features we want to add to the app (live video capture, see itchio/capsule on GitHub, synced collections, etc.), but isolating "downloaded apps" from the rest of the system seemed like a sensible prerequisite on the road to doing that.

I don't want to spam links, but if you're interested in our approach, you can probably search "itch.io sandbox" with your favorite search engine and stumble upon it :)


Thanks @fasterthanlime!

Is there something that firejail does better than Docker?

I can see firejail also uses namespaces and seccomp-bpf.


I suspect that this question was covered in the HN entry for firejail earlier today: https://news.ycombinator.com/item?id=12239840 - but in our case, it's just that it's lighter.

If I'm not mistaken, containers come with their own userland; if you want a useful graphical container you're in for a few hundred megabytes of dependencies, whereas sandboxing approaches (firejail, projectatomic/bubblewrap - used by flatpak for example) just try to limit what a process in the same user space has access to.
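
For a rough illustration (a sketch using standard firejail flags; the game path is made up and our actual policy differs):

  # drop capabilities, filter syscalls, cut networking, use a throwaway home
  firejail --net=none --caps.drop=all --seccomp \
    --private=$HOME/sandbox ./downloaded-game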

I wanted a solution that was low-overhead enough that it was a no-brainer for users to turn it on. However, it's not perfect: our sandbox policy could use tightening (as long as it doesn't break too much stuff), and having an additional SUID binary around is definitely something to look out for.

I'm hoping that more interest gathers around sandboxes and that they become more mainstream in Linux ecosystems. "Trusting package maintainers" only goes so far, and doesn't really account for third parties shipping binary packages!


Interesting. How do you handle dependencies? That's one issue I've had with Steam: libraries on my machine do not match the ones the app presumes. Shipping binaries in different packages does not really solve that part of the "assume Debian-ish distro" problem.


The important part is to make sure the ABI doesn't change. Since Steam wants a Debian-based distro, it should be enough to keep the same libraries around. It's not a perfect way to go, but so far it works. :-)


Does it also use AppContainer on Windows?


Is this meant to make uninstalling Steam easier than it is now?

Or is this an exercise in getting GUI applications with slightly exotic features (GPU access) to run?

I'd like to understand why this was made, but it isn't described in the usage instructions.


I think it should be pretty obvious why people put things into containers :-)

A few main points, though, which pushed me to make this Docker container:

1. I want to set up more fences when running code I don't/can't trust;

2. I don't want to spend time figuring out how to install Steam (and its dependencies) on a non-Debian (or non-SteamOS) based distro;

3. I like cleanliness: I can erase Steam and all its dependencies in a matter of seconds (see the sketch after this list);

4. Like you said, it was an interesting exercise and it still needs some polishing :-)

And a few pros from my PoV:

- I can have Steam on my Ubuntu/openSUSE/[put any other distro I will want to use] in the short time it takes Docker to download this Steam container;

- Steam is meant to run on a Debian-based (SteamOS) distro, but that is not a problem anymore, since it is in a container now.
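
To illustrate point 3, cleanup really is just a couple of standard Docker commands (a sketch; the container, image and volume names are whatever you chose):

  docker rm -f steam           # remove the container
  docker rmi steam             # remove the image and its layers
  docker volume rm steam_data  # remove the data volume too, if you want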


So you don't trust application code, but you trust the image maker's code?


Yes? (Not the OP)

The image build instructions are pretty easy to audit. And I trust docker (to an extent).

So in my mind it's safer, but not absolutely safe. I just view it as another layer of security :)


This thread is not about the extent to which I trust things.

But security-wise, running it in a container is better than running it without isolation, IMHO.

And of course, no one is asking you to take this image built by a third party; the Dockerfile is open, so just build it yourself ;-)
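
Building it yourself is straightforward (a sketch, assuming the Dockerfile sits at the root of the repository linked elsewhere in this thread):

  git clone https://github.com/arno01/steam.git
  cd steam
  docker build -t steam .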


There is very little reason to believe that escalated privileges are not possible within a given virtualized environment, at least with current technologies.

Containers are great for development and production on your own infrastructure, or shared infrastructure like GCE or AWS. Security can be had by doing inspected builds, self-signing, etc.

For consumers, however, it's a completely different ballgame.


I'd be more worried about the game doing something accidentally bad to my system than deliberate hacking: https://github.com/valvesoftware/steam-for-linux/issues/3671

By the time serious hacking is an issue through Steam, I'd expect containers will be that much better anyhow. For all they can be criticized, and regardless of whether you think some other approach would have been better, they're getting the "trial by fire" treatment. By hook or by crook, in another year or two I expect they'll be as secure as you could ask for.


I'm primarily concerned with escalated privileges in what will become standardized gear for VR and AR viewing... if we start down the path of containerization like I expect we are.

Worn most of the time while a typical user is awake, goggles will represent a very large hacking target with widely varying rewards.


All those Docker commands usually run as root or something equivalent to root. So a container breakout could lead to root on the host system.
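
One partial mitigation (a sketch; user-namespace support landed around Docker 1.10, so this assumes a recent daemon) is remapping container root to an unprivileged host UID:

  # start the daemon with user-namespace remapping enabled
  dockerd --userns-remap=default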

I think kordless is claiming that using Docker here could increase the severity of an attack; otherwise it doesn't seem like putting up another barrier could hurt security, even if it is later broken.


My claim applies to all virtualized environments, including containers and VMs - not just Docker. Microkernels have a decent shot at keeping security issues at bay, but even they can't keep attackers out forever.

Everything falls to hacking eventually. That's the nature of it, at least till now.

I would note that Docker is primarily a tool for developers and operations folks who are also the authors of the software being run. Docker itself is not the risk here, but using it for some use cases may very well be.


I would say using Docker moves the attack vector to the Docker engine and the Linux kernel implementation of cgroups and namespaces.

But it would still help in preventing such bugs as https://github.com/valvesoftware/steam-for-linux/issues/3671

HN discussion: https://news.ycombinator.com/item?id=8896186


Containers are not isolated from the host system; they are separated. Truly isolated containers would have performance and features similar to User Mode Linux.


Isolated or separated (from the host system): either word works when the context is clear. It can confuse when the context isn't specified, as in my case.

What I actually meant is that I'd rather run a process isolated (separated) from other processes and from the file-system space (plus the other isolation features cgroups give us), not isolated from the host system.

With cgroups/namespaces it's process isolation (or separation, whichever wording you prefer). The Linux kernel documentation also uses the word "isolation". ;-)


I think the parent author meant they trust the explicit and narrow boundaries the application code is permitted to run in.


Fundamentally, it is fewer things that I _have_ to trust. The shim which defines an image should be much easier to grok than the entire application.


He can always check the image specification, no?


There's a Steam overlay in Gentoo, and it mostly works, but lately I've had a lot of issues with ATI's open source drivers and Steam, to the point where I just set up a Windows machine for games.

This looks really promising though, and shows a practical use case for docker.


I made a separate user for running Steam (and other games), and it involved a little bit of routing when it comes to X11 and PulseAudio. I did this primarily because games create many dotfiles, and I wanted my home folder clean.


Do you mind sharing how you did this in more detail? I am also not happy with the way Steam games put files in random locations...


For PulseAudio: I needed to load a few modules that open a TCP socket shared between users (see the section "TCP support with anonymous clients"):

https://wiki.archlinux.org/index.php/PulseAudio/Examples#Pul...
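
The relevant bit from that wiki section is loading the TCP module with anonymous auth (a sketch; limiting it to localhost seems wise):

  # in /etc/pulse/default.pa of the user that owns the audio device:
  load-module module-native-protocol-tcp auth-anonymous=1 listen=127.0.0.1

  # and for the gaming user:
  export PULSE_SERVER=127.0.0.1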

I forget exactly what I did with X11, but one part of it was having an environment variable that used my main user's XAuthority.

E.g.:

  [gamer@mycomputer]$ echo $XAUTHORITY
  /home/notthemessiah/.Xauthority


I did something similar by installing Steam to a chroot environment.

"Those who do not understand UNIX are condemned to reinvent it, poorly." -- Henry Spencer, programmer

This goes doubly for "containers" not understanding chroot/jails.


Well, there is a distinct difference between a traditional chroot and Linux control groups and namespaces.

If chroot alone were enough, no one would be investing their time in cgroups, namespaces, LXC, Docker, etc... :-)


Are you suggesting that containers provide about what chroot provides?

Containers provide chroot-like behavior for more than just the filesystem. Processes, user IDs, network interfaces... Chroot is filesystem only.
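
You can see the difference from a shell (a sketch; unshare is part of util-linux, and the chroot directory is illustrative):

  # chroot changes only the filesystem root:
  sudo chroot /srv/steam-root /bin/bash

  # namespaces also give the process its own PIDs, network, mounts, ...:
  sudo unshare --pid --net --mount --fork --mount-proc /bin/bash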


This is hardcoding driver versions: https://github.com/arno01/steam/blob/master/docker-compose.y...

Is there a better way?


Obviously I haven't come up with a better idea. :-) Suggestions/PRs are greatly welcomed!


At NVIDIA we maintain this utility: https://github.com/NVIDIA/nvidia-docker

It automatically discovers the devices and the right driver files on the host.
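
A quick way to check that it works (the usual smoke test, assuming the driver and the plugin are installed):

  # runs nvidia-smi inside a CUDA container, with devices and driver files injected
  nvidia-docker run --rm nvidia/cuda nvidia-smi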

The main goal is compute (CUDA), but we also demonstrated how to run TF2 on Steam OS during our DockerCon 16 OpenForum presentation.

Nice job! :)


I'm really not up on how this all works, but isn't https://github.com/NVIDIA/nvidia-docker/blob/master/ubuntu-1... hardcoding driver versions in a different way?


No, this is the CUDA toolkit; it doesn't depend on the driver version. You can compile CUDA code without having a GPU (which is the case during a "docker build").

Edit: in other words, your Docker image doesn't depend on a specific driver version and can be run on any machine with sufficient drivers. Driver files are mounted as a volume when starting the container.


I'm running Steam in a systemd container; the main issues were sound and notifications. I don't understand how PulseAudio works, so I had to share some system directories with the guest system to get sound working. Notifications were solved by sharing D-Bus. The GPU was shared by exposing a single directory in /dev with the correct permissions.


@ingenter please refer to the docker-compose.yml file in the source repository.

To make PulseAudio work in a container, you basically want to pass these volumes from the host to the container:

  - /etc/localtime:/etc/localtime:ro
  - /etc/machine-id:/etc/machine-id:ro
  - $XDG_RUNTIME_DIR/pulse:/run/user/1000/pulse

And then, this environment variable:

  PULSE_SERVER=unix:$XDG_RUNTIME_DIR/pulse/native


Why would I want to do this?


At a guess, it's to sandbox the Steam ecosystem? There was a Steam bug in the past where it would delete all contents of the user's home folder. Also, Steam and its games are all closed source, and people might not trust them as much.


Bug: https://github.com/valvesoftware/steam-for-linux/issues/3671

HN discussion: https://news.ycombinator.com/item?id=8896186

I regularly reference the # scary! comment at work.


I'm interested because Linux distros not named "Ubuntu" have varying difficulties running Steam. The distributions with smaller communities often figure out how to do it, and then shortly a Steam update breaks it. Or 32-bit games work but not 64, or vice-versa. Or ...


Yeah, that's one of the points which pushed me to create this image!

I have tested HL-engine-based games and CS:GO, which has its csgo_linux64 binary; lots of people complained they could not run it, since Steam itself is 32-bit. With this image it is not a problem, since I am preloading all the necessary libraries. It still needs some polishing, though... @all: I appreciate your advice / PRs!


Yeah, this is why we (Linux users) hate proprietary software.


I don't know much about Docker, but I know Steam for Linux is built for Ubuntu. Maybe Docker helps for distros like Fedora? The benefit is probably highest in the case of NixOS, because Steam changes stuff behind the scenes and so isn't really compatible with declarative package managers.


NixOS supports Steam wonderfully - it creates a half-container (alternate filesystem root, shared networking) which looks like what Steam expects, has full access to graphics drivers and your home directory, and can be executed like any other app (run "steam"). The full Docker infrastructure isn't really necessary.
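
For reference, enabling it is roughly this much configuration (a sketch; option names as of NixOS releases around this time, and they may have changed since):

  # /etc/nixos/configuration.nix
  environment.systemPackages = [ pkgs.steam ];
  hardware.opengl.driSupport32Bit = true;   # Steam's runtime is 32-bit
  hardware.pulseaudio.support32Bit = true;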


@tormeh: look at Docker as a system that gets you images (in "tar" archives) from the Docker Hub repo (or builds them yourself) and lets you run these "tar" images by leveraging cgroups (a Linux kernel feature that limits and isolates the resource usage of a process). You can basically look at Docker as "chroot-on-steroids". And yes, it helps to run the images wherever you have Docker installed.

There are a few limitations, though, especially when one needs to pass something from the host into a container (e.g. driver libraries, devices, special system paths or environment variables), since these may differ from distro to distro and from platform to platform.
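
You can even see the "tar archive" nature directly with standard commands:

  docker pull ubuntu                  # fetch an image from Docker Hub
  docker save -o ubuntu.tar ubuntu    # export it as a plain tarball of layers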


NixOS generates an FHS-compliant chroot for Steam to be installed to, so it can do whatever it wants in that playground.


Steam is exactly the kind of software that should be distributed using Flatpak[0].

[0] http://flatpak.org/


Sounds like an excellent plan to end up with outdated libraries full of security holes.


Holy crap, that's cool! I had no idea it existed (maybe I came across it when it was still called xdg-app... I definitely agree with the decision to change the name).


Why use docker instead of something like flatpak or snap?


Personally I'd love a Steam flatpak, but since the current images are based on Fedora AFAICT (no surprise there, it's a GNOME/FreeDesktop project), it would take a lot more work since you'd have to create a Debian/Ubuntu SDK to base it on.


Because Docker is enough and it does its job well? :-)


I use this on an otherwise free-software-only system (Parabola/Trisquel). Since the only proprietary software I ever use is games, I try to isolate proprietary code as much as possible (without the performance loss that comes with VMs). This sort of goes with point "1. I want to set up more fences when running code I don't/can't trust;"


How is the GUI accessed? X11 socket sharing? Because I guess that for gaming both X11 over SSH and VNC would not perform very well...


The same way every local application accesses it: via the Unix domain socket (/tmp/.X11-unix:/tmp/.X11-unix, with DISPLAY=unix$DISPLAY) :-)

This will give you a frame rate identical to the host's, so there is no overhead running your 3D apps in the container.
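
Concretely, the relevant run flags look like this (a sketch; the image name is illustrative, and GPU access depends on your driver setup):

  docker run -it \
    -e DISPLAY=unix$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/dri \
    steam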


Is there any overhead at all in using Steam (or anything) through Docker?

I'm not very familiar with using Docker.


The overhead is negligible (close to 0). ;-)


I proposed the same idea for GOG and their Linux games a few years ago. At that time they didn't get the point.


They still don't, since they ask their Linux users to install tons of libraries on their own. Not that they really care about Linux anyway... (still no GOG Galaxy client...)


Libraries are OK to install; you can do the same in the Docker container. What Docker adds is better isolation from the rest of the system. You could do it yourself with cgroups/LXC, but Docker gives you higher-level management.


Been there, done that. Here's a video where I run Counter-Strike through Steam in a Docker container.

https://youtu.be/ZHWsR8TnKsw?t=801

PS: Audio is in Spanish.


I bet there are dozens or maybe even hundreds of people who run CS in a container. What makes the difference, I believe, is sharing reproducible results. :-)


The video I posted is a complete tech talk about how I achieved it :)


Cool :) Pity it isn't in English... :)


How does persistent state work with this? For example, what happens to my saved games? What if I update the image (for example to pick up a Steam update)? What will happen to those saved games?


Currently it is stored in a Docker volume (the data is available under the /var/lib/docker/volumes path).

But by changing just one line in the docker-compose.yml file, you can mount it wherever you want on your host. See "data:/home" in there.

You can write "/home/your-user/mysteamdata:/home" there, so that all Steam data (caches, saves, ...) will be available in the "mysteamdata" directory under your home directory ;-)
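
In docker-compose.yml terms, that is something like (a sketch of just the relevant fragment; the service name may differ):

  steam:
    volumes:
      - /home/your-user/mysteamdata:/home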


Mount volumes that correspond to save locations. The volume should be persistent, so even if the image changes, the data will remain.


Aren't save locations different on a per-game basis? So does this need configuration? If not, then how does it work without user intervention?


If you don't want to do something smarter: by the nature of the file system that Steam rests on, there must necessarily exist some folder such that all save locations are (recursive) subfolders of it. You'll use extra space, but you can definitely do that easily.

Alternatively, you can keep an environment variable GAME_SAVE_MOUNTS to which you append a -v flag for each desired save mount. You can then start your container with $GAME_SAVE_MOUNTS and have everything work out of the box (see the sketch below).
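
A sketch of that second approach (the variable, paths and image name are hypothetical, as in the comment above):

  # accumulate one -v flag per game's save directory
  GAME_SAVE_MOUNTS="-v $HOME/saves/game-a:/home/steam/.game-a"
  GAME_SAVE_MOUNTS="$GAME_SAVE_MOUNTS -v $HOME/saves/game-b:/home/steam/.game-b"

  # then start the container with the accumulated mounts
  docker run $GAME_SAVE_MOUNTS steam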


How is the performance on this compared to a bare metal install?

I.e., how many FPS do you get with a native Steam install vs. Dockerized Steam on the same hardware?


It's already been discussed in this thread: the performance impact should be negligible. I am getting ~110-150 FPS with my NVIDIA 560 Ti in CS:GO at 1920x1080.

Precise testing hasn't been done explicitly to measure any increase/decrease, but feel free to test it. ;)


How big is the % FPS decrease if you run the game inside a container, compared to just running it regularly?


I haven't spotted a decrease. On the contrary, even some increase :-) (Or was that just a placebo effect? :) )

There actually shouldn't be any significant decrease, since Docker's overhead is negligible.


Could this improve or worsen the VAC mechanism?


It's absolutely irrelevant, since Docker (cgroups) is just a Linux kernel abstraction that helps isolate the resources of processes. And the general idea with this image is that it comes like a package, with a "just take & run" approach, eliminating the need to depend on a specific Debian-based Linux distro (which is required by Steam and provided inside this Docker image).

Also, as discussed in this thread, it gives a few security advantages and control benefits, since those isolated resources are controllable.


Thanks a lot for this clarification!



