Isn't it just a thin wrapper around `flatpak permissions`?
While obviously a handy GUI, I've never been able to overcome my trust issues with allowing a random package from GitHub to govern permissions for my Flatpaks.
Ironically, I'd probably trust it a lot more if it came as a Debian package. There's a circular issue: the people who are security conscious enough to want something like Flatseal might also be too security conscious to install something like Flatseal.
So if you wanted to call ffmpeg or some other C library with complicated user-provided data, you can use extrasafe's Isolates (along with its seccomp and Landlock features) to sandbox the call. I'm not really sure how suited it is for rewriting something like bubblewrap or firejail, but it might be interesting to try.
Just wanted to add, though only obliquely relevant to the OP: I have daily-driven Qubes OS 4.1 for at least 12 months now and have been thoroughly impressed. It actually makes me feel safe[r] in my online computing today. An 8chan- or Discord-delivered 0-day blowing up in one of the respective qubes should not lead to exfiltration of my Robinhood credentials, my aunt's baking recipes, or my super secret hacker darknet forum LibreSSL key. :^)
You want a beefy system. If you want to pass GPUs through, you need more than one in the system. CPU core count is important, yes, but I cannot emphasize enough: RAM, RAM, RAM. Yes, I do have automatic1111 running with GPU in my Qubes tower. Yes, I can spin up a WinServ2022 Datacenter qube for Siemens NX 10, with full GPU support.
One thing that steered me away from Qubes OS was that the templates used for all VMs were not hardened at all; the Qubes Fedora template, for example, was less secure than a normal Fedora install. The template uses passwordless sudo, and SELinux is completely removed. Security is about layers, and the default templates are lacking in that sense. I want to make initial compromise as hard as possible. For Qubes OS's reasoning for not needing hardened templates to make sense, I would need a completely separate AppVM for each application, and I don't think Qubes OS was meant to be used like that.
I'm convinced, though, that passwordless sudo helps a lot to make life easier for new Qubes users.
> For Qubes OS's reasoning for not needing hardened templates to make sense, I would need a completely separate AppVM for each application, and I don't think Qubes OS was meant to be used like that.
This is not necessary. You can group your apps with the same trust level in the same VM. Again, it's especially helpful to the new users. Advanced users like you, with strict threat models, can use minimal VMs to compartmentalize much more.
That sounds pretty good. Passwordless sudo and disabling SELinux are the first things I do on new Fedora installs.
If somebody can compromise my user account, compromising root is pointless and trivial. More so on Qubes OS, where only breaking out of the VM really matters.
SELinux isn't usable in a strict mode because its config is so ungainly that devs rarely create profiles. So only the opt-in mode is common, and that's pretty pointless, especially compared to using containers.
Wouldn't passing through GPUs automatically provide enough information to tie all your VMs together? Unless someone has found a way to make GPUs pretend to be common consumer GPUs and not report any individualized fingerprints, then I would think that any online service that has GPU access can easily fingerprint you. Plus, in my experience VMs also tend to report a lot of system information that makes it apparent what hypervisor and virtual devices you are using.
Sure, each VM may be isolated in the sense that they each have their own apps and isolated file systems... but that is exactly what sandboxing does, except without the overhead of complete virtualization at each layer. I'd guess that Windows Sandbox would provide the same security in a much more efficient manner. On Linux, the integrated sandboxing solutions should also provide isolation like a VM but with better performance.
I understand Qubes' design decisions, but honestly think it is a bit antiquated and cumbersome when in reality you don't gain much compared to other solutions.
I am this close to rebuilding my machine in Qubes. Is 32GB enough RAM or am I going to be seeing usability issues?
The proliferation of AI tools really makes me want to grab more of this wild-west code, but there are obvious security concerns there. Am I going to take a huge hit in performance if I am running this stuff in a virtual machine like llamafile which runs on the CPU?
Secondary GPU for pass through is less appealing, but I suppose I can stomach it. Is there a good guide on how you configured this? Many moons ago, before Proton was so good, I had looked into doing GPU pass through, but quickly gave up on the technical complexity.
Anything else notable? Problems getting web cams or microphones working? I assume you cannot screen share across Qubes.
> Is 32GB enough RAM or am I going to be seeing usability issues?
It should be sufficient for quite some time, until you run 15+ VMs simultaneously or so. I reach this limit only when I open too many https links each in its own VM.
> Am I going to take a huge hit in performance if I am running this stuff in a virtual machine like llamafile which runs on the CPU?
Anything relying on the GPU gets significantly slower on Qubes (without passthrough); ordinary apps relying on the CPU are fast, though.
>Secondary GPU for pass through is less appealing, but I suppose I can stomach it.
Anyone using an Intel CPU from the past ~13 years, an AMD APU, or a Zen 4 CPU should have an iGPU to render the host with, unless they specifically opted out, leaving the discrete GPU free for passthrough.
Bubblewrap is really a great sandboxing tool and I'm glad to see efforts to make it more "user friendly." One key thing to understand is that although it is used by Flatpak, you don't need to use Flatpak to use bubblewrap itself.
The way I currently use bubblewrap is to write a small shell script with the correct invocation every time I want to sandbox something new. It isn't a great UX. The Arch wiki has documentation on how to do this [0].
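For anyone curious, a minimal sketch of what one of those scripts can look like, written as a function so the flag list stays in one readable place. The fake-home path is my own convention; the flags themselves are standard bwrap(1) options:

```shell
# Sketch of a per-app bwrap launcher. The fake-home path is an arbitrary
# convention of mine; all flags are standard bwrap(1) options.
launch_sandboxed() {
  local fake_home=$1; shift        # private "home" directory for this app
  bwrap \
    --ro-bind /usr /usr \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --proc /proc \
    --dev /dev \
    --tmpfs /tmp \
    --bind "$fake_home" "$HOME" \
    --unshare-all \
    --die-with-parent \
    "$@"
}

# e.g. launch_sandboxed ~/sandboxes/somebrowser somebrowser --private-window
```

Note that `--unshare-all` also cuts network access; add `--share-net` back for apps that need it.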
I think, probably because of its association with Flatpak, there hasn't been a lot of work on improving the UX for "non-Flatpak" bubblewrap use cases, so I applaud the OP for their endeavors. I'm not sure I'll be switching from "just a bunch of shell scripts" to a third-party tool like bubblebox (or the extant third-party GUI, bubblejail [1]), but it's a good idea to help improve the usability and open it up to more users.
>> In contrast, regular package managers solve this problem pretty well.
Except the upstream devs are sick of dealing with "It doesn't work in my distro"... there are plenty of instances of these sorts of pissing matches.
bwrap, Flatpak, containers... we're really talking about software distribution. I don't think any of these solutions "solve" the problem; we're just putting all the bullshit in one bag, not getting rid of it.
> bwrap, Flatpak, containers... we're really talking about software distribution. I don't think any of these solutions "solve" the problem; we're just putting all the bullshit in one bag, not getting rid of it.
I agree. Personally, I prefer flatpaks more often than not because they "just work" for most programs I use (that aren't on the terminal), but it's far from an elegant solution. Still, I do generally like a lot of the work they're doing, like the various xdg-desktop-portal permissions that are standardising a lot of common "desktop" interactions.
I'm with you on this. Even if I can dig into issues that come up (and I have), I'd just rather not most of the time. Not to mention that Flatpak, AppImage and Docker leave my host pretty unscathed and allow updates to go much more reliably.
> Except the upstream devs are sick of dealing with "It doesn't work in my distro"
99/100 it does work on their distro, they just need to handle their libraries better. Identify the packages they need to support the software, or if no packages exist then they need to compile the libraries themselves. The entire point of Linux distributions is to do this job for the user. Sometimes the distro will fall short, so you need to have the basic understanding to handle it yourself.
Of course, Linux users aren't as tech savvy as they were 20 years ago.
>> The entire point of Linux distributions is to do this job for the user.
You would think, but we have some fair conflicts between distros and upstream software.
Bottles has thrown the gauntlet down and said "we don't care about your bug if you're NOT using the Flatpak; if you got it from your distro, it's your distro's bug". A lot of other bits of software are moving this way because they just don't have the bandwidth to support the distros' nonsense (in their eyes).
Some software maintainers are looking at Flatpak as a way to right-size the effort of supporting users.
> "we don't care about your bug if you're NOT using the Flatpak; if you got it from your distro, it's your distro's bug"
Two cases here:
* it's a legitimate upstream bug, but is being rejected because you are using a distro-packaged version, not an official upstream release. Reproduce the problem with the upstream release, report it, done.
* it's a distro-specific bug, you cannot reproduce it with an upstream release. In that case you should indeed report it to the distro, not upstream.
I really love the theory of NixOS... an entire OS that is declaratively (or dynamically) defined. But the Nix language itself does not seem like the right approach. A lot of packages just break out into command line scripting because of its limitations. As much as I've despised the JavaScript ecosystem in the past, it seems like TypeScript with Deno, which has built-in JSON/TOML/INI/YAML support and sandboxing, would be a better language to handle automatic system configuration overall... plus, you'd have the advantage of a web app to manage your system config(s), sort of like what fish does.
> I really love the theory of NixOS... an entire OS that is declaratively (or dynamically) defined. But the Nix language itself does not seem like the right approach. A lot of packages just break out into command line scripting because of its limitations.
What you're describing is the use of bash builders in Nixpkgs. This is conventional because lots of build systems expect bash anyway, and because it lets you easily incorporate the whole Nix build environment for a given package into a conventional interactive shell for debugging, where you can invoke functions that are available in the build environment and even copy/paste snippets from the build to inspect their behavior. I like it; I think it's a good approach!
It's worth noting though that Nix supports using builders in arbitrary languages, and there are some rare uses of non-bash builders throughout Nixpkgs as well.
Some other points:
- if you prefer using the same language for your package recipes as for the builders themselves, Guix has you covered.
- Nix has built-in support for reading and writing some of the formats you mention, like TOML and JSON. Some tools use those languages to provide an external configuration language for Nix, for the simple stuff, e.g., devenv, devbox, Flox, etc. One could certainly attempt this with NixOS as well.
> lot of packages just break out into command line scripting because of its limitations
That’s not a breakout, nix just provides the glue around shell commands, because that’s the most readable/accepted way to specify process invocations. Nix should be thought of as a data store, it’s just a huge object with keys and values.
This idea that we need to spoon feed simple solutions to users of free software really needs to die. If you're giving code away you don't owe your users much.
I might even say that if you're giving code away, you don't owe users anything. But you might also have goals for your project that would benefit from trying to attract, onboard, or otherwise please users.
The idea of spoon-feeding simple but incredible solutions for free can be very rewarding for maintainers, and can generate a monetary reward as well. There are projects that I happily donate to because they create value for me. How many terms of service have you read for apps that you pay for? It seems that many apps you pay for don't just take your money, but your privacy and rights too.

Developers creating something in the open can inspire and help build communities of people working on a shared goal, attracting quality individuals from all over the world. It brings people who are excited about your goals and who came to your project because it was something they could find and appreciate. I'd say you don't get the same from closed-source software.

You also can't achieve the same by being an asshole to potential contributors just because you feel like it. Sure, it is an option, but if you want to actually build something, not just for you but a dream, then it is a bad choice not to try to build a healthy community.
Typical package managers also can't tell you if an app bundles its own library. Whether an app runs on a flatpak runtime or directly on your host system, some reliable eyeballs need to inspect the app's bill of materials.
The package manager can't, but the distro can. AIUI it's perfectly common for flatpaks to bundle things in, whereas ex. Debian has explicit policies against doing so (with caveats, yes, but generally).
The package manager is a technology, while the distro is a set of people and policies. It's fairer to compare distros with Flathub, the most popular repository for flatpaks (although not the only one; for example, Red Hat maintains its own flatpak remote for RHEL customers). And yes, Debian likely has stricter rules than Flathub. But those don't apply to any software you might require outside the distros' repos -- just take a look at Google Chrome, probably one of the most popular DEBs or RPMs; it's full of bundled libraries.
I dislike bwrap because it is built around a CLI-arg-based approach to sandboxing. Even the wrappers that allow config-based sandboxes just end up transcribing a file into CLI args. I'd prefer it actually have an API of sorts. Instead, it seems like it was never meant to be anything more than a quick and dirty tool for basic sandboxing in a pinch.
Command line flags aren't an efficient way to pass 20+ args to a program. Sure, it works but that is why I get the feeling it was only ever designed for quick and dirty sandboxing.
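One low-tech way to tame that, sketched here under my own conventions (one argument per line in a profile file; bwrap itself also has an `--args FD` option for passing arguments over a file descriptor, which is the same idea in spirit):

```shell
# Sketch: keep the 20+ bwrap arguments in a profile file, one argument
# per line, and expand them at launch. The profile path and format are
# my own convention, not anything bwrap defines.
run_profile() {
  local profile=$1; shift
  local -a args
  mapfile -t args < "$profile"    # one bwrap argument per line
  bwrap "${args[@]}" "$@"
}

# A profile might contain (one token per line):
#   --ro-bind
#   /usr
#   /usr
#   --unshare-net
```

It is still just transcribing a file into CLI args, but at least the flag soup lives somewhere editable.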
Is there any indication Linux is going to adopt a pledge/unveil API in the near future? It's hard to place faith in systems like AppArmor that rely on system administrators to implement security guarantees.
I'd say that's a weird request as pledge/unveil are significantly less secure, even coming close to security theater IMO. Something like AppArmor can protect against a threat without having to rely on the developer to have done the right thing.
Pledge/unveil would not help with the xz issue. AppArmor would.
I would say that they are different tools for different purposes. Sure, with pledge/unveil/seccomp you trust the process to do the right thing; it's really about restricting against threats external to the process. But it allows the process to have different privileges at different parts of the program, while external restrictions are not that flexible.
I think it makes sense to use both. Use external restrictions to only give capabilities that the process will ever need, and within the process drop capabilities when they are no longer needed.
> I'd say that's a weird request as pledge/unveil are significantly less secure, even coming close to security theater IMO.
Compared to not using AppArmor, or compared to using AppArmor? The situation I'm concerned about is having an opt-in profile. (NB: I don't understand how AppArmor works; Linux security profiles are super confusing.)
I kinda think having a tool to invoke `bwrap` is overkill for most cases. Applications I want to launch more than once just get a launcher script with `--bind "$(dirname "$(realpath "${BASH_SOURCE[0]}")")/FakeHome" ~` or other flags as appropriate. Same for tools that I don't want to install account-wide. Opening a terminal emulator in a `bwrap` sandbox and then using it to launch other applications and create new windows feels like using a VM, except it's all running natively.
It's also quite useful to have aliases/functions in `.bashrc` for `--tmpfs ~` or `--bind "$(mktemp -d)" ~`, `--bind "$PWD" "$PWD"`, `--ro-bind "$PWD" "$PWD"`, etc., with varying levels of access to X11/DBus etc. You can quickly test how a code change behaves on a clean install, instantly see what configuration files something uses by running it in a clean sandbox and then running `tree`, and check on your first time using a new program that it isn't going to do anything crazy.
I usually use some variant of `sandboxed-dir bash` or `sandboxed-rodir bash`, which I've `alias`ed to `bwrap` with flags as the names describe, whenever I'm about to either run `rm -rf` [1] or otherwise touch files which I'd rather not accidentally destroy. The BubbleWrap sandbox isn't useful just for security, but also for keeping configurations isolated/clean, and protecting against all manner of errors both in software and between the keyboard and chair.
---
[1]: `echo rm -rvfi`, actually. See what you're doing before you run it for real, especially if there's a glob in there. And then the `bwrap --dev-bind / / --tmpfs ~ --bind "$PWD" "$PWD" bash` makes absolutely sure you're not going to do anything other than what you said you'd do.
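For concreteness, the kind of `.bashrc` definitions described above might look like this (names mirror the ones used in this comment; the flags are standard bwrap(1) options):

```shell
# Throwaway home, writable current directory, everything else visible.
sandboxed-dir() {
  bwrap --dev-bind / / --tmpfs "$HOME" --bind "$PWD" "$PWD" "$@"
}

# Same, but the current directory is read-only too.
sandboxed-rodir() {
  bwrap --dev-bind / / --tmpfs "$HOME" --ro-bind "$PWD" "$PWD" "$@"
}
```

Order matters: bwrap processes mounts in sequence and later ones override earlier ones, which is why the bind of `$PWD` comes after the tmpfs over `$HOME`.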
Normally this is Ubuntu / Debian, but if I had TFA's specific problem of "I want the software to use my distro's /usr" I'd use a container with the base image of my distro.
I don't bother sharing dbus, but if you did want to share (a subset of) it as TFA does, then yes xdg-dbus-proxy is a necessity.
You can see it as the longer-form documentation of bubblebox (explaining in excruciating detail how to sandbox desktop applications with flatpak).
(not affiliated with the author; the posts were written prior to bubblebox, but I'm happy to see that someone took up the job of turning the ideas I laid out in my blog post series into a true solution!)
I would love to see a Linux distribution pick up on the concept of NixOS, with its content-addressed package store, and then use overlayfs and bubblewrap to create process-local, FHS-compatible root file systems with only the dependencies of each process inside.
Well, both Guix System and NixOS can do this, where they construct a containerised environment with FHS layout. IIRC neither uses overlayfs and they do the namespaces by themselves instead of relying on Bubblewrap, but inside the container you get exactly what you'd expect.
At least in Guix it's `guix shell -CF`, where `-C` is of course the `--container` flag, and `-F` for `--emulate-fhs`. I'd imagine that `nix-shell` works the same way, but I'm more familiar with guix than nix, so I can only speak for that.
I only ever tried NixOS, but the FHS stuff there seems to only be about getting Steam and other proprietary software to work. I would suggest to have this per default, so that the only application that actually sees the real file system would be PID1.
That is an interesting idea, but I don't really see what the purpose would really be.
As you said, having these kinds of FHS containers really is only actually useful for proprietary software, with free software being adaptable for a "non-standard" filesystem. Of course, it's sometimes helpful to do this even for free software, which is of course why this is an option, but I do struggle to see what would be the idea with making it the default for all non-PID1 apps in such a system.
All I could really reckon is that it might be easier for user familiarity, but at least in my experience, one gets used to the new layout pretty quickly. You'd ideally be configuring the whole system in either Nix language or Scheme anyway, so things like `/etc` are a bit superfluous, and since you can get `/bin/sh` and `/usr/bin/env` symlinked for things that expect it, most things should work anyway. Well, unless done poorly, but nothing one couldn't patch.
In NixOS you have only one central package store, `/nix/store`, so all packages need to be installed there. This is necessary because the software is patched to use that path. IMO patching it this way is a bit hacky when there are better options.
If overlayfs were used, the software would not require patching, so you could have multiple package stores if you like, maybe a package store in `~/.nix/store` as well. Combined with an immutable rootfs (e.g. an immutable `/nix/store`), this would still allow installing packages per user, or even a different package store per project.
But I've been using Atomic Fedora for the last 2 years and my whole work flow has shifted to using containers for everything CLI, I can have specific container images for work on qemu images, or terraform for example. And everything else like Steam games, VLC, Firefox all run in Flatpaks.
So the end goal is to not make any modifications to the host image.
> So the end goal is to not make any modifications to the host image.
IMO the goal should not be to have an immutable host image, but to have an immutable rootfs for each process. Immutable host base images might contain libraries or software that is not required by the application, and they are more difficult to update, because they need to be updated as a unit and probably require a reboot.
To make this stable, you need to hack around with A/B partitions for a fallback. IMO this would not be necessary if we had every package just installed under its checksum under `/pkgs` or whatever, and then used overlayfs to create a file system customized for every process. All that is required to revert is the old 'profile', and then starting the old packages from `/pkgs`. Like NixOS does it.
However, NixOS doesn't use overlayfs; it uses symlinks and patches the paths in the sources to achieve it.
Using overlayfs would also allow changing the package path (`/pkgs` here, or nixos package store `/nix/store`) at runtime, because the software would not require patches, so people could install an additional store in their home directory for instance.
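A rough sketch of that idea, with an assumed `/pkgs/<hash>-<name>` store layout (nothing here is an existing tool, and the mount needs root or a user namespace):

```shell
# Hypothetical: build a process-local root by stacking read-only package
# directories with overlayfs, then enter it. The /pkgs layout is assumed.
enter_pkg_root() {
  local target=$1; shift               # mount point for the merged view
  local lower
  lower=$(IFS=:; printf '%s' "$*")     # lowerdir: package dirs joined by ':'
  mount -t overlay overlay -o "lowerdir=$lower" "$target"
  chroot "$target" /bin/sh             # or bwrap/unshare for a real setup
}

# e.g. enter_pkg_root /run/proc-root /pkgs/aaa-glibc /pkgs/bbb-coreutils
```

Since the lowerdir layers are read-only, the same package directories can back any number of process roots at once, which is what makes a shared content-addressed store attractive here.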
Sounds like you're leaning more towards the Qubes OS model, even though I'm sure you don't want to go that extreme.
Either way I'm very happy with Atomic Linux, it's a huge improvement. I'm sure there will be more improvements that will offer separate rootfs for each process. Isn't that already handled by running flatpaks in their own container root?
> Sounds like you're leaning more towards the Qubes OS model, even though I'm sure you don't want to go that extreme.
Qubes OS is full virtualization. I don't want to boot a separate kernel for every process I want to start. That is not very useful.
> Either way I'm very happy with Atomic Linux, it's a huge improvement. I'm sure there will be more improvements that will offer separate rootfs for each process. Isn't that already handled by running flatpaks in their own container root?
As the article here says, not all apps can be used as flatpak.
Currently flatpak is mostly for GUI desktop apps; most common CLI tools are missing. For instance, there is no gcc flatpak.
Also the layers under a flatpak are not separated by individual packages but by broader runtime environments, thus they contain more stuff than each individual application requires. I want a system that could even be deployed on a small embedded device, because it is so generic.
I have containers for everything CLI. Like I'll have one specialized for build tools, one for manipulating qemu images, one for Terraform/ansible and so forth.
I even build cargo packages in a container, install the binary, and then run it outside the container.
> As the article here says, not all apps can be used as flatpak. Currently flatpak is mostly for GUI desktop apps, most common cli tools are missing. For instance there is no gcc flatpak.
I would consider Fedora Silverblue/CoreOS, NixOS and even flatpak as a sort of hacky way to implement that. I don't know of any PoC for this. I would guess that there might be some research budget available for that approach.
Why do the global overrides for Flatpak need "!~/.ssh" when they already have "!home"? Doesn't the latter deny access to the whole home directory? (If not, I would have expected many more entries in that deny list.)
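For reference, the kind of file I mean (the path follows flatpak-override(1) conventions; the entries are abbreviated from what's being suggested):

```ini
# ~/.local/share/flatpak/overrides/global  (user-level global overrides)
[Context]
filesystems=!home;!~/.ssh;
```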
Doesn't ibus manage that for you? None of my programs have access to this file, but I'm still able to use the changes I've made (as long as I restart ibus after changes).
*shrug* All I know is I fixed some issues with compose sequences by giving flatpak apps access to XCompose. It may be something with Electron apps not properly using ibus?
Alpine, for instance, but taking care not to install non-free stuff. I already run a libre kernel, so most of the stuff I install is fine, such as mosh and its dependencies.
I thought it was going to be an exciting secure sandboxing solution, but found out the people running it just make terrible choices overall. Apps don't use their conventional names, so chromium ends up becoming org.chromium.Chromium and Visual Studio Code is com.visualstudio.code. They use FUSE for mounts, which isn't necessary in modern Linux kernels that support non-root overlays (think Podman).
The whole point of sandboxing is to "hide" your real file system and only mount what you need, or permit access through D-Bus... however, you'll find several apps that default to host access and access to all of your /dev. They only provide a few device options as well, so if you just want to give access to a webcam, you have to provide all of /dev in most cases. Flatpak also leaks all kinds of info about its sandbox, so any app can easily detect that it is sandboxed.
If you want to set up custom mounts, for example to mount a specific host directory into a different sandbox directory, you can't. It has to be a direct mapping from your host, except it prevents you from mapping /etc directories. To work around this for sandboxed home directories, you can use a persistent directory mapping... but now you'll be trying to access your files in `~/.var/app/org.chromium.Chromium/Downloads`.
They also use a file forwarding concept, which doesn't work well when mixing arguments and files. For example, try using org.vim.Vim to pass arguments and open a file from the command line. Or, try editing system files with Flatpak where you need sudo or some privilege elevation. Why doesn't org.vim.Vim get aliased and work like regular vim?
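(You can at least bolt the alias on yourself; a sketch, assuming the Flatpak is installed. It doesn't fix the file-forwarding quirks or privilege elevation, just the name:)

```shell
# Hand-rolled "alias" for a Flatpak'd vim; put this in ~/.bashrc.
# Only papers over the naming problem, nothing else.
vim() {
  flatpak run org.vim.Vim "$@"
}
```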
The typical solution suggested to fix anything is to add `org.freedesktop.Flatpak=talk` to the session bus and use flatpak-spawn from within the sandbox. Many users do this without knowing the consequences. This is generally terrible advice, as it will give any app in the sandbox complete host access. But what happens when you need access to host tools or native messaging apps from inside the sandbox? Again, the top answers are to use flatpak-spawn.
They have "SDK Extensions" for a few apps, which are like binary dependencies, but many tools needed for things like Visual Studio Code just don't exist. Try using the Ansible extension in VS Code, for example. Packaging them is a nightmare because they use a hodgepodge of tools/scripts inside a random GitHub repo to manually fetch all the Rust crates, node modules, Python packages, etc. and add them to a nonstandard file inside your build manifest. It makes no sense why they never added this functionality to the builder itself. Development seems to be stalling.
If you do build an app, good luck getting it on Flathub without some authoritarian mod telling you they don't like your app (see the Popcorn Time decision).
Now, there are some good things. The build manifest is pretty neat. I like the idea of having a simple manifest to build apps that integrates with build tools, but the external scripts it relies on make it suck.
Sandboxing isn't a unique concept. I can see an enthusiastic Rust/C/C++/Go/TypeScript developer or developers doing the same thing 100x better. IMO sandboxed apps shouldn't even know they are sandboxed. Each app should have a clean environment and ideally you'd pick and choose what files/folders you want to give the app access to while it is running, and not let the app builder decide for you. Also, the user should pick and choose where to mount directories if they choose to do so, not make ~/Documents mount to ~/Documents... what if I want to mount /mnt/dev/docs to ~/Documents? If you care about this stuff, then you may end up frustrated like me.
> Each app should have a clean environment and ideally you'd pick and choose what files/folders you want to give the app access to while it is running, and not let the app builder decide for you. Also, the user should pick and choose where to mount directories if they choose to do so
It surprises me that Linux has no proper and detailed sandboxing out of the box. Are Linux users running untrusted closed-source applications and potentially backdoored open-source programs with full privileges? Maybe they should just post their root password online, it won't get worse anyway.
Security is in terrible shape. Right now, for power-consumers, the most secure OSes for access control are GrapheneOS (AOSP) and Qubes OS. Windows (with privsep) and desktop Linux are (were) good at stopping drive-by printer driver installs, but terrible at stopping your browser from running rogue glowware that exfiltrates blackmail material to use against you (or plants felony charges as leverage). Qubes puts everything in boxes so you keep the spam and the ham in separate boxes, and Graphene gives you Android's use of SELinux as well as Storage Scopes (it's like unveil()).
Graphene is nice but only works on Pixel hardware. /e/OS is a good alternative. Qubes isn't really about access control so much as it is about having disposable VMs for doing sensitive things.
I wish I could use Graphene without needing to buy into their anti-OSS stance and ideology, though.
The core idea is sound, but I frequently see:
- MicroG is "insecure" because it requires "signature spoofing" (reality: LineageOS allows only MicroG to replace only Google Play Services - I wouldn't consider that "insecure" and the Graphene devs never explain further)
- Open source software is less secure than closed source software (reality: mixed bag)
- F-Droid is "insecure" (I can't address this one because I don't understand the point of view and I've never seen it clarified)
- GrapheneOS executes Google Play Services in a "sandbox without any special permissions" (reality: there is a repository full of code that allows sandboxed Google Play to behave in a way other apps can't within the sandbox)
- Automatically installing updates is necessary to "be secure" (reality: updates might add or remove vulnerabilities. Failing to update when a vulnerability exists is insecure, but updating automatically is not a clear win for a savvy, aware user)
- System backups are not a part of a security policy (reality: having the ability to restore a backup is necessary to mitigate damage caused by an exploit)
- Pixel devices are the only "sufficiently secure" phones (reality: it might be convenient targeting a single platform, but there are plenty of phones now that provide the technical capabilities Graphene requires)
- Privacy is not part of security (reality: security is classic CIA - Confidentiality, Integrity, Availability. If you can't keep confidentiality you're missing one of the pillars)
- Complying with Google's Compatibility Test Suite is "more secure" than deviating from it (reality: the CTS is designed to favor app developers over end users. The app developers might or might not be better at looking out for users' interests than the users themselves)
Everything I'm representing as a GrapheneOS dev position I've heard straight from the horse's mouth on their forum and/or in their documentation. I think sometimes they hold opinions which aren't 100% aligned with what I'd consider a maximally secure system.
That said, they have delivered a pretty solid OS if you can live with its various compromises. The one that really grates on me is the last one: I don't want an OS that chooses an app developer's "don't copy this file" flag over my desire to copy it!
I tried GrapheneOS recently and switched back just because I lost my automatic backups. The backup and syncing story should have been solved before some of the other design choices, IMO. I was also disappointed to see that there is no way (at least none that I could find) to prevent the warning message on each boot, which seems to delay startup.
Other than that, I could be content with it... it may not be perfect, but it felt good to know that I wasn't tied into the Google ecosystem. I liked their contacts and location sandboxing.
I've heard some complaints about GrapheneOS's choices and wish that there was more communication from the lead developers about their design choices, just to understand them a bit better. However, I do think some users are just reactionary overall to some things. They hear Chromium and just assume it is worse, but honestly have no clue from a development perspective. All your complaints sound legit though and I too would like to know the answers, because you bring up very valid points.
There has been some drama in the F-Droid community, and I too tried alternatives, but in the end... my favorite apps all recommend F-Droid, and I find using F-Droid 10x easier than the alternative of managing apps from random locations. I believe the biggest complaint with F-Droid is that it is a single point of failure or compromise, since they sign the packages and updates can get delayed; in reality, though, adding random Git repos to Obtainium or whatever is just as insecure. I believe F-Droid even does some security scanning too (though I could be wrong), which is an added bonus.
Most of the GrapheneOS information you might be interested in is in a few different places (Reddit, Twitter, Discord) but quite predictably they have become unworkable for searching especially if you don't have an account on those platforms. GrapheneOS also has their forum for that purpose but it is quite young compared to the project.
>I was also disappointed to see that there is no way to prevent the warning message on each boot
That bit is part of Android https://android.googlesource.com/platform/external/avb/+/mas... and would be the same on any device following the same implementation model (most of the Android world although some don't do it properly). They have been looking into a hardware company partnership though, which could potentially mean a device that boots GrapheneOS without such a warning.
>All your complaints sound legit though and I too would like to know the answers, because you bring up very valid points.
Some of the points raised might be argued by community members but that does not mean they are representative of the GrapheneOS project's positions. I would stick to the official GrapheneOS accounts on social media if you want the project opinion. The founder is also a fully public development member of the team so you can look to their posts for official positions as well.
>in reality adding random Git repos to Obtainium or whatever is just as insecure.
Yep, I believe this isn't officially recommended by GrapheneOS either. GrapheneOS has always been clear that one of the big benefits of basing on AOSP is the massive ecosystem of open-source Android apps. They also integrated F-Droid and collaborated with the project in the past. Their requirements for app stores have developed in line with their project goals over the years, so there is a lot they now look for in an app repo that F-Droid does not provide.
> MicroG is "insecure" because it requires "signature spoofing
But it is literally a less secure option than what GrapheneOS does provide: sandboxing the original Google API itself.
> pixel
Yes: if you can literally just start any OS from the hardware level, you don't have security. The same way you can just read/edit an unencrypted Linux distro's partition and "log in" to that same OS.
> But it is literally a less secure option, than what Graphene does provide, sandboxing the original google api itself.
Let's define "less secure" as "vulnerable to additional threat vectors when compared to another option".
If the operating system allows a verifiably signed MicroG in /system to replace Play Services, what is the threat vector that opens up? Keep in mind that I'm intentionally trusting MicroG more than Google, so anything that happens as a result of a compromise of MicroG itself is a trade-off for avoiding anything that happens as a result of a compromise of Google Play Services and so not "less secure".
Explain what "less secure" means in this context, please, because I don't get it and nobody ever has.
> Yes, if you can literally just start any OS from the hardware level, you don’t have security.
That's not how Samsung Knox works. It's verified boot.
Also, I don't care if any OS can start if any OS can't decrypt the encrypted root partition due to it using a TPM-held key backed by measured boot registers, and that's also possible on several phones.
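A sketch of that setup on desktop Linux, using systemd-cryptenroll (this assumes a LUKS2 volume and a TPM2 device; the partition path and PCR selection are illustrative):

```shell
# Bind the LUKS unlock key to the measured-boot state. PCR 7 tracks the
# Secure Boot policy, so a tampered boot chain yields different PCR values
# and the TPM refuses to release the key.
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then let the initrd try the TPM2 token at boot, via /etc/crypttab:
#   root  /dev/nvme0n1p2  none  tpm2-device=auto
```

The same principle applies on phones: the "any OS can boot" objection doesn't matter if the data only decrypts under the expected boot measurements.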
As I said twice, LineageOS allows the exact signature of MicroG to replace the exact signature of GMS (the core Google Play Services component).
An app signed as MicroG can't spoof anything other than GMS. Nothing with any other signature can spoof GMS. An app not signed as MicroG cannot spoof anything.
I'm unclear what the problem is here, because the desired outcome is MicroG replacing GMS and that is exactly what LineageOS allows, without allowing anything else.
I hadn't read that and it's informative. Let's paraphrase and dissect its arguments.
> 1. F-Droid releases are signed with F-Droid keys, and Google Play releases are signed with Google Play and app developer keys both
I don't feel this is an issue. Google is the party in exclusive control of which app developer keys are deemed valid, so the app developer signature adds exactly nothing. You either trust the channel to give you a correct application or you don't, and I think it's quite silly to implicitly suggest that end users would verify app signatures to notice a compromise of the app store they use.
> from June to November of 2022, [F-Droid's] guest VM image officially ran an end-of-life release of Debian LTS
This is in the past and not a strong argument. We don't know what individual app devs run on their build boxes - some of them are surely the devs' own laptops infected with malware, so this argument cuts more strongly against Google than for it.
> While deterministic builds are a neat idea in theory, it requires the developer to make their toolchain match with what F-Droid provides
This is straight-up dismissive bullshit. Reproducible builds are a valid alternative to signing the build output, and in my opinion superior as they guarantee you can actually build the app from its sources.
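A toy sketch of why reproducibility can substitute for trusting a signer: if two independent parties can produce bit-identical artifacts from the same sources and toolchain, anyone can check the published binary against their own rebuild. The byte strings below are stand-ins for real build outputs.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical artifacts: the developer's published build and an
# independent rebuild from the same public sources.
dev_build = b"\x7fELF deterministic output"
rebuilt = b"\x7fELF deterministic output"

# With a deterministic build the digests match, so the binary is verifiably
# tied to the sources -- no trust in either signer's key is required.
assert sha256_of(dev_build) == sha256_of(rebuilt)
```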
The paragraph containing the explanation of why deterministic builds are a bad idea is exactly the type of paragraph I was railing against when I wrote my previous comment: it undermines my faith in the Graphene devs' judgement.
> 2. F-Droid app releases are slower and less frequent than Google Play releases
Some apps update on F-Droid faster. This is mostly down to the app devs and the relative popularity of the two stores. The Graphene devs try to make the argument that there's something in the F-Droid build process that's not automatic, but so far as I know this is not true.
Also, again, my point of view is that "faster updates" does not mean "more secure". If you quickly release a new vulnerability that's bad, so I'd rather have strong review of each new software version.
> the fact that their build process is often broken using outdated tools
Users should be free, when informed, to do this. It's not a security problem. F-Droid displays the target SDK level. I don't think we should automatically consign all unmaintained software - and devices - to the graveyard in the name of security.
> 4. We generally don't like the dev practices of the F-Droid maintainers
Outdated information here again, talking about things F-Droid doesn't do any more, like using jarsigner. If they wanted to say "we don't trust F-Droid to do the right thing," they should just say that and delete the rest of the page; it's an opinion, not some kind of factual stance.
Or, you know, pick the particular problems you have with their practices, and then start supporting them instead of tearing them down when those problems are fixed. As they are now.
> Their client also lacks TLS certificate pinning
The client downloads apps and verifies their signatures. This is the first argument here that I see as legitimate: although the problem is minor, connecting to the "wrong" store means you see the "wrong" versions of apps. But of course you couldn't use this to inject malicious code because, again, verified signatures. The devs on the page just say it could "lead to various security issues," which is too vague for me to address more directly.
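For what pinning adds on top of CA validation, here is a minimal sketch in Python. This is not F-Droid's actual code; the host and fingerprint would come from the client's configuration, and the comparison against a known-good fingerprint is the pinning step.

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_with_pin(host: str, pinned_fingerprint: str, port: int = 443) -> None:
    """TLS-connect to host and abort if the server cert doesn't match the pin.

    Unlike plain CA validation, this rejects any certificate, even a
    CA-signed one, whose fingerprint differs from the expected value.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if cert_fingerprint(der) != pinned_fingerprint:
                raise ssl.SSLCertVerificationError(
                    f"certificate pin mismatch for {host}")
```

Without the pin, a rogue or compromised CA could present a valid-looking certificate for the store; with it, the attacker can at worst serve stale or "wrong" app versions, since the app signatures are still verified separately.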
> 5. The F-Droid client has a confusing UX
Not a security issue.
> 6. The F-Droid client has a confusing UX specifically about permissions
Not a security issue.
> Play Store restricts the use of highly invasive permissions such as MANAGE_EXTERNAL_STORAGE which allows apps to opt out of scoped storage if they can’t work with more privacy friendly approaches
I should have the ability to install applications that work how I want them to work. Google shouldn't be able to prevent me from having a Gamecube emulator that stores its save files on my MicroSD card in a performant fashion.
My overall verdict is that the page you linked (thanks) confirms my opinion of the GrapheneOS developers' stance: they choose a position, and then they throw together whatever arguments they can find that might be construed to support that position, regardless of merit. They intentionally use vague wording when they know the position they're supporting is weak, to instill fear in less-tech-savvy users who cannot sort the bullshit from the substance. It leaves me with the feeling they're not arguing in good faith.
I can't speak to the impression you get from the style, but I can at least say that GrapheneOS argues quite a few of the points made about F-Droid differently. They have also always been big advocates for open-source Android apps and explored/implemented F-Droid integration in their OS at points in the past.
I'm familiar with the PrivSec people and they are different from GrapheneOS with a tiny bit of community/mod overlap. There isn't any GrapheneOS dev presence in the PrivSec project.
I hate Pixel exclusivity and dropping past-EOL phones, but Storage Scopes (aka unveil()) is the biggest reason I stay vs switching to an alternate.
>The one that really grates on me is the last one: I don't want an OS that chooses an app developer's "don't copy this file" flag over my desire to copy it!
Wholly agree. I think we have a huge problem with loss of [device] owner-sovereignty in the economy, with enshittification, rentiering, and surveillance capitalism rising to sap away more value from customers like customers are some sort of planter garden to glean from.
Business today has become like the drug-dealing business: how do we get 'em hooked and coming back for more and more? Selling a lasting, quality product at a painful but fair premium is out; sucking wallets dry $1.49 at a time for things that last a short while before "engineered obsolescence" kicks in to force a re-buy is the in game now.
Linux has lots of sandboxing features. Many Linux distros only use them minimally because the value proposition is poor: Most distros don't ship closed-source applications and in the last 30 years there have only ever been a tiny number of backdoored open source packages. OTOH, sandboxing tends to break the kind of complex workflows that users are fond of.
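For a taste of those features, here is a bubblewrap one-liner that runs a command with a read-only /usr, fresh /proc and /dev, and all namespaces (network, IPC, pids, etc.) unshared. The symlink layout assumes a merged-/usr distro; adjust to taste.

```shell
bwrap --ro-bind /usr /usr \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin \
      --proc /proc \
      --dev /dev \
      --unshare-all \
      --die-with-parent \
      /bin/ls /
```

The sandboxed process sees only what was explicitly bound in, which also illustrates the workflow-breakage complaint: anything not mounted simply doesn't exist for the app.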
Another angle to point out is that some distros do bake in more protective features; consider Fedora using SELinux. Then, again, keep considering SELinux and observe how many things it breaks the moment you set a single foot off the trodden path.
> there have only ever been a tiny number of backdoored open source packages
First, these packages might contain vulnerabilities; second, I want to be able to run third-party software as well, including closed-source software, Windows software, and random scripts from GitHub. If you restrict yourself to official repositories, you cannot do much useful work.
> consider Fedora using SELinux.
Because SELinux is not what I want. I want a GUI with checkboxes like "Allow playing sound" or "Allow reading CPU model," rather than having to describe access to every individual file.
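Flatpak's override mechanism is roughly that checkbox model expressed at the command line (it's what Flatseal wraps in a GUI). An illustrative example, with a hypothetical app ID:

```shell
# Deny the app sound output and network access for the current user.
flatpak override --user --nosocket=pulseaudio --unshare=network com.example.App

# Inspect the resulting per-app overrides.
flatpak override --user --show com.example.App
```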
The Linux sandboxing features are mostly quite poor tbh. Compare to Apple's stack (if you include the private SBPL) and it doesn't come close. Apple is way ahead of both Windows and Linux when it comes to pervasive sandboxing.
I'm not super familiar with Darwin, but I am skeptical on the grounds that macOS doesn't have containers, which are constructed from the same primitives as a sandbox.
It does indeed have something similar to containers, built on their sandboxing tech rather than namespaces. Take a look in ~/Library/Containers to see the Apple equivalent.
It’s not about backdoors. A good chunk of Linux userspace is written in unsafe languages (IMO, for no good reason). A well-intentioned buggy program opening bad-intentioned data is enough to cause trouble, and for some reason no one seems to care about it at all. Linux distros have no security whatsoever, by any practical definition.
The issue is often not the individual features but delivering them as a consistent, usable package. Flatpak is getting there, but it's a long road: you don't just need some kernel sandboxing features, but also toolkit extensions (portals) to make files available to sandboxed applications, etc.
> Many Linux distros only use them minimally because the value proposition is poor: Most distros don't ship closed-source applications
That's kind of putting your head in the sand. A lot of users need closed-source applications for work (Slack, Zoom, Chrome, Obsidian, the non-open-source JetBrains IDEs) or for fun (games, Steam, Spotify). Some have web apps, but they generally work less well.
> a tiny number of backdoored open source packages
Backdoored applications are not the only issue. Also non-backdoored applications with vulnerabilities, basically any client, etc. The saving grace of the Linux desktop is that it's not much of an interesting target due to relatively low popularity. Otherwise actors would be hunting for vulnerabilities in RSS readers, chat clients, etc. (remember that the surface is not only the application itself, but also any image/video decoding libraries, etc.).
> OTOH, sandboxing tends to break the kind of complex workflows that users are fond of.
That's an issue, especially for Linux power users. For most users, sandboxing can be done without too many issues (see e.g. sandboxed Mac apps).
Linux is a kernel. Let alone sandboxing, it does not even have an init process or bootloader by default. You can install all the utilities expected of a modern desktop OS if you want, and most distros will do it for you.
Not every app needs to be run with "full privileges" (or "as root" as some might call it), but it does usually end up being a binary decision on most distros if you're not careful. Either it can change everything or it can only affect the entire user profile it's running under. Bwrap and friends allow you to do a lot of extra isolation and get much more granular, but non-root console-only apps are generally safe to run (and safer still to run under their own user instead of yours).
[0] https://flathub.org/apps/com.github.tchx84.Flatseal
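The "run it under its own user" idea above can be as simple as the following (the account and binary names are illustrative):

```shell
# Create a dedicated account for the untrusted app, then run it there so it
# cannot read or modify your own user's files.
sudo useradd --system --create-home untrusted-app
sudo -u untrusted-app -H /usr/bin/some-untrusted-app
```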