Hacker News
Subgraph OS: Adversary resistant computing platform (subgraph.com)
182 points by mboroi on Nov 23, 2016 | 87 comments



This is a step in the right direction (in the sense that we should sandbox applications harder), but in my opinion we have to change fundamental aspects of our stack (e.g. Proprietary Firmware <=> Linux <=> GNU-System-Libs <=> X <=> GTK <=> Evince), to gain more security.

In particular I think it is harmful that all applications share the same view of the FS and in principle have access to e.g. the full set of Unix-ish capabilities. My bet is that the solution is via better type systems, e.g. an application that is a desktop game could have something like

  exec :: GameConfig -> WindowControl ()
where GameConfig is e.g. some configuration type specific to the game and WindowControl is similar to IO (), but limited to interacting with a drawing library (e.g. OpenGL) and input systems (keyboard and mouse local to the window).
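As a rough illustration of the idea (in Python rather than Haskell, and with made-up names), the entry point receives only the capabilities its signature declares, instead of inheriting ambient access to everything:

```python
# Hypothetical sketch of the "typed entry point" idea: instead of a
# free-for-all main(), the game's entry point receives only a
# window-drawing capability -- no filesystem or network handle ever
# appears in its signature.

class WindowControl:
    """Capability limited to drawing and window-local input."""
    def __init__(self):
        self._pixels = {}

    def draw_pixel(self, x, y, color):
        self._pixels[(x, y)] = color

    def poll_keyboard(self):
        # Stub: would return key events local to this window only.
        return []

class GameConfig:
    def __init__(self, title, width, height):
        self.title, self.width, self.height = title, width, height

def exec_game(config, window):
    # The game can only do what WindowControl exposes; nothing here
    # grants open()/socket()-style ambient authority.
    window.draw_pixel(0, 0, "white")

launcher_cfg = GameConfig("pong", 640, 480)
wc = WindowControl()
exec_game(launcher_cfg, wc)
```

In a real type system the compiler would enforce that `exec_game` can't reach anything outside `WindowControl`; in Python this is only a convention.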

At the moment every application just implements `main()` and is good to go and we separate between kernel- and user-space (and a VM on top e.g. Android and Apple), and maybe this is too coarse.

I think pledge (http://man.openbsd.org/pledge) is also a step in the right direction, though I would prefer the inverse: an application goes through a setup process where it gains the capabilities it needs, whereas with pledge you start with everything and ask to drop what you don't need.


You might be interested in coeffects. Just like monads can be used in a language like Haskell to model effectful operations, you can use the dual of monads, comonads, to model the dual of effects, coeffects!

Coeffects can be used to represent the "context" of a program, which includes things like permissions or capabilities that the program may have access to. They provide a fascinating way of modeling all kinds of information that is traditionally not handled by even powerful type systems like OCaml's or Haskell's.

You can read a lot more about the topic on Tomas Petricek's website: http://tomasp.net/coeffects/

I especially recommend this short article from 2014: http://tomasp.net/blog/2014/why-coeffects-matter/


Thank you, that sounds very interesting.


> limited to interacting with a drawing library (e.g. OpenGL)

Isolation between applications running on the same graphics hardware is rather weak (GPUs don't have something like an MMU), so that exercise is left to the reader ^W driver getting a lot of stuff right. Most don't, or didn't. That's why e.g. Qubes doesn't allow sharing a graphics card among domains (well, that and the fact that the drivers don't support it either), so an untrusted system can only get its own dedicated GPU, with no sensitive data ever going on the same hardware, and the DMA capabilities of the GPU are kept in check by the IOMMU of the CPU. The host only gets involved in blitting the framebuffer somewhere else for display.


I think you're looking for capability-based security. It can be done at the OS and language levels to enforce POLA pretty easily. Here's a page with intros plus deployments in web browser and GUI prototypes:

http://www.combex.com/tech/index.html

Most prominent language is E:

http://erights.org/index.html


Any solution which requires rewriting existing software is impractical. There's nothing wrong with an application having the illusion of full control of the system that sandboxing provides. I'd like to see an OS where all applications are run in a sandbox (e.g. LXC containers). Each application should have metadata which describes what special access it needs (e.g. Android's permissions), but the user is free to enable or disable these permissions at will. Applications can work as they were originally written because any access they expect to have that was denied by the user can be faked. For example, say an application requires access to a database containing your personal contacts. Instead of blocking the application's access completely and requiring the application to correctly handle the case when access was denied, the OS can provide a dummy contacts DB instead. The application then proceeds as normal without knowing access was denied. Firewalling in both directions like SubgraphOS is doing is also essential.
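A minimal sketch of that "fake it instead of failing" idea, with a hypothetical `open_contacts()` call standing in for the OS-level permission check:

```python
# Sketch of deny-by-faking (all names here are invented): when the
# user revokes the contacts permission, the OS hands the app an
# empty dummy store instead of an error, so an unmodified app keeps
# working and cannot tell which one it got.

class ContactsDB:
    def __init__(self, entries):
        self._entries = entries

    def list_contacts(self):
        return list(self._entries)

REAL_DB = ContactsDB([("Alice", "555-0100"), ("Bob", "555-0101")])
DUMMY_DB = ContactsDB([])  # what a denied app sees

def open_contacts(app_permissions):
    # Same call, same interface, either way -- the app never sees a
    # "permission denied" error it would have to handle.
    if "contacts" in app_permissions:
        return REAL_DB
    return DUMMY_DB

granted = open_contacts({"contacts"})
denied = open_contacts(set())
```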


> Any solution which requires rewriting existing software is impractical.

That is true, but there might be a point where we have languages, more secure OSes, frameworks, or libraries where building on them is simpler than hardening existing software, and I think it will look similar to typed functional programming.

IMHO sandboxing is popular because OSes have failed - or maybe it was just out of scope, as we may have underestimated the wide range of attack vectors - at isolating processes that may be "evil". Sandboxing at the moment is extremely low-level: if you are paranoid about security you probably sandbox a full Linux inside some hardened kernel-thing (seL4 and similar approaches - I am not even sure if seL4 is used in production), and that is far from trivial, as you have to bridge the "userspace-style" Linux to the HW that Linux wants to talk to. If you want to secure Linux from the HW itself (or even provide a better alternative) you have to rewrite large parts of it (drivers), and they are mostly very specific to the Linux API.

Now less paranoid people can use something like Docker/Linux containers/... and maybe a combination of libraries and distributions (like Subgraph, etc.), but Docker's isolation security record is controversial. Sure, if set up correctly it is probably more secure than plain processes/JVMs (this is also controversial), but it just feels hacky, like an afterthought that might not be able to guarantee the security it advertises (I hope I am wrong here).

As a programmer you often know many constraints about your software that are (currently) extremely hard to communicate, so you skip stating these constraints and your software has attack vectors that might have been avoidable in the first place.


The well-developed, well-security-researched, well-deployed application platform you're looking for is the web. You get exactly this sort of setup if you use WebGL: you interact with an API that expects to be called by unprivileged hostile applications, instead of with a library that helps your direct access to the graphics card driver. Every individual application lives in a separate protection domain (an HTTP origin), and communication between them is limited to message passing with the consent of both sites. The language itself avoids all assumptions of direct access to system resources.

Running everything in a web app is, admittedly, a fundamental change in the stack. But it's fortunately one where a lot of people have independently put work into making this happen. I do my most security-sensitive work on a Chromebook (using the SSH and mosh apps from the Chrome app store) for precisely this reason: it's the right security model, and it's available in my local computer store and works.


> You get exactly this sort of setup if you use WebGL

> I do my most security-sensitive work on a Chromebook

I would highly recommend you use a WebGL whitelist then. WebGL might have been designed with security in mind, but the OpenGL drivers it is nevertheless a very thin wrapper around were, I can assure you, not. WebGL allows some surprisingly direct ways of manipulating hardware, and there are a million attack vectors lurking in every WebGL implementation/OpenGL driver combination.


That's a good point. What else should I whitelist other than WebGL? (Is there a general hardening guide for an off-the-shelf, un-jailbroken Chromebook?)


Video, audio. Complex binary formats that require high performance programming where often security has taken a back seat.


You are right, that is the most secure platform at the moment to distribute graphical user interface programs, but I think it should go further.

E.g. I would go so far as to say that it shouldn't be possible by default for the server to send me a huge HTML/CSS/JS blob that does all kinds of weird stuff (e.g. reporting back to the host, mouse movement analysis, etc.).

I am probably in a minority with the following opinion, but I think a page shouldn't even have the ability to enforce a layout, which in the end draws pixels on your screen. The web is a step forward and HTML is a good idea, but it is not used anymore in its intended form - it works very well for text distribution, but richer applications have to resort to JS.

Now if you disable JS you could in theory render it as you like, but this is far from trivial.

//edit:

Let's consider a bus company offering a search to find offers that get you from A to B (i.e. a route planner, trip finder, ...).

This app shouldn't ship you random HTML/JS, but just the information you need to query its database, which is simply some GETing and POSTing of specified requests. When connecting to the app (going to https://trip-search.example.com) the host could disclose itself as an application having type `(From, Date, To, Date) -> Maybe TripList` or something like that (I think one gets the idea).
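A hedged sketch of what a client-side stub for such a typed endpoint could look like (all names here are made up, and the in-memory timetable stands in for the actual network request):

```python
# The host discloses a signature like
# (From, Date, To, Date) -> Maybe TripList; the client then only
# ever exchanges these typed values, never arbitrary HTML/JS.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Trip:
    origin: str
    destination: str
    departs: date

def search_trips(origin: str, earliest: date,
                 destination: str, latest: date) -> Optional[List[Trip]]:
    # Stand-in for a GET against https://trip-search.example.com;
    # a real client would serialize the query and parse the reply
    # against the disclosed type.
    timetable = [Trip("A", "B", date(2016, 11, 24))]
    hits = [t for t in timetable
            if t.origin == origin and t.destination == destination
            and earliest <= t.departs <= latest]
    return hits or None  # Maybe TripList: None when nothing matches

found = search_trips("A", date(2016, 11, 23), "B", date(2016, 11, 30))
missed = search_trips("A", date(2016, 1, 1), "C", date(2016, 1, 2))
```

The point is that the client renders the `TripList` however it likes; the server never gets to push layout or code.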

The web is great, but I think security should and must go further; I do not want to run random Turing machines.


> The web is great, but I think security should and must go further; I do not want to run random Turing machines.

Exactly. I want a document to read, not an application to execute. Sadly that battle is feeling more and more lost as time goes by.


I'm not sure I get why enforcing a layout is a problem from the point of view of application distribution - if nothing else, an app should be able to embed a text renderer and draw onto a <canvas> itself. (It's probably a terrible idea, but it should be able to, because a text renderer is just a program that takes in data and outputs some pixels, and that class of programs is useful.)

I do certainly agree that we need a way of distributing hyper-text content efficiently and in a standard way. Unfortunately the web seems to be moving away from that goal, and AMP isn't quite right and has its own problems.

I'm not sure how I feel about permissions by default. I think permission fatigue is definitely a thing, and for most apps I don't actually care about them exfiltrating mouse movements to the host, as long as they can only exfiltrate it to the one host. On the other hand, I'm a little weirded out that if I plug my piano into my Chromebook, JavaScript can receive and send MIDI events without any permission prompt.

EDIT to your edit: I'm totally okay with running random Turing machines, if their execution environment is constrained (which it is). The only resources that an arbitrary Turing-complete programming language can access are any external resources that it's specifically given an interface to, plus time/memory/power consumption. The web platform is pretty good (though, yes, not perfect) at locking down the interfaces given to JS. So it's just a matter of resource limits, which is fairly easy; I'm not always thrilled with how much CPU and battery life Twitter takes, for instance, but it's always killable. (Again, in theory.)

You can construct something that's capable of using plenty of memory or power out of any sufficiently powerful Turing-incomplete language. See, for instance, CSS. (I bet with the mechanism you're proposing, you can end up chaining server-side APIs in ways that let you DoS the client, because the server is always more powerful.) And given how easy it is to achieve Turing-completeness by mistake, it doesn't seem like a productive constraint.


> I'm not sure I get why enforcing a layout is a problem from the point of view of application distribution - if nothing else, an app should be able to embed a text renderer and draw onto a <canvas> itself.

Yeah, but in my opinion that is already a specific type of application, like e.g. a computer game, PDF viewer, plotting application.

It is totally different from e.g. an application like Wikipedia or a news page, that provides mostly text and images.

In the end there should just be more of the functionality on client side (rules how to render news pages, how to render wikipedia, etc.).


Serious question - what's the difference between that, and running all apps in their own chroot jails?

It seems like the goal of this app is to isolate things from the network, and from each other. A web app or chromebook method isolates from other apps, ok, fine, but not from the web. Seems more like jail in that sense.

Maybe I'm just misunderstanding.


That's a good question! The simple answer is that the web is about whitelisting, whereas a chroot jail is about blacklisting, and blacklisting never works. (Whitelisting, to be clear, also has no guarantee of working, but at least it's possible for it to work.)

When you jail a UNIX process, you start from a model that gives you full access to everything, and gradually revoke access until you're convinced it's secure. There are all sorts of things you might overlook. For instance, if it's just a chroot, there's no network isolation; an app can connect to a server listening on localhost, and it looks like it's coming from localhost. It can connect to a server on the local network, and it looks like it's coming from the host (which is bad if you have, e.g., a corporate network that lets you access interesting data without logging in, or a home router with a default admin password, or many similar cases).

And if you introduce a new mechanism, the chroot probably gives you access to it. For instance, if the chrooted app is able to access my X11 session, it has a ton of powers; it can keylog everything I do, for instance. Even if I mark it "untrusted" a la ssh -X, it has complete powers over everything else that's "untrusted". You could imagine an X11 designed differently, but X11 was designed for trusted apps. Another important case is system calls; a chrooted process has access to every system call, including every vulnerability that might be present. (On some OSes you can restrict what system calls the process can run, but it's still pretty coarse-grained.)

The web starts from the ability to render formatted text with links, which is very close to zero. Everything else is—at least in theory—added from there when safe. Images are safe. Playing audio is pretty safe. Recording audio is probably not safe without permission. (A typical desktop API won't have an easy way to allow one but not the other, and certainly won't have a permission prompt.) Rendering graphics is fine. Rendering 3D graphics is potentially fine, hence WebGL. Rendering graphics on top of someone else's tab is a definite no. Moving your window around or removing its borders is also a definite no. Becoming full-screen requires notifying the user of what just happened. (Again, a typical desktop API won't distinguish these cases and won't give you an easy way to exit full-screen.)

In particular, the web does restrict an app's ability to access the web. An app can freely access its own origin, but it cannot freely access other sites. If http://wiki.internal/ has sensitive data that doesn't require login, a site on the public web cannot retrieve data from there, without the consent of that site. (And the web has already implemented a pretty robust and involved way of handling cross-origin resource sharing.)

If you stick all these things into a desktop API, fantastic! But the web platform is already there, with a number of competing implementations that are all pretty good.


You might be interested in object-capability model[0] systems. It comes from the idea that in most memory-safe languages, before you can call a function or a method on an object, you first need to have a reference to it passed to you. You can easily determine what code operates on an object by looking at where the object is passed. Now imagine if all types of IO interactions followed a similar system.

Right now, most languages have "ambient authorities", references with imbued authority (IO capabilities, etc) that can be obtained by any code anywhere in the program. In nodejs, any code can use the globally-available `require('fs')` call to get a reference to the filesystem module and then use it to make changes to the filesystem freely; the filesystem module is an ambient authority.

In a hypothetical object-capability version of nodejs, `require('fs')` would be invalid, and instead the application could have a single entry-point main function which receives the filesystem module as one of its parameters. In order to use functions that need to use the filesystem module, the main function would have to pass a reference to the filesystem module, or even a different object that follows the same interface. If it's known the function-to-be-called should only need to read files, then the function could be passed a wrapped version of the filesystem module that has all of its writing methods stubbed out for ones that throw errors instead. You can easily sandbox applications on a very granular level by passing them the minimum number of IO authority-imbued objects, and it's easy to review the security of code for looking where IO objects are passed around.
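A rough Python analogue of that hypothetical object-capability design (all names here are invented): `main()` receives the filesystem object as a parameter and hands its callees a read-only wrapper whose write methods raise instead of writing.

```python
# No ambient authority: nothing can `import` its way to the
# filesystem. Authority flows only through passed references.

class Fs:
    """The real filesystem capability, given only to main()."""
    def __init__(self):
        self._files = {"/etc/motd": "hello"}

    def read(self, path):
        return self._files[path]

    def write(self, path, data):
        self._files[path] = data

class ReadOnlyFs:
    """Attenuated capability: same interface, writes stubbed out."""
    def __init__(self, fs):
        self._fs = fs

    def read(self, path):
        return self._fs.read(path)

    def write(self, path, data):
        raise PermissionError("write capability not granted")

def show_motd(fs):
    # This callee only ever sees the read capability it was handed.
    return fs.read("/etc/motd")

def main(fs):
    return show_motd(ReadOnlyFs(fs))

motd = main(Fs())
```

Reviewing the security of such code reduces to following where `Fs` and its wrappers are passed.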

Currently I think Haskell (with unsafe code disabled) is the closest thing to an object-capability language that's popular right now. Some of the terminology doesn't match up -- code doesn't get a reference to an IO monad to do IO; instead it must return an IO action which gets mixed into the IO action returned by the main function to take effect -- but I think many of the benefits come out about the same. There are no ambient authorities. You can follow the control flow to isolate the parts of the code that do IO. I'm not sure if it's possible in Haskell to pass a restricted capability as easily; can you call an IO-returning function (that was written without any sandboxing in mind) in a way that it's not allowed to write files?

There are existing popular capability systems, but they're not as full as object-capability systems. They have object-capability-qualities, but only at the edges. A linux process can ask the OS to open a file and get a file handle, it can start a child process as another user or sandbox it in other ways so that the child is restricted from opening files itself, the parent can pass an individual file handle to the restricted child, etc. But outside of that specific file handling, the code of those processes isn't necessarily written in a very capability-style way. The child process may be written in C, it could put the file handle into a global variable, and any function inside its code could refer to that global variable. In an object-capability language, everything about the program's code follows the authority-comes-from-given-references object-capability style.

[0] https://en.wikipedia.org/wiki/Object-capability_model


We posted same solution about the same time! Haha. Yeah, this stuff is pretty easy in capabilities. They can be extended further with a high-level, systems language. I'd like to see something like SPARK or IDRIS with capability-security built-in along lines of E language. Just something that isn't built on Java.


This is also interesting. I expect there is some connection between this and the mentioned effect systems, at least their goals seem to overlap.


Does Subgraph isolate USB and network? The isolated serviceVMs for USB and network are in my opinion a very strong value proposition of Qubes.

Furthermore, is Subgraph supposed to be an OS for everyday use, like Qubes, or just for anonymous usage like Tails or Whonix? If it's the former, I don't understand why all traffic should be routed via Tor by default - it wouldn't make sense to route non-anonymous traffic (banking, personal mail, etc.) via Tor. It wouldn't be anonymous anyway, and it adds the unnecessary risk of exposure to malicious exit nodes. In this sense I believe the Qubes approach with its optional Whonix VM is superior.

If Subgraph is supposed to be for anonymous usage I'd like to read more about what kind of threat model it is trying to address. I don't think there are any amnesic features like in Tails nor strong isolation between gateway and workstation to prevent IP leaks like in Whonix.


> Does Subgraph isolate USB and network? The isolated serviceVMs for USB and network are in my opinion a very strong value proposition of Qubes.

According to Joanna Rutkowska, developer of Qubes: "Unlike Qubes OS, Subgraph doesn't (cannot) isolate networking and USB stacks, or other devices and drivers."[1]

[1] https://secure-os.org/pipermail/desktops/2015-October/000002...


Thanks for that - it pretty much answers my question. In this case it seems that Qubes exposes less attack surface.


I'm from Subgraph and I disagree.

On Qubes OS the networking VM runs a standard Linux kernel with no special security hardening at all apart from the simple fact that it runs in a separate Xen VM. If an attacker is able to compromise NetVM, they may not have direct access to user data, but they have dangerous access to perform further attacks:

  - Attacks against hypervisor to break isolation
  - Side channel attacks against other Qubes VMs to steal cryptographic keys
  - Interception and tampering with networking traffic
  - Attacks against any internal network this Qubes OS computer connects to.
So if you assume that remote attacks against the Linux kernel networking stack are an important threat, the consequences of a successful attack even against Qubes are pretty bad.

Subgraph OS hardens the Linux kernel with grsecurity, which includes many defenses against exploitation which have historically prevented local exploitation of most security vulnerabilities against the kernel. Exploiting kernel vulnerabilities locally is so much easier, probably never less than an order of magnitude easier. It's so rare to reliably exploit kernel vulnerabilities remotely even against an unhardened kernel that teams present papers at top security conferences about a single exploit:

https://www.blackhat.com/presentations/bh-usa-07/Ortega/Whit...

I know it's contentious to say so, but I don't believe that anybody will ever remotely exploit a kernel vulnerability against a grsecurity hardened Linux kernel, especially since RAP was introduced:

https://grsecurity.net/rap_announce.php

The threat of remotely attacking the Linux kernel through the networking or USB stack was always low in my opinion, but as the threat approaches zero it raises some questions about how justifiable the system VMs are in Qubes OS considering the system complexity and usability impairment they introduce.


I agree with your comments about grsecurity making the kernel much more secure. However, your comments about remote exploits and Qubes are somewhat contradictory. You claim that a remote kernel exploit is very rare/difficult; by that same argument the Qubes NetVM must be very difficult to attack, because it runs no applications or services - it functions as a router and does essentially nothing else. It is only the AppVMs, or any others which run applications, that are vulnerable, and if these are attacked, Qubes's design will likely prevent a permanent backdoor from being installed in that VM and make it difficult for the attacker to gain access to any of the other AppVMs.

I still think Subgraph looks promising and I look forward to your future work.


I'm answering a comment chain about how Subgraph OS does not 'isolate' the network or USB stacks which is frequently brought up as an important deficiency in comparison to Qubes OS. My point is that this isn't a significant advantage of Qubes because such attacks are rare and difficult, and because they're even harder to perform against Subgraph OS.

I wasn't talking about AppVMs at all, but you can of course persistently backdoor Qubes AppVMs in numerous ways by writing to the user home directory. In Subgraph OS we design our application sandboxes to prevent exactly this.


Thank you for your answer, I'll definitely look further into SubgraphOS and grsecurity. I nevertheless believe that the kind of attacks you describe, specially the one against hypervisor in NetVM to break isolation, are quite unlikely in Qubes.

Could you also answer my question about SubgraphOS main use case and threat model?

Is it mainly for anonymous and pseudonymous usage?

If it is designed mainly for everyday use (including non-anonymous use cases like banking, social media, personal/work email, etc.), as it seems to me, I don't quite understand the design choice to enforce all traffic via Tor by default. That seems unnecessary, since anonymity is not needed there, and even dangerous.


Yeah, we agree, actually. Tor probably won't even be the default. We are adding flexibility to network support right now. Soon you'll be able to have just cleartext SGOS, or be able to send sandboxed apps through different paths: one app might exit through a VPN, another through Tor only, another through i2p maybe, etc, enforced by the sandbox.


That sounds great!


> I don't think there are any amnesic features like in Tails nor strong isolation between gateway and workstation to prevent IP leaks like in Whonix.

Subgraph sandboxes run in a network namespace with no direct access to the network or ability to view any of the physical network interfaces on the system. There is no way for an attacker to send network traffic directly or to discover the real IP address of the system without breaking out of the sandbox.


They try to avoid saying it, but it's mostly a patched Linux.


Hi, I'm an SGOS dev. I don't know what you mean by "mostly a patched Linux", but here's what Subgraph OS is so far -- and it's a young project: we have a kernel patched with grsec/PaX/RAP, but we have also developed our own application sandbox framework (namespaces + limited fs + seccomp bpf whitelisting), app firewall, event monitoring subsystem, usb disable on desktop lock (based on grsec), etc. Here's a walkthrough of our sandbox framework:

https://github.com/subgraph/oz/wiki/Oz-Technical-Details


Hey, cool project! Any chance you could give a quick rundown of how this compares to Qubes? Like, the tradeoffs, etc.


You may find this talk between the Qubes, Subgraph and TAILS representatives helpful:

https://www.youtube.com/watch?v=Nol8kKoB-co

I believe Joanna from Qubes also set-up this forum for discussions on secure operating systems:

https://secure-os.org/

Joanna also talked a bit about the trade-offs between the two here:

https://secure-os.org/pipermail/desktops/2015-October/000002...

I believe initially there were some discussions to integrate Subgraph into Qubes as a TemplateVM (just like the Debian VM, Ubuntu VM, etc), but the Subgraph guys thought grsecurity wouldn't work well with Qubes OS. I think that situation has improved, and there is some progress in making grsecurity work with TemplateVMs and AppVMs.

https://twitter.com/Phoul/status/801114260881424384

However, even if it does work, I'm not sure how excited the Subgraph guys are about making their OS "just" a Qubes OS TemplateVM. They may think that's the wrong business strategy for them as a company. I'm just saying this as someone watching from the outside. They may actually not believe that at all.

However, I did also notice the relationship between the two projects got a little colder, at least for a while, and in public, after Edward Snowden called Qubes his preferred secure OS.


It was me, at Subgraph, that setup the Secure Desktops mailing list and website. We hope to collaborate more with other projects in the future. There is already interest from other projects in things we've built for Subgraph.

As for Subgraph in Qubes, being a template OS, etc, maybe later? We haven't even had a real release yet and are still building. I wouldn't recommend it anyways unless all of the Qubes VMs have hardened kernels by default.


Having a subgraph TemplateVM will get easier with Qubes 4.0, as Qubes switches over to HVM (I think just HVM with PV drivers, PVH in Xen is not ready yet). grsecurity and PaX do not work with paravirtualization, which is pretty limiting in terms of memory management and such (It also opens up some vulnerabilities, which is why Qubes is switching).


Thanks for the interesting stuff.

Even if Snowden called out Qubes, you have to decide on your own security level which system is best for your needs.


I suspect that it doesn't at all seeing as how it is a Linux distribution with some nice features for security and privacy baked in by default.


Many believe that Qubes places an unreasonable amount of trust on Xen.

Read: https://www.qubes-os.org/doc/vm-sudo/

The issue isn't only with trusting Xen, but trusting it so much that it makes all other security features meaningless.


No updated iso since June. Any plans for an update soon?

Also, shouldn't you just use Wayland for the stable 1.0 release? Why even bother with X11 at this point?

Do you plan to support flatpaks as well?


The new ISO is coming very soon. We've just been busy with consulting we do to support the project and there were some issues with gpg2.

Wayland is one huge reason why we aren't even calling this "beta". Wayland is absolutely part of the plan. We are working on integration now.

Flatpaks: probably not. Different vision. Flatpak is an 'appstore' type model, not sure we will want that in Subgraph OS, but it's worth a deeper investigation than the thought I've given it so far. There are things in Flatpak that we can benefit from, such as the UI advantages of "Portals". We'll probably be adding support for it to Oz.


Good to hear you're considering it. It may be worth looking into appimages as well. They don't seem to focus as much on security, but perhaps their isolation is better? Flatpaks seem to share quite a bit with each other, and I worry it may create another X11-situation. Flatpaks may still be better overall, though, if they can also have good isolation.

I doubt you should even bother with snaps. They don't seem to be that well supported outside of Ubuntu, and I doubt they will ever be.


We use Xpra to do desktop isolation for now, by the way. It's similar to Qubes' display mechanism, but we didn't write it, and don't really like it as a security control. It just serves as a PoC until we can jump to Wayland.

Therefore Subgraph OS isn't in the worst possible X11 situation, which is the default for every desktop Linux except, I think, the most recently released Fedora.

Re: iso / updates, we have rolling updates. Users are kept current if they install the OS and regularly apply updates and do dist-upgrades.


Hi! Why did you choose the name Subgraph OS?


It's named for their company, which does other things too.


Yup. Subgraph is a nearly 7-year-old open source software company. We wrote a web scanner (Vega) that's sadly neglected, though still used regularly by thousands of users. We also do consulting, like pentesting, etc.

The name was inspired by work I was following at the time (10 years ago?) by Halvar Flake etc, on applying graph theory methods to reverse engineering / runtime analysis.


It's catchy, sounds technical, and non-technical people can still spell it. Great name. :)


It is a precise technical term, and using it for a company name is unsettling to a graph theorist. But, it's probably cool for almost everyone else.


When the domain was registered by me and idea originally hatched, the vision was to have the company focused on reverse engineering and the application to it of ideas from graph theory. But things change. Name stuck.


Linux = kernel.


After reading the article, and reading replies to you, I still have to guess whether this is a Linux kernel or something else. And I still don't understand why they don't mention this on their site. The talk of "kernel with certain patches" has me guessing it is indeed Linux.


They are practically screaming grsec/PaX from the rooftop. It's even in the diagram! What else could they possibly be?


imho the Qubes approach is more viable and exposes far less attack surface. Qubes is also, contrary to its reputation, a very usable OS (with KDE in dom0, at least).


Subgraph does lots of things Qubes doesn't, and this will only increase over time. For example: an experimental Subgraph OS feature[1] is to, by mandatory sandbox policy, prevent a specific application from connecting to anything except TLS endpoints, or specific TLS endpoints while adding certificate pinning outside of an application and performing extra-app validation. Could be useful over Tor or public wi-fi, right? Qubes is not going to build this, yet I am running a prototype of it on my SGOS dev laptop.

You can compare the sandbox technologies: hypervisor vs. Linux kernel containment facilities, but we are doing a lot more than that. There's no doubt that there will be many that want to run Subgraph or parts of Subgraph inside of Qubes for this reason, though we believe Qubes needs strong exploit mitigation throughout, in every VM, and I think wouldn't recommend it until that is the default.

1. Screenshots of Oz' coming TLS Guard, which proxies the TLS handshake to ensure correct TLS session & enforce other policy req's:

https://twitter.com/attractr/status/783013051335319553

https://twitter.com/attractr/status/783521883715203073

https://twitter.com/attractr/status/786235879111090176

etc

(edited, formatting)
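To make the pinning idea concrete, here is a minimal sketch of the check such a guard might perform outside the sandboxed application. All names and the policy format are hypothetical; the actual TLS Guard lives in Oz and proxies the full handshake rather than just comparing hashes.

```python
import base64
import hashlib

# Hypothetical policy table: hostname -> accepted pins. A real guard
# would load this from the application's sandbox policy file.
PINNED: dict[str, set[str]] = {}


def cert_pin(der_bytes: bytes) -> str:
    """Pin = base64(SHA-256(certificate in DER form))."""
    return base64.b64encode(hashlib.sha256(der_bytes).digest()).decode()


def pin_ok(hostname: str, der_bytes: bytes) -> bool:
    """Accept the handshake only if the presented certificate matches a
    pin recorded for this host. Because the guard runs outside the
    sandbox, a compromised application can't skip or tamper with it."""
    pins = PINNED.get(hostname)
    return pins is not None and cert_pin(der_bytes) in pins


# Example: record a pin at "pin time", then validate a later handshake.
server_cert = b"stand-in for the server's DER-encoded certificate"
PINNED["mail.example.org"] = {cert_pin(server_cert)}
assert pin_ok("mail.example.org", server_cert)
assert not pin_ok("mail.example.org", b"some other certificate")
```

The key design point is that the policy check happens in a separate process: the app only ever sees an already-validated tunnel.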


" Qubes is not going to build this, yet I am running a prototype of it on my SGOS dev laptop."

You can do that in Qubes or the architecturally superior GenodeOS. Genode is FOSS, so nothing stops you. Any programs computing with secrets can run in an isolated partition to prevent leaks. The same goes for protecting the integrity of backups, as in some partitioned filesystems. And you get the benefits of Subgraph on the inside.


Sorry for being OT but do you mind explaining a bit what exactly does Genode do/is? I read about it in their web page but I'm not sure I understand the difference between "an OS" and "an OS framework".

It seems that they are trying to create an architecture with all components compartmentalized, but it says it can run Linux and Windows so I'm guessing it's virtualizing something at some point.

Also, they say they have a reference implementation of the architecture, so I guess the real work is defining that architecture and making an API compatible with what modern OS's do so later on they can jump on board and make it Genode compatible?

It sounds very interesting but it feels like I'm misunderstanding a lot and thus hitting a wall here due to lack of knowledge so any pointers are appreciated :)


There's a lot of conceptual similarity to Nizza architecture that's explained thoroughly in this paper:

https://os.inf.tu-dresden.de/papers_ps/nizza.pdf

From there, Genode is a different take on the same concept, even using some of the same components (e.g. the Nitpicker GUI). In both, there are various integrated components that might be used in other projects. A specific set of components together makes up a desktop; a different set might make an appliance, or a TV box. It's much like how you build your Linux distros from packages and source files, except these components run on the microkernel, communicating with each other and operating within a resource-management scheme. That scheme is hierarchical: each process spawns others with control over their memory and resources. It includes ways to let them communicate such that your attack surface is mostly restricted to that composition.
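The hierarchical resource scheme is easy to picture with a toy model (this is an illustration, not Genode's actual API): each component can only delegate quota carved out of its own allotment, so nothing below the root can ever hold more than its ancestors explicitly granted.

```python
class Component:
    """Toy model of hierarchical resource delegation: a parent can only
    grant a child quota taken out of what it itself holds."""

    def __init__(self, name: str, quota: int):
        self.name = name
        self.quota = quota          # e.g. bytes of RAM this component owns
        self.children: list["Component"] = []

    def spawn(self, name: str, quota: int) -> "Component":
        if quota > self.quota:
            raise ValueError("parent cannot delegate more than it holds")
        self.quota -= quota         # delegation transfers, never duplicates
        child = Component(name, quota)
        self.children.append(child)
        return child


# The root owns all physical resources; everything below holds only
# what was explicitly carved out for it along the way down.
core = Component("core", quota=1024)
init = core.spawn("init", 512)      # core keeps 512
gui = init.spawn("nitpicker", 128)  # init keeps 384
```

The invariant this enforces is that a misbehaving component can exhaust only its own budget, never the system's.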

Feske, the designer, gives specifics here:

http://www.slideshare.net/sartakov/genode-os-framework

Nitpicker by itself is worth looking at if you're unfamiliar with trusted paths. Too few systems have a good one.

https://os.inf.tu-dresden.de/papers_ps/feske-nitpicker.pdf


Awesome! thanks a mil.

I'll read up the linked resources.


No, Qubes hasn't written the TLS client handshake proxy to enforce the policy. Out of scope. That's what I meant, and it's just one example of the things above the level of "Qubes" or "Oz" plumbing that makes Subgraph OS what it is.


> imho the qubes approach is more viable and exposes far less attack surface.

I don't know what you base that opinion on, since it's not an easy comparison to reason about. One metric you could use is actual vulnerabilities. In the last year there have been several hypervisor escape vulnerabilities that compromised Qubes OS VM isolation completely, most (all?) of which had been present in Xen for the entire lifetime of the Qubes project.

By contrast during the same period only one Linux kernel vulnerability (DirtyCow) affected Subgraph sandboxed applications, and it would only have been exploitable using techniques which have not been disclosed in any public exploit so far.


I honestly find the XFCE desktop more usable.


I greatly prefer a desktop that has searchable menus and decent HiDPI support (although the older version in F23/Qubes dom0 isn't quite plug-and-play). Personally I think KDE also looks better. XFCE's window decorations especially are just so... 90s "design".


Hello Joanna.



What else would you expect it to be?


Maybe a real secure kernel, such as SeL4 or LynxOS.


The tipoff that it's not L4 is that it's a desktop OS that runs applications.


You can do a desktop on a microkernel that runs Linux in user-mode or with hypervisor support. Critical stuff stays outside directly on microkernel. It's what every vendor of separation kernels does. Two examples from commercial and FOSS that's similarly alpha:

Sirrix TrustedDesktop on Turaya:

https://www.sirrix.com/content/pages/trusteddesktop_en.htm

Turaya's architecture:

http://www.perseus-os.org/content/pages/Overview.htm

FOSS alternative that they already use to develop itself:

https://genode.org/


Have you used any of those commercial offerings? I've honestly never heard of them before. Can I, as a regular consumer, go purchase one of those operating systems and use it on my laptop?


You can go and use Genode right now. There's no installer (to my knowledge) -- you'll have to build the OS by hand. If the area of secure or capability-based OSes is interesting to you, Genode is the best playground for that. The DROPS/Dresden folks have been working in this area for a long time.

Genode is largely kernel agnostic, being an "Operating System Framework" -- you can run it on Linux, variants of L4, seL4, Muen, and more.


You probably have to buy hardware from them if the drivers live on the microkernels, because I doubt they're doing many ports. I haven't used the product, as I had custom stuff. Here's a video of the academic prototypes that both the commercial stuff and Genode drew from, if you're wondering about performance. That's on a Core 2 Duo @ 1.6GHz. The L4Linux VMs were fast.

https://www.youtube.com/watch?v=x9IwtY9gqCg


I rather like the graphic with this post. Is that based on pixel art, programmatically combined to resemble orthographic projection, or is it generated by WebGL? (Using one of the available block libraries in JavaScript.)


It was drawn by hand using a vector illustration tool, based on a sketch that I had made. We think it is very cool and will produce more conceptual illustrations in this style.

The artist goes by Sephy Ka: http://www.sephyka.com/box-stories/


I'll give it a shot. I've been feeling quite vulnerable on 16.04 due to the absurd number of unfixed bugs. I have a couple of questions:

- it's mentioned that it does not have access to documents and downloads within the user folder. When it wants/needs read access, how am I told?

- if it doesn't have access to these folders, does it only write to its own subset?

- is it possible to make my home downloads folder an aggregate of the application downloads?

- when uninstalling/purging, since it's sandboxed, does it delete all of the content or keep it? Can I force removal as well?

- how does subgraph deal with shared services/folders/info? Can I share a service with another user? Can I share the network setting modifications with other users?

- how can I prevent an application from using the network without my knowledge?

- are there tools like nethogs/top for Subgraph that can take advantage of the compartments to show a more realistic view of what's going on?

I think this has a lot of potential!


These are great questions. We have a GNOME Shell plug-in to move files into sandboxes while an application is running. Certain applications also have shared directories (e.g. "Downloads/TorBrowser", "Documents/LibreOffice"). This is a UX work in progress: neither of these is adequate on its own, though together they're workable.

re: Applications and network access: we have an application firewall, unique to Linux-based OSs. It's basically Little Snitch for Linux. There is a screenshot here:

Keep in mind that the project is very young. We are just getting started, tbh. With questions like these you should idle in our IRC channel where we talk about all of this stuff: OFTC/#subgraph.
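For intuition, here is a minimal sketch of the rule-matching idea behind a Little Snitch-style application firewall. The rule format, paths, and the "prompt" fallback are invented for illustration; this is not Subgraph's actual implementation, which identifies the connecting process at the kernel level.

```python
from fnmatch import fnmatch

# Hypothetical rule table: (executable, destination pattern, port, verdict).
# Port 0 means "any port". A real application firewall would map each
# outbound flow back to the process that opened it and pop up a prompt
# for anything that matches no rule.
RULES = [
    ("/usr/bin/torbrowser", "*.onion", 443, "allow"),
    ("/usr/bin/evince", "*", 0, "deny"),   # document viewer: no network at all
]


def verdict(exe: str, host: str, port: int) -> str:
    """First matching rule wins; unknown flows are escalated to the user."""
    for r_exe, r_host, r_port, action in RULES:
        if exe == r_exe and fnmatch(host, r_host) and r_port in (0, port):
            return action
    return "prompt"
```

The interesting property is default-deny-with-prompt: an application the user never granted network access can't quietly phone home.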


Will do! I'm very excited for this. Containerization and safety are very important problems to me. I'm not particularly interested in running a Docker instance or a VM just to use an application. And if I do, I'd prefer it to be automated.


I come late to the party, but I installed the ISO on VirtualBox. It seems that after boot I can't apt update or do anything network-related.

I tried disabling the firewall too, to no avail.

I see that /etc/resolv.conf only has nameserver 127.0.0.1. I guess that's OK (resolution done through Tor, maybe?), but I wonder how to activate the network :/

Is there a way to discuss things related to Subgraph?

thanks


Looks interesting. I'm looking for a more secure minimal OS, for use with backend services. Would it make sense to use it as a server OS, or is it primarily for desktop use?

Also, is there a docker image that is ready to go? That would be immensely useful.


It's a desktop OS, and the hardened kernel is an important part of the project (i.e. no docker image).


"Adversary-resistant" is an extremely bold claim. While the architecture does look promising, and (at least intuitively) reasonably-designed, I think it's a bit too soon to make a call about adversary resistance.


The word "resistant" is qualitative, so they're not really making a bold claim.


Yes, just like water-resistant vs water-proof. Subgraph OS is like hacking-resistant, not hacking-proof.


I need to know who/how they made the graphic image. I love it.



For a second I thought this was about a new Linux distro for servers that featured outage-resilient services and such.

Oh well, this is good enough I 'spose.


Care to put out a vm image, invite folks to hack it?

That is a lot more interesting.


You can install/run the downloadable image in a VM. Lots of people do. We test with VMware Fusion/KVM/VirtualBox.



