I have felt spoiled by GNU Guix honestly. It has documentation that is comparable to Gentoo or Arch, power over your system that generally matches Gentoo (though a system-wide equivalent of USE flags is not yet available, individual packages can be configured), the ease of maintenance of a system like Fedora Silverblue, and one of the most approachable communities I have ever dealt with. I highly recommend playing around with it :)
Guix started as a fork of Nix, but I believe all original Nix code has been replaced with Guix-specific code at this point. Core functionality is pretty comparable between the two of them. Two differences I can point out immediately are that GNU Guix has a built-in user home configuration management system[0], while Nix depends on an external module to do the same[1], and that NixOS, like most distros, uses systemd for its init system, whereas GNU Guix uses Shepherd[2]. Honestly the latter hasn't really been much of a point of note in my experience, but it's probably worth mentioning as a difference.
Regarding Free vs non-Free packages: this can be a pain point if your system requires non-Free packages to function on GNU Guix. I (like most other Guix users, I would guess) use the nonguix[3] channel for additional packages. One of my systems requires the mainline Linux kernel for proprietary Ethernet drivers (included in Linux because of GPLv2 :P), so I had to create a custom installation ISO with that kernel. This probably sounds horrifying, but creating a custom installation ISO is actually very easy in Guix (and probably Nix as well). It's the only distro I've ever done it for.
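To give an idea of how small that job is: a custom installer image boils down to a short Scheme file. This is only a sketch, and it assumes the nonguix channel is configured; the `(nongnu ...)` module names and the `linux`, `linux-firmware`, and `microcode-initrd` bindings come from nonguix, not from Guix proper:

```scheme
;; custom-installer.scm -- sketch of an installer image with the
;; mainline kernel.  Assumes the nonguix channel is available.
(use-modules (gnu system)
             (gnu system install)
             (nongnu packages linux)
             (nongnu system linux-initrd))

(operating-system
  (inherit installation-os)          ; start from the stock installer
  (kernel linux)                     ; mainline kernel, not linux-libre
  (initrd microcode-initrd)          ; CPU microcode updates
  (firmware (list linux-firmware)))  ; non-free firmware blobs
```

Then something along the lines of `guix system image -t iso9660 custom-installer.scm` should spit out a bootable ISO.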
But between the main Guix channel, nonguix, and Flatpak, it's very uncommon that I cannot find what I want. According to Repology[4], Guix is the #5 repository by package count. And for the few packages I wanted that were missing, I have actually been able to contribute package definitions upstream to Guix, since the process is very easy and well documented (also the only distro I have ever done that for).
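To give a flavor of what such a contribution looks like: a package definition is a small Scheme record. Here is a hedged sketch modeled on the `hello` example from the Guix manual; the hash is a placeholder, since the real one comes from running `guix download` on the tarball:

```scheme
;; Sketch of a Guix package definition, modeled on the manual's
;; "hello" example.
(use-modules (guix packages)
             (guix download)
             (guix build-system gnu)
             ((guix licenses) #:prefix license:))

(package
  (name "hello")
  (version "2.12.1")
  (source (origin
            (method url-fetch)
            (uri (string-append "mirror://gnu/hello/hello-"
                                version ".tar.gz"))
            (sha256
             ;; Placeholder hash; obtain the real one with `guix download`.
             (base32
              "0000000000000000000000000000000000000000000000000000"))))
  (build-system gnu-build-system)
  (synopsis "Hello, GNU world: An example GNU package")
  (description "GNU Hello prints a friendly greeting.")
  (home-page "https://www.gnu.org/software/hello/")
  (license license:gpl3+))
```

Most of the work for a simple package is filling in exactly these fields; `guix lint` and `guix build` then check the result before you send it to the mailing list.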
An interesting advantage of Guix over Nix is having better options for runtime isolation. Nix is not well developed in this area yet, although one can always use bwrap or AppArmor to implement something ad hoc.
Besides, Guix uses Scheme everywhere, whereas Nix is a mix of the Nix language and lots of shell code. The Nix language is a pretty elegant lazy functional language, and yields very compact and clean code. But if you want to do complex things, Scheme might be easier.
Although it doesn't seem to have much in the way of runtime isolation of the Guix source files themselves. From what I can tell, it runs the Guile scripts that evaluate to manifests and such as your own user, with all of the authority associated with that.
So although Guix provides good mechanisms for isolating applications at runtime, you do have to put a lot of trust in the channels you're using.
(This is something I would like to see improved as a Guix user.)
Thanks. So there are two concepts where isolation is mentioned: the --pure and --container options. But the documentation is still very light on details and doesn't really answer what kind of security boundary or sandbox (if any) the container feature provides.
How might it compare to e.g. nsjail[1], Firecracker[2], or Flatpak's sandbox[3]? These could also be good benchmarks for the documentation.
Good point. My main distro is NixOS, so I am not too familiar with Guix. After a quick glance at the code, Guix seems to be similar to bwrap et al. as it uses kernel namespaces for sandboxing [1].
I think the --pure option is not really for (security) sandboxing, but rather for making sure your programs do not use any dependencies that are not explicitly declared, i.e. to ensure referential transparency.
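A rough illustration of the difference, with a hypothetical minimal manifest:

```scheme
;; manifest.scm -- a minimal manifest for illustration
(specifications->manifest
 (list "coreutils" "grep"))
```

`guix shell --pure -m manifest.scm` only scrubs environment variables such as PATH before starting the shell, so the resulting environment still sees the whole filesystem; it guards against undeclared dependencies, not against hostile code. `guix shell --container -m manifest.scm` additionally puts the process in fresh kernel namespaces with only the requested store items mapped in. Only the latter is anything like a sandbox, and even then it is namespace-based isolation in the bwrap mold rather than a hardened boundary like a VM.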
This is not actually correct. Guix only ever used a copy of the nix-daemon, because there's no point in reinventing the derivation format or the low-level implementation of functional package management; no other part of Nix/nixpkgs has been used to make Guix.
> What are the technical merits of Guix compared to Nix?
It doesn't boot on non-libre hardware.
In all seriousness, it uses Lisp. That is a _substantial_ advantage over Nix, which uses a home-grown language. As you can see from the front page today, there are roughly a million Nix wrappers addressing the fact that people don't want to learn nixlang.
but I'm with you on "why in the holy hell reinvent the wheel". I need to dig back into Guix and see if it supports moving the store out of /gnu, since on Cupertino's OS one cannot write to / without a bunch of stupidity <https://nixos.org/manual/nix/stable/installation/installing-...>
Lisp usually means "a language in the Lisp family", more than it means "Common Lisp specifically", these days. I don't have a strong opinion on whether that's good or bad, leaning slightly in the direction of good. Just sharing the observation.
Lisp usually means a Lisp which identifies as such: Common Lisp, Emacs Lisp, ISLisp (which is an ISO standard), AutoLisp, Interlisp, ...
Languages like Scheme have their own standards (RNRS, IEEE Scheme, SRFI, ...).
It's perfectly fine to draw a distinction between Lisp and Scheme, for example. In the Scheme literature, "Lisp" often means the earlier, older language before Scheme: a language which emphasizes symbols, property lists on symbols, lists, imperative programming, several namespaces, dynamic binding, ... Common Lisp, even though it is younger than Scheme, preserves the Lisp features from before Scheme. Other Lisp languages, like Emacs Lisp and ISLisp, are also backwards compatible.
There is also an idea of a wider family of Lisp-inspired languages, but that has very little practical meaning because they are often very different languages: Logo, Dylan, Racket, Clojure, newLisp, Hy, LispE, ... JavaScript, R, ...
If Scheme is a Lisp dialect, then it is a Lisp; this is commonly how the word is used. You may prefer the narrower, more prescriptive definition you've provided, and that's fine I suppose, but as a matter of descriptive linguistics, it is common to refer to Scheme that way as well.
The distinction has not been "provided by me"; the discussion around Lisp family vs. dialects has gone on for several decades. We had the same discussions on comp.lang.lisp decades ago, while Scheme users then had their own comp.lang.scheme.
Lisp dialect? Sure, long ago. Now Scheme is its own language. It's like the uncle who no longer lives in Germany and no longer speaks German, but lives in England, where they speak a Germanic language called English, which again has its own dialects. https://en.wikipedia.org/wiki/Germanic_languages English has its roots in old Common Germanic (no joke, see https://en.wikipedia.org/wiki/Proto-Germanic_language). Here we have the difference between Germanic and German. It's basically the same between the Lisp family ("Lispic") and Lisp.
There are many other places where authors in the Scheme literature have emphasized the difference between (original) Lisp and Scheme. They tell for example how things are done in Scheme vs. Lisp.
The SICP foreword says "Lisp changes. The Scheme dialect used in this text has evolved from the original Lisp and differs from the latter in several important ways, including static scoping for variable binding and permitting functions to yield functions as values."
If there are enough of these changes then Scheme begins to be its own language family.
Typical user question: "should I learn Lisp or Scheme?", answer from Kent: "learn both".
Then he explains:
"Others divide up this space differently than I, but for most purposes, I personally regard them as distinct languages, not mere dialectal variations, although plainly they are from what I would call the same language family."
That's my view also.
The difference was once described like this:
Schemer: "Buddha is small, clean, and serious."
Lispnik: "Buddha is big, has hairy armpits, and laughs."
But the Common Lisp dialect also "has evolved from the original Lisp and differs from the latter in several important ways, including static scoping for variable binding and permitting functions to yield functions as values".
But with an effort to be backwarts (sic!) compatible. Code moved from Maclisp to ZetaLisp to Common Lisp.
One can write (mapcar (function (lambda (...) ...)) ...), but the (dolist (...) ...) is still there. Even PROG is still there.
For Scheme, code needed to be rewritten, especially since Scheme changed more: syntax, a different macro system, different control flow and control-flow abstractions, fewer built-in features, old features added back by libraries, ...
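The flavor of those differences shows up even in one-liners. Scheme has a single namespace, so functions are ordinary values and no `#'` is needed:

```scheme
;; Scheme: one namespace, functions passed like any other value
(map (lambda (x) (* x x)) '(1 2 3))            ; => (1 4 9)

;; Common Lisp equivalent, with its separate function namespace:
;;   (mapcar #'(lambda (x) (* x x)) '(1 2 3))
```

Even the name of the mapping function differs (`map` vs `mapcar`), which is the kind of thing that makes mechanical porting between the two annoying.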
All these wrappers exist because nixpkgs doesn't have non-awkward ways to deal with the stuff people deal with often. Nix the language is probably the least of the issues; it's maybe a few percent of the chaotic Nix ecosystem. Most of the logic (which desperately needs proper documentation) is in nixpkgs and the tooling.
There's also a social element here: communities around software matter, and I for one (being an EU developer, for reasons of professional ethics) can no longer justify associating myself with the kind of developer that still thinks it's acceptable to use GitHub, especially for something as fundamental as Nix(OS) (and I'm pretty sad about Rust too); at this point developers have had enough time to migrate even the most complicated projects off of it.
I used Nix for a few months several years ago, so their documentation is probably better now than it was when I used it, but yeah, I find the Guix documentation to be much more pleasant to use than the Nix documentation was back then.
This is partially because Guix uses an existing language rather than a home-grown one. Guix was also my entry point for learning Lisp, and the abundance of learning material for the language will probably always dwarf that of Nix's home-grown language. I actually watched all of the SICP lectures; I found that to be an incredible resource for getting my footing.
It's also important to note that Guix's documentation is readily available on the system via the GNU info utility, which, if you're not familiar with it, is like man but on steroids. If you use Kagi, I also made this lens[0] that lets one search the web version of the documentation, the mailing list archives, and the IRC logs all at once using !guix.
Having spoken to some people involved with the project, my understanding is that there's a significant schism between the factions saying "Flakes are unfinished software, and our documentation should not focus on them" and "Flakes are what the majority of the userbase is using, and we should document them". I hope this will eventually get better if Flakes are ever stabilized.
This is a good take on the topic, focusing on intrinsic identifiers. Guix has its own approach, but the article also mentions SWHID[1] and OmniBOR[2], which are both contenders to fill that space.
SWHID is on its way to becoming an ISO standard[3]. OmniBOR is quite new but a highly interesting approach.
Because it does not directly solve any problem for Guix, it's fair enough that they do not talk much about extrinsic identifiers, except to rightfully dismiss CPE as "showing its limits", which is putting it mildly.
I think there is a need for an extrinsic identifier standard beyond CPE (in addition to intrinsic identifiers).
While intrinsic identifiers allow us to pinpoint artifacts exactly, sometimes that is not what we want. Sometimes we deliberately want to talk about sets of artifacts together, like variants or versions of an artifact. One contender there would be purl[4] (as in "package URL", not the older "persistent uniform resource locator").
"we believe binary artifacts should instead be treated as the result of a computational process; it is that process that needs to be fully captured to support independent verification of the source/binary correspondence. "
So binary artifacts should verifiably come from source code, which could be accomplished by signing them with a signing mechanism specified in the source code?
Since this is coming from the GNU folks, they naturally have their inclinations towards open-source software, but I'd argue (and they probably would too) that reproducibility is a much stronger invariant than code signing alone.
Bootstrapping everything from a tiny first-stage compiler and getting bit-identical compiled outputs is a much higher level of confidence than PKI offers, as PKI can be cracked, stolen, made to sign things it shouldn't, etc. Even if the signature is legit, it doesn't help you against insider risk (e.g. internally added backdoors) in closed-source software.
These are all things governments (should probably) care about.
Well, Guix sidesteps that problem by (rightly) pointing out that Intel microcode updates are non-free software, and thus they aren't included in the system. If one wants those updates, one has to apply them oneself, usually by using a software channel that provides ways to use non-free software, which means the user makes a conscious choice to use non-free stuff instead of it being handed down from on high.
It might not be a satisfying answer, but oh well. One can complain at Intel about it.