I loved the idea (git everything!) until I read this:
"How do I remove a package and all of its dependencies? [...] The package manager does not do recursive dependency removal on removal of a package. This error-prone automation will not be added to the package manager. "
I made my own 'autoremove' in about 15 minutes for kiss. It's up in my repo https://github.com/jedahan/kiss-repo, and relies on having a package with all your system targets (in my case, named after my hostname) as dependencies:
while kiss orphans | grep -qv "$HOSTNAME"; do
    kiss remove $(kiss orphans | grep -v "$HOSTNAME" | tr '\n' ' ')
done
This was much easier to implement in kiss than other package manager extensions in previous distros I've maintained packages for (gentoo, exherbo, arch, debian).
Yuck. The only way out of dependency hell is to disallow it: allow multiple versions/different configurations of packages side by side and GC the ones that are no longer referenced. This pattern of "only one global package for everything" is a failure. Too often, naïve designers cut things because they don't understand them, or copy & paste because they don't understand what failure modes exist and what solutions or alternatives are possible.
Habitat (hab), Nix, and others do side-by-side (SxS) package management, IIRC.
Everything Should Be Made as Simple as Possible, But Not Simpler - possibly paraphrasing Einstein
> The only way out of dependency hell is to disallow it by allowing multiple versions/different configurations of side-by-side packages (...)
Another elegant way, for a more civilized time, is to simply disallow dependencies. If none of your packages has any dependencies, then no dependency hell is ever possible. This is possible and easy with static executables (for binary packages), and by embedding the interpreter of packages written in scripting languages.
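As a rough sketch of the static-executable idea (hypothetical hello.c; musl-gcc assumed installed):

  # Link everything in at build time so the "package" is one file
  # with no runtime library dependencies.
  musl-gcc -static -Os hello.c -o hello
  file hello    # reports "statically linked"
  ldd ./hello   # reports "not a dynamic executable"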
Static binaries still have dependencies, they just embed them all into one file. That's not generically possible - as the most trivial example, consider a bash script which runs other executables. If you're trying to solve this problem, you need to actually solve it for all real cases; otherwise your solution breaks down and you're back where you started, in dependency hell.
App::FatPacker will turn your (pure perl) script and its dependencies into a single file for deployment.
App::staticperl will build a static binary with the perl+C dependencies built in.
A lot of the time it's better to use plenv + Carton, mind, but if you want a single file for deployment, that's a solved problem in perl land too and has been for years.
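For illustration, fatpacking is roughly a one-liner (hypothetical script name; see the App::FatPacker docs for the full trace/pack workflow):

  # Bundle the script and its pure-perl dependencies into one file.
  fatpack pack myscript.pl > myscript.packed.pl
  perl myscript.packed.pl   # runs anywhere with a perl interpreter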
This might seem like a limitation. But you can provide a base system with a toolchain & xorg with known dependencies and a limited set of packages (like a BSD base system). Then, for every big and complex app, clone that base system and build the app in a chroot/sandbox or container. That way you get the app, don't pollute the base system, and uninstalling is just rm -rf.
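A minimal sketch of that workflow (paths and build commands hypothetical):

  # Clone the known-good base system and build the app inside it.
  cp -a /base-system /builds/bigapp
  chroot /builds/bigapp /bin/sh -c 'cd /src/bigapp && make && make install'
  # The base system stays pristine; uninstalling is:
  rm -rf /builds/bigapp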
On every distro, uninstalling packages never really reverts the system to its previous state; it always leaves some junk behind.
By and large yes. However, even on NixOS there can be various leftovers in the form of state files in /var and configuration files in /etc (outside the majority, which are managed through Nix). Unless you nuke most of the root filesystem on every boot [1].
That's too much of a simplification. In NixOS you still have global configuration files, such as /etc/fstab. However, they are symlinks to Nix store paths that are associated with the current generation.
However, NixOS installations can also end up accumulating configuration and state files that stick around when they are not defined declaratively. E.g. if you enable ssh, host keys are generated in /etc/ssh. Even if you disable ssh, these files will stick around.
You can avoid such accidental state [1], since NixOS will happily boot from a filesystem with just /boot and /nix and reconstruct the rest upon system activation. But it is quite a bit of work, since you need to manually specify what state you want to preserve (e.g. SSH host key files). Also, it currently does not work nicely with some systemd units that barf out if you make /var/run entries symlinks.
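A plain-shell sketch of what preserving that state looks like (paths illustrative; on NixOS you would typically script this via system activation):

  # Keep SSH host keys on a persistent volume; the rest of /etc is
  # reconstructed by activation after the wipe.
  mkdir -p /persist/etc/ssh
  cp -a /etc/ssh/ssh_host_* /persist/etc/ssh/   # one-time migration
  ln -sf /persist/etc/ssh/ssh_host_ed25519_key     /etc/ssh/
  ln -sf /persist/etc/ssh/ssh_host_ed25519_key.pub /etc/ssh/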
I’m just getting started with Nix, with the explicit goal of having my entire system defined in a private, remote git repo. I want to be able to rapidly re-provision my entire user environment on a new machine, including applications, preferences, etc.
For the moment I’m doing this on macOS with the Nix package manager. I’ll eventually move to NixOS. I tried to run NixOS in VirtualBox, but couldn’t get screen resizing to work despite using the official ISO, which is supposed to have the appropriate extensions installed.
My current hurdle is exactly the topic of this thread: non-binary configs. For example, what am I supposed to do with .zprofile? I think I’m supposed to write a custom Nix derivation for ZSH that includes any and all customizations. I’m concerned that might cause problems with MacOS system ZSH. I can probably fix that with a custom login shell?
Anyway it’s fun, but complicated and diversely documented. Gonna take a while to sort it all out.
> My current hurdle is exactly the topic of this thread: non-binary configs. For example, what am I supposed to do with .zprofile? I think I’m supposed to write a custom Nix derivation for ZSH that includes any and all customizations. I’m concerned that might cause problems with MacOS system ZSH. I can probably fix that with a custom login shell?
I use both NixOS and macOS. You can take two routes: 1. you can continue using Apple's /bin/zsh and just use a .zprofile generated using Nix (e.g. home-manager). Generally, the differences between zsh versions are not that large and it just works. This is what I have been doing with my Mac. 2. You could change your shell, either system-wide, or just for Terminal.app to ~/.nix-profile/bin/zsh.
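Route 2 on macOS looks roughly like this (assuming zsh is installed into your Nix profile):

  # Register the Nix-provided zsh as a permitted login shell, then switch.
  echo "$HOME/.nix-profile/bin/zsh" | sudo tee -a /etc/shells
  chsh -s "$HOME/.nix-profile/bin/zsh"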
> I’ll eventually move to NixOS. I tried to run NixOS in VirtualBox, but couldn’t get screen resizing to work despite using the official ISO, which is supposed to have the appropriate extensions installed.
If you have some leftover hardware, try it! NixOS is a different experience altogether and cannot be paralleled by Nix on macOS or a Linux distribution. Being able to declaratively define your whole system is insanely cool and powerful. Fully reproducible machines. Also, you can try out stuff without any harm. Just switch back to the previous working generation (or try the configuration in a VM with nixos-rebuild build-vm) if you are not happy.
> I think I’m supposed to write a custom Nix derivation for ZSH that includes any and all customizations.
Nix supports lots of approaches, with a varying degree of "buy in". I wouldn't say you're "supposed" to do one thing or another, although some things would definitely be non-Pareto-optimal (i.e. you could achieve all the same benefits with fewer downsides).
In the case of .zprofile, I would consider any of the following to be reasonable:
- A normal config file sitting on your machine, edited as needed, not version controlled.
- A symlink to a git repo of configs/dotfiles (this is what I do)
- Directly version-controlling your home dir in some way
- Writing the config content as a string in a Nix file, and having Nix put it in place upon rebuild (I do this with files in /etc)
- Having a Nix 'activation script' which copies/symlinks config files into place (this is what I do, to automate symlinking things to my dotfiles repo)
- Wrapping/overriding the package such that it always loads the desired config (e.g. by replacing the binary with a wrapper which prepends a "use this config" flag; a sketch follows below).
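The wrapper approach, sketched outside Nix for clarity (all names hypothetical):

  #!/bin/sh
  # The real binary is moved aside at build time; this shim always
  # injects the managed config before any user arguments.
  exec /usr/lib/myapp/.myapp-wrapped --config /etc/myapp/managed.conf "$@"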
--
The following has nothing to do with your question, but I felt like ranting about a tangentially-related topic; it's not directed at you ;)
I often see "extremism" when Nix is brought up; e.g. if someone wants help managing non-Python dependencies of their Python app, and someone recommends trying Nix, it's often dismissed along the lines of "I don't have time to throw away my whole setup and start over with the Nix way of doing things, even if were better". The thing is, using Nix can actually be as simple as:
(import <nixpkgs> {}).runCommand "my-app" {} ''
  Put any arbitrary bash commands here
''
I often treat Nix like "Make, but with snapshots". Nix 2.0 turned on 'sandboxing' by default, but if you turn that off you can do what you like: add '/usr/bin' to the PATH, 'wget' some arbitrary binaries into '/var', etc. You won't get the benefits of deterministic builds, rollbacks, concurrent versions, etc. but those don't matter if the prior method didn't have them either. Projects like Nixpkgs, and the experimental things people write about on their blogs, aren't the only way to do things; you don't have to throw the baby out with the bathwater.
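For instance, assuming the snippet above is saved as my-app.nix:

  # Build without the sandbox; the builder can then reach the host's
  # /usr/bin, the network, etc. Impure, but sometimes that's the point.
  nix-build my-app.nix --option sandbox false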
I'd say that's either 1) outside the scope of the package manager, or 2) mostly-solvable as long as your package manager allows you to specify "extra files created by the application that I do not install but I will want to uninstall."
That's also not what's being asked for here. The basic request is this: track which packages were manually vs automatically installed, and give the user the ability to remove automatically-installed orphans whose manually-installed reverse-dependencies are no longer installed. This is what APT does and it works fine 99% of the time.
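With APT that looks like:

  apt-mark showmanual   # packages you explicitly asked for
  apt-mark showauto     # packages pulled in as dependencies
  sudo apt autoremove   # remove auto-installed packages nothing needs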
Isn't that a feature? Should my settings be cleared if I uninstall a program and reinstall it later? Should all my LibreOffice documents disappear when I uninstall LibreOffice?
> Instead, the workflow is to remove the single package and then look at the output of the 'kiss-orphans' command to see what can now be removed. This command will list all packages which have no relationship with other packages, otherwise known as orphans.
>
> This list may include Firefox and other "end" software so a brain is required when parsing the list. You'll come to learn the relationships between packages and their dependencies and this will eventually become effortless.
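In practice (package names illustrative):

  kiss remove some-library
  kiss orphans   # review the list; don't blindly remove Firefox
  kiss remove now-orphaned-dep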
> The distribution targets only the x86-64 architecture and the English language.
Deal breaker for me. But the author should be commended for stating it up front. Far too often, only the benefits are presented and none of the flaws, so you have to investigate new tech deeply to find out if it is right for you.
Sorry, but outside the PineBook Pro port, on an RPi4, NanoPi, Odroid, BeagleBone, or whichever other ARM board, YOU are the porter. Which may be simpler, because K1SS, but it isn't readily available...
Well, I have a Radxa Rock (RK3188) and no distro supports it. With ARM, "YOU are the porter" holds for many other distros too, except for some very popular boards. Even Odroid support varies between boards.
Seriously, though, I'm curious how many people running Linux for dev or admin purposes really care about non-English i18n. Even just 'en_US' seems to cause regular problems for me.
My 2¢. (1) When I wanted simplicity and rolling releases / bleeding edge, I found Void Linux, which is indeed very minimal. It does proper package management, though, and can remove orphans. (2) An even more minimal Linux "distro" based on git and tup was discussed back in the day: https://news.ycombinator.com/item?id=16015105
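For reference, removing orphans on Void is a single command:

  sudo xbps-remove -o   # remove dependency packages nothing requires anymore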
I applaud your efforts, and am happy to see more options available. I really hope this takes off. Your choice to not include systemd is a HUGE incentive for me to try this out.
1 super-small nit to pick: the sub-title "..a focus on less is more.."
I took that way too literally and thought it was referencing the tool `less` and was expecting it to have used less as a wrapper for everything. (call it a brain fart, and lack of caffeine)
I FUCKING LOVE YOUR WEB PAGE.
Thank-you so fucking much for reminding people and showing that simple is beautiful. Everything is clear, obvious and legible.
> I wish there was a good standard alternative to bash (sh in this case) for shell programming.
I would say Tcl!
Tcl is one of the most underrated programming languages, and everybody probably has it installed. It has existed for as long as ksh, and since long before bash. It is simple, and powerful enough that the first version of Redis was written in it [1].
MacPorts is written in Tcl (including its ports DSL) and I enjoyed every moment working with it.
Agreed 100% with this. TCL is a super cool language IMO. It's mature, unixy, and has minimal external dependencies. The ecosystem is stable and slow moving. I would encourage anybody to try it out!
The syntax takes some adjustment because it's not lval/rval-based, which is maybe why it's not so widely used now. But if you want a language that's more robust than shell and more minimal than Python, consider TCL for your next project.
Could you explain more re: bugs? Dylan Araps sort of specialises in well-written (ba)sh and IIRC uses shellcheck meticulously. I ask not to be combative, in case you're just making a casual joke - more that I'd like to understand how bugs might creep into (ba)sh code, even if, say, one follows shellcheck and is as competent as Dylan.
I saw that after posting. It's not a criticism of Dylan's work, more a criticism of how difficult it is to write good bash code.
Your comment on shellcheck and Dylan's expertise just confirms it. It's like C: you can write safe C, but it's very difficult.
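One illustrative class of bug that, as far as I know, shellcheck stays silent on:

  #!/usr/bin/env bash
  set -e      # "exit on error" -- except when it surprises you
  i=0
  (( i++ ))   # post-increment evaluates to 0, which (( )) reports as
              # failure, so set -e kills the script right here
  echo "never reached when i started at 0"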
I wish there was a language with the ubiquity of Bash but modern and safe.
»I wish there was a language with the ubiquity of Bash but modern and safe«
I agree. But reaching such a point would be hard. Not impossible, but hard.
I think the main problem would be to gain traction for an alternative. Because if one questions the status quo, then there are tons of alternative paths to go by. But which of them are viable…? It’s not easy to pick a winner in this kind of situation. And most people, even in tech, would not want to spend hours on trying out/developing a new shell that won’t gain any significant traction.
The Bourne shell was developed in the mid 70s. There were not many other scripting languages around then, so Bourne shell became a major player. They could probably have used Lisp if they wanted to, but AFAIK Lisp was not widespread within the Unix world back then.
These days we have many other models for interpreted languages. Ruby, Python, Perl, TCL, Scheme, Clojure, etc etc.
Let’s say someone wrote a Python-inspired shell that aimed to replace sh/Bash. Now you need to gain traction in order to build up a useful and sound ecosystem that has the ability to replace all those shell scripts out there. But how would you convince the Ruby fans or Scheme fans to use that? Therein lies a big challenge.
Getting people to continue using sh is not as hard, since it at least is standard, despite its shortcomings.
It works and you can try it now. The downside is that the implementation is more like a slow prototype, but that's being addressed, and the release I made today has stats about the C++ version (blog post forthcoming).
Ah I'm with you, thanks for the explanation - you're quite right. I've done a fair bit of bash and still I fully expect parts of the syntax to escape my memory. A modern and safe shell would be Oil [0]; ubiquity is a work in progress.
Honestly, PHP is a decent scripting language. It's pretty fast with text processing and you can choose to play fast-and-loose or go in a more type-checked direction.
It's basically my least favorite language for "real" programs, but I think it could find a niche for scripting in the middle of the spectrum between (ba)sh and python.
For quite some time I've been using https://github.com/tarruda/python-ush on personal scripts and it is working very well. If you like python, I recommend giving it a try :)
One thing that irks me is the verbosity of simply renaming file extensions of all files in a directory.
On Windows: ren *.js *.mjs
On Unix: for x in *.js; do mv "$x" "${x%.js}.mjs"; done
Feels like shells should have one-line list comprehensions without the obtuse do/done. Are there any shell enhancements, like zsh or the like, that make simple list operations less verbose?
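zsh ships one: zmv.

  # Pattern-based renames in one line:
  autoload -U zmv
  zmv '(*).js' '$1.mjs'
  # or, with the -W wildcard shorthand:
  zmv -W '*.js' '*.mjs'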
I know about that utility -- still not shell though. The fact that you need a utility to make such a simple task readable points to the shell language being lacking.
By your own logic, your own UNIX solution to the file renaming problem is "not shell" since it uses the mv "utility". Or do you mean rename is not POSIX? The shell language certainly is lacking for almost anything beyond its original purpose of gluing together pipelines/graphs of UNIX commands; it's a largely unintentional DSL from the time before many modern notions of language ergonomics had become established. The point is, there is an array of solutions to reduce the nuisance of having to deal with the language equivalent of a stone knife such as programs like rename or actual general-purpose scripting languages. Sure, you can cut most things with a stone knife with enough time and effort but why would you if you have a bunch of sharp, metal knives within easy reach?
My point is that mv is a compiled C program, not a shell builtin or piece of shell syntax, that you may nonetheless call in a shell script to extend the functionality of the shell. rename is another such compiled C program. To say that it's awkward to batch rename files with mv may be true, but that's a deficiency of an external C program being used beyond its original purpose. Since we're using external programs anyway, you can just use one that's actually designed explicitly for the purpose renaming files.
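For reference, that looks like this (two incompatible variants ship under the name rename):

  # Perl rename (file-rename/prename on Debian-likes):
  rename 's/\.js$/.mjs/' *.js
  # util-linux rename (replaces the first occurrence of the substring):
  rename .js .mjs *.js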
mv can be replaced with the rename syscall and the shell's lack of versatility is still demonstrated. Point taken, though: the prepackaged selection of utilities is arbitrary.
This is brilliant. I also love the testimonials here: https://k1ss.org/testimonials because they pretty much capture the entire Linux community of developers.
But if you contextualize it a bit differently I think you might find it is super valuable.
One of the challenges the Linux community has is that there are so many distributions or 'distros' to choose from. What is more, the feature sets of those distros form a non-planar, non-directed graph of nodes for which it is very difficult to reason about feature content.
Consider for the moment if you could reason about distros like you can reason about genomes. In modern genetics we can look at flora, fauna, bacteria, etc and talk about them as a base class "foo" with specific genetic changes that get them to be a "bar", we can talk about speciation where the genetics are different enough such that hybrids are infertile or impossible.
This effort tries to capture, and I think largely succeeds in capturing, the "minimum viable Linux system" (all up: kernel + userland). It gives you a way to reason about two different distros like:
Ubuntu is Kiss plus (list of things) and for each item in the list of things, it has dependencies, recursively until you have completely described the genome of Ubuntu as a linear directed graph from Kiss. You could do the same thing for Fedora, or Mint, or any distro.
Two such graphs could give you a complete list of differences to get from distro <a> to distro <b> and the simple algorithm of walk back (remove items) from <a>, then walk down the graph (add items) to <b> would always "work" because they all share a common ancestor node, Kiss.
As a software developer you could reason about all of the changes to a distro you would need for your software to work, and this would tell you whether those changes include removing something that is part of a distro's canonical set of things; if so, your software would not work on that distro.
Today, individual package managers do this in a distro unique way and let you work on one distro. As a software developer you end up with the "Fedora/Redhat" version, the "Ubuntu/Debian" version, maybe the "Arch/Gentoo" version.
But if the Linux community settled on a common understanding of the "genetic code" if you will, you could do that for all possible distributions. That would be a powerful thing.
I enjoyed reading the list of explicitly excluded software. My current approach to installing linux on a laptop is to start with xubuntu, add dwm, and remove the desktop, ibus, pulseaudio, and a bunch of other stuff. I discovered that Firefox will not play sound without pulseaudio! My next project is removing dbus, but many things on the system seem to depend on it. I think linux is a beautiful thing, but the mainstream distributions are messes of competing, redundant subsystems and massively bloated interface layers that do very little except make the system less reliable, slow, and hard to configure.
I personally would have used another language, but shell ain't that hard if you have some experience with it, and are using shellcheck as the author does.
The config formats of the distro are optimized for command line tools, and writing the package manager in shell seems like a good way to ensure that this is indeed the case - basic dogfooding.
As opposed to programming languages with constructs that allow proper code hygiene. That implies an actual type system, a usable standard library, etc.
Shell is great for scripting usage of other programs, far better than other programming languages. That's what it's made for, after all. It's the only thing it's made for.
I do appreciate how they don't even have a backend for the website (though saying that, I'm not sure why most Linux distributions would require a full backend for their website)
Because it reflects how the writers solve problems. They needed a website. Did they spin up a docker container of wordpress, install a theme and some plugins until it looked the way they wanted, and write content in an in-browser WYSIWYG editor? Or did they write their content as HTML, and copy it to a server running HTTPD? Those two solutions represent different values, different philosophies, about how to use computers. It is good to see that the author's website-building approach is consistent with their operating-system-building approach. It lends credibility to the sincerity of the project.
No, it primarily reflects the problems. For example, Gentoo and Arch Linux run MediaWiki. Does this tell you something about their "values or philosophies about how to use computers"? No; a static site generator would simply not be practical for either project. What renders documentation HTML in the background says nothing about either project's approach to operating system development.
> Explicitly excludes the following software: wayland
I find it curious that a distro espousing simplicity would choose XOrg over Wayland (which was created to resolve problems arising from unnecessary complexity in XOrg). It looks like this is 80% simplicity and 20% sentimentality. I don't use wayland myself, but simplicity is certainly not a factor in that decision.
Wayland strikes me as the "shiny new thing" that is not ready for widescale use. If I was designing a distro and could only choose one I would choose X unless it was specifically a distro for shiny things that break.
Is the total system complexity less with Wayland? It seems to push a lot of functionality out to the edges. X isn't really all that complicated, just crufty...
Straight to the tribal, political realm. What do you expect to change? Should we say “Grandpa” instead? Should we try to include the experiences of everyone to a point where common sayings or one off phrases must be dissected for malicious intent? Come on
The reason why I must die lies in music and long toes; I simply do not comprehend many people anymore in these areas. When I was young, and yes, this is a while ago, we had real men and real women and I do understand that was way too binary but the pussyfooting around everything really goes way too far. That is me and as such I have to go before I do not understand anything anymore. I cannot believe people who want to be immortal will not go clinically insane by the weirdness that they see, decade after decade.
You can always move to a more conservative society. I live right on the other side of the globe, and I look at what's happening in the US, with its fifty genders and SJW harassment for using the word "he", as pure lunacy.
There are other, much more serious problems over here, of course.
I live in Portugal (moved from Spain) and yes, that is true on both counts. I find it difficult to live with people who get upset over everything though. I would say 'grow a pair' but that would not be correct... My grandfather says 'if you survived through a real war you would not care about this crap', but that would not be correct either (and I think it's a bit over the top, but he survived 2 world wars)...
30 years ago, trying out the distribution of the month was kind of cool; nowadays they just contribute to the endless fragmentation of the GNU/Linux ecosystem.
I bet that during the height of the UNIX wars, no one imagined it could get even worse.
I can assure you that it's not worse. There are only two or three distributions that are credible at the corporate level, and they are far more similar than the major Unix wars platforms.
As long as people realize that virtually all of these distributions are experimental, either as a testbed for ideas that may find their way into a mainstream distribution or on a slow path to becoming mainstream themselves, the diversity is a good thing. It removes the shackles of compatibility that hold most operating systems back.
Unfortunately, the developers of many of these experimental distributions don't seem to acknowledge that they are experimental distributions ...
The 'ecosystem' you are speaking of reminds me of taxes, lawyers and judges, burying their victims under infinite amounts of stamped paper in triple copies, annotated, and ultimately annoying because of its bizarreness and kafkaesque absurdity.
Been a while since I've used a Linux distro, let alone one focused on being basic, but it feels like there is a lot of overlap with what Slackware was last time I used it. To me the whole point of "simple" is to make the system comprehensible, and Slackware did a very good job of that.
(Or for that matter, given the functionality presented here, I think OpenBSD as well)
Which package is the Linux kernel build stuff related to? I recently wanted to bundle a quick Alpine install into a shareable `qemu` image. I couldn't figure out how to add a simple kernel module that was not included in the stock config. I would love to get this problem solved.
Yes, we finally were just about to get Linux on the desktop just the next year, and this distribution ruined everything. Now we will have to wait until 2022 to get Linux on the desktop. Curses!
Depends on the distro but mostly OpenRC, SysVinit and Runit. I find that every one of them is better than systemd. The design philosophy of systemd also goes against everything I stand for.
While I agree that Arch is far from minimal, it is one of the distributions that encourage people to build their system up to meet their needs. This is a far cry from the attention grabbing desktop distributions that may or may not offer a server version that can be built upon (which I, perhaps unfairly, interpret as discouraging the build-up approach).
I disagree. That was true for a time, and I enjoyed it coming from NetBSD. But nowadays, if for any reason you don't want to use systemd, or have different opinions about partitioning, e.g. limiting access and execution rights and usage for /var, /tmp, you are in for an eternal uphill battle against the decisions of 'upstream' and therefore the 'community', which devoutly adheres to these. There are some derivatives addressing that, but at that point you can also say: "Forget it!"
> [...] have different opinions about partitioning, e.g. limiting access and execution rights and usage for /var, /tmp, you are in for an eternal uphill battle against the decisions of 'upstream' and therefore the 'community', which devoutly adheres to these. There are some derivatives addressing that
I'm curious about which use cases weren't supported for you, why it was an uphill battle, and which derivatives address them. At first glance, I don't see what would give you issues.
There are lots of good, simple window managers (not to mention other utilities) for xorg. Window managers in wayland have a lot more work to do, and are typically more complicated.
There are lots of utilities that aren't available in wayland yet, so you'll often end up with an x server installed anyway.
* Based on musl libc, busybox and the Linux kernel.
As someone who works in the Go ecosystem, this kills my interest immediately. Using musl libc means a lot of tools can't be used on this OS unless we install glibc.
Are there problems with Go on musl, then? Not a heavy Go user, but I’m sure I’ve built a working Go setup fairly painlessly on a musl VM before. (Maybe it’s a recent problem, or perhaps I’m misremembering and I did something horrid like pulling in glibc too, but I don’t think I did?)
Being unsure of my recollection, I've just bootstrapped and installed the most recent go 1.14.3 release on a pure musl x86_64 chroot. Everything I can think of to test so far is working impeccably.
> Based on musl libc, busybox and the Linux kernel.
Busybox is not KISS. It's an all-in-one package, removing the ability to strip down the number of tools you have to the ones you strictly need. It's a big, bloated binary blob, in other words.
> Every installation of the distribution contains the full sources (of the distribution) with git history attached.
That's not strictly true about Busybox. You just need to recompile it to change what's included. You can absolutely enable/disable individual tools in the menuconfig.
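Concretely, trimming busybox down is a config-and-rebuild exercise:

  # Start from nothing and enable only the applets you need.
  make allnoconfig
  make menuconfig    # tick the individual tools you want
  make && make install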
"How do I remove a package and all of its dependencies? [...] The package manager does not do recursive dependency removal on removal of a package. This error-prone automation will not be added to the package manager. "