Hacker News

The user experience of Nix is terrible. I’m not sure that it’s even fixable.

But Nix or something like it is going to take over the world.

Dynamic linking by default is absurdly stupid and mostly motivated by GNU politics. It’s a terrible problem. Many (most?) of the users of Docker don’t even realize that this is the problem Docker is solving for them. But they know they have a terrible problem and Docker helps a lot.

Nix is Docker on steroids. It’s Docker that got bitten by a radioactive spider.

Linux namespaces, cgroups, and BSD jails have been around for ages. Docker made them Just Work.

When someone does that for Nix, which is basically Docker done by computer scientists, it’s game over for anything else.




You can use Nix to build OCI images and run them on Kubernetes if you want to.
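As a minimal sketch of that, nixpkgs ships `dockerTools` helpers for building images; the image name and contents below are illustrative:

```nix
# Sketch: building an OCI-compatible image with nixpkgs' dockerTools.
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-image";           # illustrative name
  tag = "latest";
  contents = [ pkgs.hello ];      # packages copied into the image
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Something like `nix-build image.nix && docker load < result` would then hand the result to a normal container runtime.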

Nix is a souped-up package manager; Docker is a container runtime.

Nix depends on packages existing in /nix; Docker chroots into a "folder" and runs a command (plus many other things).

Let's not mix the technologies up for the readers too much.


I’m well aware that Nix can produce container images. Xe has a great post about it.

People use Docker for a lot of reasons, but mostly? Same Dockerfile, same outcome, mostly every time. No one is moving /usr/lib/x86_64 around under you. It’s a real sea change, on the order of revision control: we hadn’t even realized that we were living with constant low-level anxiety that someone was going to break our computing environment at any moment. “sudo apt upgrade”? Eh, whatever, maybe next week; we’ve got a release coming up.

Calling Nix a souped-up package manager is, like, technically correct, maybe?

It’s ‘git reset --hard HEAD^’ for your whole computer or fleet of computers. It’s utterly fearless experimentation; it’s low/zero-runtime-cost isolation and reproducibility.
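The analogy is concrete on NixOS: every rebuild creates a new immutable system "generation," and rolling back is a first-class operation. Illustrative commands (they assume a NixOS machine, so they won't run elsewhere):

```shell
sudo nixos-rebuild switch             # build and activate a new system generation
sudo nixos-rebuild switch --rollback  # jump back to the previous generation
nix-env --rollback                    # same idea for a user profile
```

The old generations stay on disk until garbage-collected, which is what makes the experimentation "fearless."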

It’s early days ‘git’ for systems: pain in the ass to learn and use, frequently and credibly accused of being too hard for mortals, but profoundly game changing.

Whether or not Nix per se remains the plumbing, someone is going to build good porcelain and end DevOps as a specialization, along with Docker and Canonical and mandatory glibc nonsense and a thousand other things that have overstayed their welcome. Disks are big now; we can have a big directory full of hashes. We can afford the good life.

It’s going to be a big deal.


Nix is a programming language/system for software packaging. Along the way, it turns out OS configuration is also just packaging... As long as you don't have anything too dynamic.
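A minimal sketch of that idea: a NixOS configuration fragment is just a declarative value, evaluated by the same packaging machinery. The option names below are real NixOS options; the particular selection is illustrative:

```nix
# OS configuration as packaging: declare what the system should contain.
{ pkgs, ... }:
{
  environment.systemPackages = [ pkgs.git pkgs.htop ];
  services.openssh.enable = true;
  users.users.alice.isNormalUser = true;  # "alice" is a made-up example user
}
```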

It is the most sophisticated programming language/system for software packaging on the market today.

We use it for all our reproducible software development environments and all of our project templates. All of our employees are issued NixOS configurations, so everybody on the team has the same NixOS, the same Nix package overlay, etc. That consistency gives us a level of reproducibility, from hardware to OS to user profile to development environment to CI/CD, that just doesn't exist anywhere else atm.
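A hedged sketch of what such a shared environment can look like, as a minimal `flake.nix` dev shell (the package choices are illustrative, not the poster's actual setup):

```nix
{
  description = "Illustrative reproducible dev shell";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";  # pinned input

  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.python3 pkgs.nodejs ];  # example toolchain
      };
    };
}
```

With the input pinned (and locked in `flake.lock`), `nix develop` drops every machine into the same environment.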

It's not entirely perfect, but that's not really Nix's fault. It's the fault of the rest of the software industry, which persists with bad tech; all of Nix's complexity stems from having to package everybody else's mess. Anyway, one day I hope the CI/CD systems of the world will provide Nix jobs directly instead of proxying through a Docker container image that I have to set up atm.


Your shop sounds pretty ahead of the curve. “It works on my box!” “What do you know, mine too!” “Production person over here: I’m getting the same thing!”

It seems to me that people who go through the agony and ecstasy of getting it set up do so because the difficulty of the domain is high and the headcount is low: you need leverage in that setting.

Can you share anything about what motivated your group to get it dialed in?


My company matrix.ai was working on a new cloud orchestration platform, and Nix was core to how customers would package their applications/containers for deployment. The OS development is halted for now while we work on a secrets management system.

So it was only natural to fully dogfood Nix. We also introduced it to clients during our computer vision / machine learning consulting work. It was the only way to get reproducible projects involving a complex set of Python dependencies, Tensorflow, cuDNN, CUDA, and Nvidia libraries (there is a very strict set of version requirements going all the way down to hardware). I actually first tried doing it with Ubuntu and apt; it did not work. Setting up your own nixpkgs overlay is a must in these scenarios.
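A rough sketch of the overlay approach (the package set is illustrative and the poster's real overlay is not shown; exact attribute names vary by nixpkgs version):

```nix
# An overlay pinning one coherent Python/ML environment: every consumer of
# `myPythonEnv` gets the same interpreter and the same library versions,
# built against whatever CUDA/cuDNN this nixpkgs revision fixes.
self: super: {
  myPythonEnv = super.python3.withPackages (ps: [
    ps.numpy
    ps.tensorflow   # hypothetical choice; any pinned ML stack works the same way
  ]);
}
```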

It is definitely something that is easier to fully dial in when you start from scratch. It's a comprehensive system so it will take time for adoption. I always recommend starting with it as a development environment tool first, then consider automating your OS conf or user profile or VMs... etc.


What was the orchestration system used for? Was it for the case where many models needed to be run one after another? I know it's a huge problem in video processing to be able to increase speed a ton. My company Sieve (see profile) is building infrastructure specifically for running ML models on video, which is why I'm curious.


It was built for AI driven container orchestration, configuration synthesis from high level constraints.

Yes, ML workloads are particularly complex, because they have both batch-oriented data flows (training) and service-oriented data flows (inference). There aren't many systems that can adequately express both.


In the more philosophical sense I think you're right. I've been using NixOS for 2 years now on all my machines, but I still don't know the language.

I use it as a stable base system, but when I need to do something that I'm unable to do with Nix I still drop into a $DISTRO container and do my work there where everything is stateful and "disgusting".
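That escape hatch can be as simple as one command; an illustrative example with plain docker (tools like distrobox or toolbox wrap the same idea, and this obviously requires a Docker install):

```shell
# Throwaway stateful Ubuntu environment on top of an immutable host,
# with the current directory mounted at /work.
docker run --rm -it -v "$PWD":/work -w /work ubuntu:24.04 bash
```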

Having worked in DevOps for a while, I can happily tell you that we don't run around building Docker containers all day; at least on my team, the developers do that themselves. We provide them with a base to build upon, plus CI and stateful services like databases and storage.

I spend an awful lot of time writing Terraform and Helm charts though, since things need to run somewhere.

But yes, the immutable nature of Nix is great. Nix was VERY helpful for me when I switched GPUs from NVIDIA to AMD on my desktop: no screen? Reboot, reconfigure, retry.

But yes, I agree. Something that resembles Nix will take over the computing world, as someone said somewhere in the comments of this post. The great thing about Nix is that when you fix something it stays fixed. Even if it was a PITA to get there.


I believe that we’re in violent agreement :)

I used the exact same words, “things stay fixed,” elsewhere in the thread. Which is big psychologically. I’ll do some really difficult or even painful things in good humor if I get a lasting benefit. But if someone or something is just going to yank the rug out from under me next month? Fuck it, hack up some doofy shell script and call it a day.

Nix aligns the incentives on getting things right. I’m happy to learn the arcana of some weird corner case of some tool or service if I can apply that in a way that is permanent.

Now we just need someone to unfuck the UX nightmare :P


> Same Dockerfile, same outcome, mostly every time.

Uhm, no? Dockerfile has tons of side effects:

- Doing `apt-get update -y`? On some machines it will run; on others it won't, due to caching.

- Using a `FROM` that isn't locked to a sha256 digest? Well, sometimes you will get version 1.2.3, sometimes you will get 1.2.5. Sometimes a new image will get tagged with the same tag.

- It literally has network access during the build; unless you include a hash of what you're downloading, there is zero guarantee it will be the same download.
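For contrast, a Dockerfile can be pinned down to address exactly these points; the digest and checksum below are placeholders, not real values:

```dockerfile
# Pinning the base image by digest removes "same tag, different image" drift;
# verifying downloads by checksum removes network nondeterminism.
FROM ubuntu@sha256:<digest-you-resolved-once>
RUN curl -fsSL https://example.com/tool-1.2.3.tar.gz -o tool.tar.gz \
 && echo "<known-sha256>  tool.tar.gz" | sha256sum -c - \
 && tar -xzf tool.tar.gz
```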

I think the majority of leaf containers rarely get the same result with the same Dockerfile. The only thing that is guaranteed with docker is that the same image will be the same image, but ensuring that different machines pull the same version of an image is another story.


I’m being a bit generous to Docker in the above comment: I believe that this is what users are hoping to achieve and getting closer to achieving than they would otherwise. Docker is basically a roundabout way to get static linking behind Drepper’s back. Almost no one is using it to bin pack such and such CPU/RAM to the request serving process and the batch processing process given finite SKUs.

Modulo the absurdly high barrier to entry, Nix is trivially better for this purpose.


I don’t think mixing up the two was the parent’s point at all. He/she just made an analogy about where Nix currently is vs. where it needs to go to “take over the world.”





