Hacker News | Hnus's comments

Serious question: do people actually enjoy writing Ruby? I feel I’m writing in something like Bash. I never felt this way until I picked up other languages like Rust, Zig, C#, and learned a tiny bit of programming language theory. After that, the loose and squishy feel of Ruby really started to bug me. Also, it seems like every Ruby programmer I know only ever uses other dynamic languages like Python. It’s never like they’re experts in C++ or something and then decided to start programming in Ruby.


I had a good background in C++ programming before switching to Ruby. At first, I was terrified of the lack of strict typing, but after using it for a while, I realized my concern wasn't that warranted. For me it is about the tradeoff of dealing with types vs. productivity. Sure, I occasionally get bitten by a random "method not defined for nil" error, but it is usually very easy to fix, and I don't run into the issue very often. With Ruby, and especially Rails, it is about the productivity gains. I can simply accomplish much more in less time and fewer lines of code than I would in other languages/frameworks. Not only am I writing fewer lines of code (usually), the language is very expressive without being overly cryptic. The code is more readable, and to me that results in better maintainability. The strong emphasis the community and ecosystem put on testing also leads to more resilient and much more maintainable code.


I disagree; I think weak typing significantly lowers developer productivity, because your IDE gets lobotomized. Types aren't just for people, they're for programs. If I can't go to definition or step through the control flow, that's a problem for me. I program in PHP - I get it. I have to live in the debugger because my IDE is worthless when I'm using bespoke arrays for everything.

Also, most statically typed languages have very robust type inference. If you don't like writing types that's fine - the language can just infer them 95% of the time. A lot of times you can open up a C# file and find next to no types explicitly written. But if you hover over something in your IDE, you can see the type.


Absolutely. I enjoy it so much that I wonder "do people actually NOT enjoy writing Ruby?" It's usually the first tool I pull out of the toolbox for DSLs, scripts, spikes, one-offs and the like. A lot of the time, the project will happily stay in Ruby unless there's a good reason to use something else. And then I move it - horses for courses.

I programmed professionally in C, C++, C#, Delphi, and a few other languages well before I had even heard of Ruby.


Yes, love it. I've rewritten large parts of my stack in it (editor, shell, font renderer, terminal, window manager, file manager).

I started from a background of heavy C++ use, including a lot of template metaprogramming. Convincing me to even give Ruby a chance took a lot, but once I'd tried it I abandoned C++ pretty much immediately, and don't miss it.


What an odd question lol. Yes, people like writing in Ruby. I’m one of them. Switched from C# in 2016.


You don't miss things like enums, exhaustive switch, or other basic language features? How about `method_missing`? It's such a crazy idea to me that something like this exists. I know why it exists, but I'm still like: why, why such bloat and complexity?


Ruby inheritance is a list of classes. When you call a method on an object, Ruby goes up that list, looking for the first one that defines that method.

If it doesn't find any class defining that method, it calls `method_missing` (and again, goes up the list). The Ruby base object class defines `method_missing`, so if no other classes in the inheritance list do, you get that one (which then throws the usual error).

IMO, there is zero bloat or complexity added by this; it's super simple language bootstrapping (allowing more of Ruby to be written in Ruby, vs the C interpreter).

What do you see as the bloat and complexity added by this?


No, I honestly don’t. I can emulate an Enum without having an Enum type. I rely less on a compiler and more on myself with automated tests.


> rely less on a compiler and more on myself with automated tests

Just my experience, but I think this is a muscle that a lot of people haven't developed if they came from a language/toolset/IDE that does automatic type checking, autocomplete, etc. reliably.


Part of the problem is when you have to rely on someone else.


Can you elaborate on why you think method_missing is bloat?


As another commenter said,

> it’s about your taste and philosophy.

Personally, method_missing goes against both of mine. It makes programs harder to reason about, more difficult to debug, and nearly impossible to `grep`. That said, I understand that this kind of flexibility is what some people like. I just don’t.


That's not a serious question. Of course people do. Your inability to understand the language does not impact anyone else other than yourself. This should go without saying.

I'm also an expert in C, Go and JavaScript. Ruby is an excellent language and the smalltalk paradigm has some real strengths especially for duck typed systems. The only reason I don't use it more often is because it is slow for the type of work I'm doing recently.

It was amazing for web work and it's fantastic for writing small little utility scripts.

An open distaste for things does not make you sophisticated or smart. You're not in any category of high repute when you do this.


I love Rails; it's been my go-to framework for reference. But I could never get as comfortable with Ruby as with writing JS or PHP. I do not know the reason.


I agree. I think... there's too much freedom. Too many ways to do things, and debugging is hard with monkey patching.


If debugging is hard for you in Ruby because of monkey patching, it's an issue of not knowing the debugging tools. Attach pry or Ruby debug, and show the source location of a method, or log it. This isn't surprising - debugging Ruby is different from debugging most static languages, and more tutorials on how to do this well would be nice...

Also, the use of monkey patching in Ruby peaked something like a decade and a half ago. Outside of Rails it's generally frowned on, and introducing new methods is usually addressed by opting in (by including modules) these days.


Agreed, it still absolutely astounds me the number of developers out there that do not use a debugger as an essential part of their toolkit.


Can you give an example of where monkey patching made debugging hard? I have a decade of Ruby experience and can't think of a single time it was an issue

This is one of those things that sounds like it'd be a problem but it really isn't


I spent more of my life than I would like to admit learning and writing Rust. I still build all of my web applications in almost pure Ruby these days. Speed of thought to action is simply unparalleled, and it turns out that in most situations that was the most important factor.


I do. It's a whole thing that gets you down to writing your business logic in an expressive way very easily. The framework (Rails) helps, yes, but even pure Ruby can be nice. I've written a simulator with one-second time accuracy for cars and chargers in EV charging stations in pure Ruby; it was fast to iterate on and pleasant to write.

The ecosystem, toolchain and all do a lot. I really miss it when I do other languages, and I wish I could find the same way of developing elsewhere. I currently do C for embedded in a horrible IDE, and I want to bang my head against the table each time I have to click on something in the interface.

(btw Python is a nightmare for me)


Yes. I do. I enjoy ruby so much.

After 10 years working with Java, I don't want to go back anymore.

It is about your taste and philosophy. I don't think it's related to a skill issue.


> Also, it seems like every Ruby programmer I know only ever uses other dynamic languages like Python. It’s never like they’re experts in C++ or something and then decided to start programming in Ruby.

Can you expand on what you’re saying here, or why you’re raising this as an issue with Ruby the language or Rails the library?


There are several people earlier in this very thread who moved from C++ to Ruby.


Just a personal observation that made my communication with Ruby developers hard, as I cannot use concepts from strongly typed languages because they live in a world without them. But I guess it's more of an issue with me than with them.


Yes, many people love programming in Ruby. It’s a matter of preference not some lack of technical merit. There are plenty of people who are well equipped in strongly typed languages that write in both. You might not know them but you really don’t have to look very far.


Yes. I have used a lot of languages, both static and dynamic, and Ruby is one of the ones I love. Maintaining large code bases is certainly not its forte, but in terms of expressing what you want in code, it is like a tool that fits really well in my hand.


What's the salary in comparison to, let's say, full-stack webdev?


A lot of these kinds of skills aren't always applicable or comparable to a salaried position.

Many do odd contract jobs that are extremely high value; i.e. come in and fix this super big bug, or add this super important feature, on a COBOL system at an extremely high day rate, because it's hard to find people with the appropriate skills.


FWIW, when I worked at a finance company with a lot of COBOL + JCL + DB2 devs (including in management, so I could see more info), their salaries were on average similar to full stack, but possibly lower, especially as we put more emphasis on AWS, where those people started getting more premium salaries. Some banks, I hear, give a COBOL premium, but it seems to be tied to very specific mainframe systems experience plus COBOL.


But what do those full-stack engineers make? Salaries are hugely variable across the industry. There are “senior” engineers making $60k/yr, and new grads starting at $200k/yr.


I would say $100k-$175k would be the average range depending on seniority, including bonuses and such.


Generally lower, like other devs in the kind of giant organization that has mainframes.


Can you debug Zig in any MS/JetBrains IDEs? I type in nvim but debug in whatever has the best experience. I think I asked this question like 2 years ago and was told you can write tests, use the LSP server, or look at assembly... has the situation improved?


I use VS Code on Linux to debug Zig. Haven't tried the others you mentioned, but it just emits standard DWARF symbols, so I'm guessing if you can debug C/C++ you could probably also do Zig with minimal changes? I just use the lldb VS code plugin[0], which works out of the box for me with no issues.

https://github.com/vadimcn/codelldb


I've been able to debug Zig in Windows by simply opening the .exe file with Visual Studio. I didn't explore much what can be done in it but it is possible.


At least DWARF is supported (e.g. any gdb or lldb frontend works, e.g. what various VSCode extensions like CodeLLDB offer). Not sure about PDB support on Windows actually.

This also means you can transparently debug-step from Zig into C code and back, which is kinda expected but it never gets old for me :)


Not so sure about any real IDEs; lldb has worked fine for the (fairly small) Zig programs I've worked on, and the "CodeLLDB" VS Code extension worked. Of course, with the move away from LLVM, I assume lldb will stop working, and VS Code may not be a good enough debugging experience.


AFAIK the LLVM backend won't go away in the standard Zig distribution even with the new non-LLVM backend.

But even without LLVM backend I would expect that Zig will be able to produce DWARF debug information.


The best debugging experience imo is using gdb and rr within nvim. Works for zig, c, rust, etc. with minimal configuration in nvim. The less I leave vim the more productive I can be. Same thing probably goes for emacs although I will never admit it.


I’d love if you could elaborate on your setup. Are you using something like nvim-dap from within neovim or something else? I’m still trying to improve my debug experience in neovim.


Would also love to hear more. I have nvim-dap set up for Go and for C and it is an OK experience but I would not call it great. This is something on my Neovim todo list.. improving my debugging experience.


Obvious, but it never occurred to me that an experienced person sees far fewer parts than a layman when looking at the board. For an expert it's like: yeah, "file read there", "serialization here" (if I were to use a programming analogy), but a layman sees a gazillion little parts magically working together.


"You get used to it, I don't even see the code. All I see is blonde, brunette, redhead."


I think cockroach, beetle and woodworm is more appropriate when talking about code.


They were quoting "The Matrix". But you're probably correct, even in the movies the existence of "The One" was a pretty serious security bug, and the entire need for "agents" was a workaround for other bugs.


Is there some official document describing the USB naming convention or the reasoning behind those seemingly randomly chosen combinations of numbers and letters? Usually when products have confusing names, you can at least see how deceiving the customer benefits whoever is making the product, but with USB cables??


I doubt it, but if there was I would expect it to be several thousand pages of highly technical jargon.


Can somebody more knowledgeable confirm whether all your coins become forever tainted if you are "dusted" like this? Since there is no way to ever break the paper trail using just Bitcoin, is the only way to make your coins clean to go to Monero and back again, or something like that? Are the techniques exchanges use to determine whether your coins are tainted - and could be refused or confiscated - sophisticated enough not to flag you in cases like these? Even if it's possible, I imagine it's computationally expensive.


Yes. Ethereum does not have coin control[1] which means that your entire ETH balance is inextricably commingled in a dusting attack, whether you like it or not. That's different from Bitcoin, on which you can choose to not spend tainted coin in your wallet (and prove the provenance of your funds).

[1]: https://bitcoin.design/guide/how-it-works/coin-selection/


Regulators aren't stupid.

There will be a hotline or process for reporting your having been dusted. You call, let them know, they confirm, and they give you special dispensation to move the tainted funds to a burn address, most likely. They don't care about the ultimate location in which the funds get locked down, only that they do.

That Ethereum allows for dusting won't hamper things the least bit. However, a lot of customer service is probably going to have to be accommodated, so if you do get dusted, I sure hope that wasn't your only financial lifeline, because it may take a while to work through.


The hotline you describe seems like pure speculation I find unlikely, but I have no idea how the law works, and a few questions naturally come to mind. Customer of which service? Provided by whom? Agreed upon using what contract? Also, having to burn your entire pile of cash because of one speck of dust is really funny.


It is actually worse than you think: the entire account ends up having "interacted" with a "sanctioned" entity :( Account owners may be subject to 10 years in jail if any prosecutor were to bring a case. This is true for any tokens and NFTs associated with the account, as well as the ETH.


If I were to speculate, I'd say the absolute brutality of the initial learning curve filtered out anyone who perceives the usability as a problem, so it never got improved.


No, the sad reality is that UX changes are really hard and need deep changes to the Nix codebase. The Nix codebase is more than unwieldy, plus most people interested in UX... do not get paid to work on Nix. Quite simply.


A good deal of users who overcome the usability issues of Nix remain ill-equipped to contribute to Nix beyond basic packaging. Just because I can navigate Linux userland doesn't mean I have a clue how to navigate the kernel, which is, I hope, a not-terrible metaphor.


Nixpkgs and NixOS also have some prolific contributors who never had any prior functional programming experience, so clearly there's a way. We just have to capture and reproduce it.


I still can't really program in a functional programming language.


Exactly. You've learned what you needed to in order to be productive and achieve your goals with Nix, and that hasn't required you to stop everything to take a crash course in functional programming.

Individual functional programming techniques may become interesting to you as you become curious about some implementation details— you might read a bit about 'fixed point recursion' after you chat with some people about how overlays work. But you don't have to think of yourself as a functional programmer to make great use of Nix or even to help solve technical problems in Nixpkgs.

I think some folks hear all the FP talk surrounding Nix and don't realize that there's a lot they can achieve and contribute without being FP wizards. And that sucks. I hope that the ecosystem feels increasingly welcoming to people like that as the documentation and UX continue to improve.


I have never felt more dumb than when trying to use and learn anything NixOS-related. It's awesome technology not meant to be used by humans. I am using cs.github.com, the OpenAI chat bot, and usually like 4 tabs describing the same thing from different months using different commands, plus a few versions of what seems like the official manual, while still having zero idea what I am doing. It feels more like I am reverse engineering something rather than using it. There is something seriously wrong with what counts as acceptable ergonomics for whoever is designing the user interfaces. I really wish they'd fix all their problems, because when it all works it's awesome.

I wanted to give Guix a shot. Does anybody know if it's any better, ignoring their stance on non-free software, as I think you can work around that?


The Guix interface is a bit more friendly by default, but once you enable the 'nix' command on NixOS, there really isn't much of a difference in terms of basic CLI experience.

The big difference is the language, and I think Nix wins here by a mile. Doing everything with Scheme just leads to layers of macro spaghetti that I really did not enjoy digging through. Error messages that tell you absolutely nothing about what went wrong were pretty common. Nix has those too, but less frequently. Also, with Nix you just use regular shell script snippets for building the packages; Guix wants you to do it all in Scheme. Package selection on NixOS is much bigger.

Another big thing: Nix has Flakes, which make it trivial to turn all your Git repositories into Nix packages. Your Git repository becomes essentially a first-class citizen in the package manager, making it completely trivial to run different versions of the same software. Guix has none of that; they still treat packages as a separate thing from the software itself, and trying to add third-party packages involves quite a bit of overhead. Easily up- or downgrading individual software isn't possible as far as I can tell; you have to roll back the complete Guix system to do so.
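
To make that concrete, here is a minimal sketch of a flake.nix you could put at the root of one of your repositories; the package name, version, and system string are hypothetical:

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages."x86_64-linux";
        in {
          packages."x86_64-linux".default = pkgs.stdenv.mkDerivation {
            pname = "my-tool";   # hypothetical package name
            version = "0.1.0";
            src = self;          # the repository itself is the source
          };
        };
    }
With that in place, `nix build` inside the repo (or against the repo's URL, optionally pinned to a specific revision) builds it like any other package.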

Basically, after switching from Guix to NixOS, I can't say I missed anything. NixOS just felt like a more polished and feature rich version of what Guix was doing, which given that Guix is basically a NIH version of Nix, is understandable.


> Another big thing, Nix has Flakes, which make it trivially to turn all your Git repositories into Nix packages.

I have no doubt that this is true, but I still feel dumb about it. I run a Plex server (among a few other workloads) on a NixOS box, and just figuring out how to update the Plex binaries off of the official master branch was a multi-hour back-and-forth endeavor between StackOverflow suggestions and compiler errors.

I really do love Nix though. And once I got it working I felt a LITTLE bit smarter, AND it's worked flawlessly for months.


I’ve said this elsewhere in the thread and might do so again but it bears repeating: use flakes. At first they seem like this weird semi-supported thing, but eventually you realize that Nix is badly broken without them.

Flakes aren’t “experimental”. “Experimental” is what happens to your sanity without them.


> Flakes aren’t “experimental”. “Experimental” is what happens to your sanity without them.

Okay, I like flakes, I use flakes, I enable flakes without much worry. But. They literally are an experimental feature, and they are still enabled by setting "experimental-features = nix-command flakes" in nix's config.


I think I was maybe too casual with my language. Flakes are clearly marked experimental in the documentation. But they were a no-brainer a year ago, they’re always “a few months” from being official, they map directly onto users’ intuition around lock files, they enable principled composition of shit that doesn’t live in nixpkgs, the list goes on.

Outside of some (hypothetical to me) edge cases, not using them is masochism.

Flakes have a standard library (“flake-utils”) and a pimped-out version (“flake-utils-plus”).

Making it a weird gymnastics trick for a novice user to even access them is the kind of shit GHC devs say, “no, that’s too user-hostile even for us”.

Use flakes.


> a multi-hour back-and-forth endeavor between StackOverflow suggestions and compiler errors

Sounds like the authentic 90s linux package experience! (except with 's/StackOverflow/mailing list/')


How do you do it? I literally remove the flake from my nix profile and re-add it with a specific commit hash. I had troubles that just specifying master didn't update to the current master.


I cheated and did a stateful thing (because I couldn't figure out flakes):

    sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable

and then I added this blurb to my Nix config:

    nixpkgs.config = {
      allowUnfree = true;
      packageOverrides = pkgs: {
        unstable = import <nixos-unstable> {
          config = config.nixpkgs.config;
        };
      };
    };


> Doing everything with Scheme just leads to layers of macro spaghetti that I really did not enjoy to dig through.

On the flip side, it’s a real language, well understood, with a real specification, working tooling, and community support outside Guix (or in the case of Nix, outside the package manager).

As someone who gave up NixOS exactly because of the incomprehensibleness of the Nix language, I know what I would choose.

There’s no doubt at all.


I see nothing especially incomprehensible about Nix; the language itself is pretty simple. About the only thing I had a bit of an issue with is that I kept forgetting that functions only take a single argument in Nix. But other than that, it is very nice to work with and has all the syntactic sugar you want: string interpolation, sets/maps, sane multi-line strings and all that. Syntactic sugar is an area where Scheme has basically nothing to offer; everything needs calls to functions with long names.
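
To make the single-argument point concrete, a tiny sketch (all names made up):

    let
      # every Nix function takes exactly one argument; "multi-argument" functions are curried
      add = a: b: a + b;                                   # a function that returns a function
      # the common alternative is a single attribute-set argument, with optional defaults
      greet = { name, greeting ? "hello" }: "${greeting}, ${name}";
    in {
      three = add 1 2;                   # => 3
      hi = greet { name = "world"; };    # => "hello, world"
    }
Evaluating it (e.g. with `nix-instantiate --eval --strict`) yields { hi = "hello, world"; three = 3; }.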

For most common uses you barely even have to care about the language, as it's just some JSON-like data with shell scripts in between that do the actual work.


The language very much generates complaints because it gives you no real help understanding what argument should be passed to what function and how to get that argument. Error messages are dreadful because there are often a million functions your object has to go through before it produces an error. Dynamic typing and lazy evaluation don't seem to mix. Some kind of compile-time, presumably gradual, type system would really be an incredible improvement, so that type errors were generally eager.


> Dynamic typing and lazy evaluation don't seem to mix.

Incidentally, this is one of the things that some of the people involved with the blog post in the OP have set out to fix. A few years ago, Théophane (author of the blog post) started an experiment to add gradual typing to Nix, which eventually became the start of Nickel: https://github.com/tweag/nickel

Today, Nickel development continues at Tweag, but now they have someone else, Yann Hamdaoui, driving the implementation forward. (I assume because he's a domain expert in programming languages.) Producing helpful error messages is one of the major goals of Nickel, as a potential successor language to Nix. All of the language design decisions are documented as GitHub issues on the repo, so if the technical side of this kind of problem is interesting to you, you may enjoy browsing that.

It appears that Nickel is very close to a usable state for early experimentation. You might be interested in cloning the repo and messing around with it!


Honestly, I've done enough Scheme to understand that I wouldn't touch anything related to Scheme in a million years. People who think otherwise probably don't understand why people like JavaScript and Python and will never write any product that catches on.

The nix language is pretty nice on the other hand. The only thing I’m missing is a language server protocol thing so I can go to definition


> People who think otherwise probably don’t understand why people like javascript and python and will never write any product that catches on.

Ironically, this has been posted on a popular site written in a dialect of scheme:

http://arclanguage.org/

https://github.com/arclanguage/anarki


Where is the irony? HN does not expose scheme to the user.

Don't think the parent was saying that it's impossible to write a good program in scheme/lisp, more that scheme as user interface for a software system is a hard sell... which seems anecdotally true.


how is this ironic? HN was written by one person, and nobody needs to care what language they used to make it work.


rnix-lsp is a language server for Nix that provides go to definition. I'm using it with the "Nix IDE" extension in VS Code. It doesn't handle imports though afaict, so it's only useful for local definitions.


> Error messages that tell you absolutely nothing about what went wrong were pretty common.

Maybe I'm holding it wrong, but I almost never get useful error messages from Nix (NixOS and NixOps). Almost every error is deep within some module with no trace to the option I set. Just nix barfing its 300-line eval traceback where my own code doesn't even appear.


Nix has a debugger now. You can drop into the frames that are causing problems and inspect the state. Miles easier than looking at the trace manually.


I last flashed my flake.lock in June, so I might be out of date, but my rigs can’t reliably include the line number in my code that led to the duck-typing fiasco deep in the core of someLanguage.withPackages.fuck.WTF.this.

Fix the fucking "--show-trace" thing folks. Seriously.


Out of all cases when I used --show-trace, I can remember it being useful only once or twice. I gave up and just sprinkled builtins.trace.

IIRC debugger comes with nix 2.9 and nixpkgs for 22.05 still has 2.8.x.


I was excited to see the debugger flag to nix build the other day, since I'm really struggling with cross-compiling an RPi SD image with a BTRFS root.

Of course running with the debugger flag resulted in an unintelligible error message and no debugger.


Any link to documentation on how to use it? Would be a godsend for me


I agree. I'm actually trying a hello-world example from the tweag website, and it complains about "gcc" not existing. I try adding

    buildInputs = [ nixpkgs.gcc ];
but it complains that nixpkgs.gcc does not exist. There's nothing helpful. I remember fixing this issue a month ago, and I have no recollection on how I did it.


Are you using flakes? You need to use the legacyPackages output for your system, e.g.

  buildInputs = [ nixpkgs.legacyPackages."x86_64-linux".gcc ];
In practice, it usually looks something like this:

  let
    system = "x86_64-linux";
    pkgs = nixpkgs.legacyPackages.${system};
  in pkgs.stdenv.mkDerivation {
    ...
    buildInputs = [ pkgs.gcc ];
    ...
  }


So there is more than one way to skin this cat. ‘nixpkgs’ as handed to you as part of a flake's ‘inputs’ is (jargon alert) partially applied, which is to say you have to put a battery in it. In this instance what you have to pass is ‘system’, because nixpkgs supports many systems (e.g. ‘x86_64-linux’ or ‘aarch64-darwin’). It sort of makes sense that it wouldn’t know which GCC you want without that information.

You can do this up top by hand, e.g. P = import inputs.nixpkgs { inherit system; }, but more commonly you would use a library like ‘flake-utils’ or ‘flake-utils-plus’ to apply the system configuration for all supported systems to all your packages/derivations/devShells.


Ah right, I saw that library but it didn’t have examples and I didn’t manage to use it. Do you know a page that would explain that?


The ‘flake-utils’ readme is a pretty good jumping off point: https://github.com/numtide/flake-utils

I have this or that nitpick with FL and FLP but overall it’s very solid stuff. FLP is a little more “magical”, and that’s not always the best starting out, but you really can’t go wrong with either.


Maybe you got confused (like I did every time until I stopped doing anything non-flakey) about when `nixpkgs` is called `nixpkgs` and when it's called `pkgs`? In the flakes world it's considerably easier because everything is explicitly named, so you take an actual decision about what to call it.


Is the convention to call it pkgs when you instantiate it with a system?


Yes. Other possible arguments are overlays and package selection modifiers (like whether to include non-free stuff or stuff that is marked as broken).


I don’t know what is the specific file you are working on, but shouldn’t it be pkgs.gcc?


That's strange. GCC is part of the stdenv on Linux and thus always available.

To correct your code snippet: nativeBuildInputs = [ pkgs.gcc ];


I’m on mac, I managed to fix it with https://mimoo.github.io/nixbyexample/flakes-packaging.html


And now I’m trying to understand how devShells work in flakes, and there are pretty much no docs.


So a dev shell as consumed by a flake is typically just a derivation. And typically the most important thing in that derivation is ‘buildInputs’, which is just a list of ‘pkgs.thingIWant’. It’s not uncommon to hook other “phases” of the derivation, which are (oversimplified) just little shell scripts that run in some order (‘configurePhase’ before ‘installPhase’) and typically expressed using the nifty '' ... '' multi-line string syntax. For example, if your use of Nix is like stuffing paths into a Makefile that you want to use in your dev shell, you might put that in installPhase. Protip: a Nix “package” (derivation) evaluated in a string context (foo = “${pkgs.gcc}”;) gives you a path to where the thing lives in /nix/store.
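
A minimal sketch of a flake exposing a dev shell, assuming an x86_64-linux system and hypothetical tool choices:

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages."x86_64-linux";
        in {
          devShells."x86_64-linux".default = pkgs.mkShell {
            buildInputs = [ pkgs.gcc pkgs.gnumake ];   # whatever the shell should provide
            shellHook = ''
              # a derivation in a string context gives its /nix/store path
              echo "gcc lives at ${pkgs.gcc}"
            '';
          };
        };
    }
Running `nix develop` in that repository then drops you into the shell.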


Ask your questions on discourse.nixos.org please - the documentation situation in Nix-land is generally exactly as you have noted, which means centralised question-asking in well-known locations is much much more helpful for Those Who Come After.


I say this with the utmost respect for those who work hard to answer questions on discourse, but I have found it singularly unhelpful.

I strongly encourage the Nix community to embrace a mainstream Q&A platform like StackOverflow or GitHub Issues.

Nix is “weird” enough for beginners already, the kind of janky interface and uneven moderation/quality on discourse is IMHO self-defeating for Nix.


Agree. I feel the same way about OCaml, and I have forced myself to ask every question I have on SO to build the list of answers there. Yet people tend to answer on SO with “you should ask on the OCaml discourse”…


Aside: I love the Discourse UI. It feels easy and thoroughly modern to me. What's janky about it, in your opinion?


I also do not like it for different reasons (one of them is that they hijack ctrl+F)


Ah yeah, that behavior is annoying. At least if you hit it twice, you get the normal in-browser thing


Guix has package transformations, which lets you use different commits or git URIs for a given package definition.

It also doesn't require the definition of a package if all you want is an environment to hack on the code in a git repo. People use `manifest.scm` or `guix.scm` for these purposes.

Guix System services are in fact all built together, but upgrading individual software or services does not require "roll[ing] back the complete Guix system ".

(I want to point out that describing Guix as "a NIH version of Nix" is a symptom of the attitudes that made me almost completely retreat from discussions of software online. It's so grating to deal with dismissive attitudes like that again and again. Sucks all the fun out of hacking.)


> Also with Nix you just use regular shell scripts snippets for the building the packages, Guix wants you to do it all in Scheme

I'm by no means a good schemer, but I'd still take scheme over shell every time. Shell tends to devolve into something completely unreadable with even moderate complexity while scheme can make even the most complex logic nice to work with.

> Guix has none of that, they still treat packages as a separate thing from the software itself

I'm not quite sure what you mean by that, but it's certainly possible (and easy) to just define a Guix repo that points to all your projects' repositories and make them available through the Guix package manager. Guix repos are just git repos with a few lines of metadata attached.

> Easily up or downgrading individual software isn't possible as far as I can tell, you have have to roll back the complete Guix system to do so.

It's very easy actually. You declare a package definition as an inferior to a package and can pin that definition to whatever version, commit hash, or tag you want. You basically say "I want the package declared as name X, but I want it while the Guix repo is at point Y." That way, the package manager knows what dependencies the package has and provides those with the right versions as well.

> NixOS just felt like a more polished and feature rich version of what Guix was doing, which given that Guix is basically a NIH version of Nix, is understandable.

On the contrary, Guix is more like a more polished and dedicated version of Nix. Nix's solution to hard-to-package packages is "we take everything that's needed to build this thing and freeze it in the repository", while Guix will not accept anything that doesn't build completely from scratch and can thus be distributed as optional source code. The difference is very important because it allows for better packaging and more flexible versioning of the packages. Using Guix, you will never run into a situation where the solution has you staring down at a binary blob and questioning your life choices.


Is there an actual tutorial on how to use the nix command? All the official docs for Nix seem to slightly discourage it as not ready, where they do mention it.

It's one of those cases where the community is pretty clear: don't use nix-env etc. But the tutorials only explain nix-env, with the only details on the new commands being the CLI reference manual, which doesn't cover workflows.


Yes, my understanding is that the official documentation lags behind community usage, because flakes are still an "experimental" feature as details of the implementation are worked out.

For reference documentation, there's "experimental commands" in the manual https://nixos.org/manual/nix/stable/command-ref/experimental...


I'm finding nix-env to be great! I do not understand why nix offers a command and a package index, but then advises not to use it.

If people aren't supposed to use nix-env, why doesn't Nix put a huge deprecation notice on every invocation and delete the nix-env command itself from Nix?


I think the problems with nix-env come down to two things:

- nix-env -i and nix-env -u will almost certainly eventually do the wrong thing for you if you use them enough (nix-env -iA is ok): https://ianthehenry.com/posts/how-to-learn-nix/ambiguous-pac...

- It's not reproducible, as it depends on the state of your nix-channel setup at that time. How much of a problem this is depends on your use case. nix-as-homebrew-replacement or nix-as-shared-package-list-across-osx-and-linux users are probably fine with this, as what these users care about is more portability than reproducibility, but nix-as-predictable-dev-environment types will be let down by it.

I think the fact that the latter affects different users differently and that the new flake world doesn't offer a "modify your global environment without the tool taking over" option (expecting you to use nixOS or nix-darwin instead) has meant that there's not enough motivation to agree on a replacement that nix-env could tell you to use instead.

I guess the eventual replacement will be "nix profile", but while you'll find at least some documentation on "nix develop" and "nix flake", I had to look in the CLI tool docs for this one, which makes me suspect it's even less final than the other two

So instead the community will tell you to avoid nix-env, but there's no agreement to have the tool or docs tell you that.


It's not quite true that it's trivial to turn a Git repo into a flake, because flakes don't do submodules, and https://github.com/NixOS/nix/pull/5497 has languished :(


You can fix that by adding '?submodules=1' to the URL. It does get rather annoying when compiling local code, as that turns a simple 'nix build' into:

    nix build  "git+file://$(pwd)?submodules=1"
But it will work with submodules.

Another mild annoyance related to this is that flake inputs can't be expressions; this can make it rather difficult to fetch dependencies that aren't packaged in a way understood by the existing fetch methods.


> as that turns a simple 'nix build' into

Which reveals another annoying thing about using flakes for development: every time you build something, it copies the whole darn project into the Nix store. Fine when you have a small project, but with large projects this becomes a significant overhead.


> It copies every time you build something the whole darn project into the Nix store. Fine when you have a small project, but with large projects this becomes a significant overhead.

Isn't copying the repo to a read only location just inherent to the problem of reproducibility?


Not necessarily; most build systems allow you to separate source input and build output. So grabbing the source from the current directory is a possibility, if the build script is clean and doesn't do anything weird in the source directory.

In Nix you can get that via 'nix develop', which gives you a shell with access to the individual build phases and with a bit of tweaking, allows you to run them from your source directory without copying the source around first (see 'declare -f genericBuild').


When you build into the store, you have to rebuild from scratch. However Nix does allow incremental builds inside your source directory as well. If you spawn a development shell like:

    nix develop .
All the build phases are available as shell functions or variables (unpackPhase, patchPhase, configurePhase, buildPhase, ...). The phases depend on each other, so if you want to skip a phase, some additional tweaking or setting of environment variables will be necessary. The "genericBuild" function is the top-level entry point for a normal build; the source can be viewed via:

   declare -f genericBuild
and should give a bit of an idea what Nix is doing.


Oh, you're a lifesaver. Thanks! (I spent quite a lot of time Googling how to do this, and never found it.)


Guix channels (not to be confused with Nix channels) let you turn a Git repo into a package collection:

https://guix.gnu.org/manual/devel/en/html_node/Channels.html

I guess it serves some of the same purposes as Flakes, and it comes with other bells and whistles such as authentication (which I think is kinda important!).

For the rest, your description of Guix is largely incorrect ("has none of that", "you have to roll back the complete Guix system", etc.), but as a Guix hacker I'd be interested in better understanding your experience and perceptions if you'd like to get in touch with us on the mailing lists!


I’ve heard Guix uses Scheme, so I’m not sure I understand how it can be friendly (you (see (what (I (mean)))))


You aren’t dumb. Nix is that rarest of birds: something damned-near impossible to learn well that’s still worth it.

Learning Nix well from the Internet is a horror film. But… once you do it’s shockingly, actually worth it.

Getting anything complicated working/building under Nix is a PITA; the payoff is that when you fix broken things, they stay fixed.


I introduced Nix into our build setup as well. We are an acquisition within Google and it's been quite a fun adventure incorporating it alongside Bazel.

Nix has been great but I've run up against a few thorny edges. Luckily I've sent out patches for them; long live OSS :)


> more like I am reverse engineering something rather than using it

Perfect description.


I tried learning with several different resources, but what did it for me was the Nix Pills (https://nixos.org/guides/nix-pills/). I'd say they're becoming a bit outdated now with the nix standalone command, but the fundamentals are still mostly the same.


Perhaps in contrast to most people's experiences, I haven't found Nix hard to learn. I think it really helps if you have learned Haskell before. Nix (the language) is really a simple functional programming language, but Nix relies quite heavily on functional programming concepts like laziness, fixed points, etc. A lot of things are underdocumented, but if you understand the Nix language well, it isn't hard to look up definitions in nixpkgs. I can understand that it is all very alien and overwhelming if you do not have a grounding in functional programming.
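
As a tiny illustration of the "fixed points" bit (this is essentially what nixpkgs' lib.fix does, and the idea behind overlays):

    let
      # the fixed point of f: a value x such that x == f x; laziness makes this work
      fix = f: let x = f x; in x;
    in
      fix (self: {
        a = 1;
        b = self.a + 1;   # 'self' refers to the final result of the fixed point
      })
    # evaluates to { a = 1; b = 2; }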

The primary issue for me has been that Nix is a very deep rabbit hole. You can spend enormous amounts of time on making your configuration more functional and declarative. Pretty much like you can spend enormous amounts of time on customizing an Emacs configuration. It's hard to strike a balance. And outside declaratively defined infrastructure (servers), it's probably not really worth it. I could almost fully reproduce my NixOS system (there is always some mutable state left) with a single command. But it takes many months of effort to get to that point. On the other hand, I can set up a fresh macOS or Fedora Silverblue system with all my customizations in 1 or 2 hours, and I have to do that maybe once or twice a year? So, ¯\_(ツ)_/¯.

I think the balance is different when you manage a lot of servers and most servers can be defined as a function with a small number of varying parameters.

The other part of the rabbit hole is that software breaks frequently in Nix. Upstreams do not really develop things under the assumption of a non-FHS, immutable system. So, you were going to work on something, but before you know it, it's 30 minutes later because you ended up fixing some package you need that broke. Similarly, you'll end up packaging a lot of stuff and spending quite some time making it fit the Nix mold (looking at you, Python packages that mutate in-place, sigh).

I love Nix as a principle -- it's declarative, immutable, pure, reproducible. But in practice, you can reap many of the same benefits from impure, inferior alternatives, with far less work. Yes, Docker is an ugly duck compared to Nix, but it brings 90% of the reproducibility benefits and probably everyone on your team can be up and running in hours. Rust's cargo doesn't allow you to specify every non-Rust dependency exactly, but with a Cargo.lock file you can make most of your build reproducible. NixOS is a clean, immutable, declarative system, but other systems offer a subset of its features, such as atomic upgrades/rollbacks, immutable root, and isolated applications (e.g. Fedora SilverBlue + Flatpaks, Fedora IoT, macOS). These alternatives are far more familiar and easier to work with. You can get most (but not all) of the benefits of NixOS with far less work.

Worse is better.


I know Haskell, regularly write in rust for fun, and nix is by far the most complicated thing I've tried to learn hahaha. Glad you were able to figure it out, and seriously thank you for sharing your opinion of how you view it even after learning it.

I see some younger folks really investing time in it and it worries me. Sometimes smart people fall under this fallacy of "you have to be smart to learn this so it must be good because it's hard" when really it looks more like quicksand than asphalt. Quicksand is hard to get out of, but there's still no reward for falling in.


I recommend looking at already existing package descriptions for programs written in the same language you are looking for. Chances are, you can hit it enough times to make it package up your program as well.

I don’t think learning nix from its internals is worth it, especially not for beginners.


> I don’t think learning nix from its internals is worth it, especially not for beginners.

Nitpick: what you're calling 'Nix internals' are actually Nixpkgs internals.

FWIW, I think that to some extent this is just the way you have to go when you're trying to quickly take up a new (to you) configuration language for practical purposes.

I've somewhat recently started putting together some simple CI/CD pipelines at work with a tool called Dagger, which is a container image builder-and-runner based on BuildKit and the CUE language.

When I first started learning Nix years ago, I had a lot of packaging trouble with some quirky software that wasn't yet packaged for Nix, and I only ended up getting unstuck by asking for help on IRC. One generous person was hitting me with incredibly helpful links to examples from Nixpkgs and the documentation left and right, and I was astonished with their fluency. I asked them how they knew all this, and they basically said:

> Idk, one day I just sat down and started to read through some of the Nixpkgs codebase, and now I `grep -R` through it a lot to look for examples.

At the time it sounded adventurous and crazy to me, as I was pretty intimidated by the prospect. Eventually that became one of my standard tools: I treat the Nixpkgs repo as another source of documentation.

Fast forward to today and my Dagger implementation efforts, and things got immensely easier for me the moment I started treating Dagger like Nix. I cloned the upstream Dagger repository and started searching through the unit tests with `rg`, and all the answers to my questions about what functionality Dagger exposes via CUE modules were right there. Tons of the guesswork disappeared, even though I don't yet have a great grasp of the CUE language.

In some ways, the Dagger case is harder, though. In the name of simplicity and partly for other language design reasons, CUE is emphatically not Turing-complete. This is interesting and attractive in some ways, but one of the consequences is that sometimes you hit bottom in the CUE libraries to find some compiler directive that links in functionality which had to be implemented in a Real™ programming language (in this case, Go). So to really understand, e.g., the custom data types Dagger uses for secrets, you have to go read code in another language.

By contrast, in the Nix case, the language is powerful enough that you can learn everything you need to for even very advanced Nix usage only by reading Nixlang, either in Nixpkgs or in other examples. You never actually have to dive into true 'Nix internals' (the Nix repo, which is C++ code) to figure out how something in a Nix example is achieved. With a more restricted DSL, you sometimes do.

Maybe one valuable 'missing manual' would be something like 'How to read Nixpkgs'! It's such a rich source of examples. I think making earlier reference to it could help a lot of people make faster progress and feel more effective when learning Nix.


I would love to see a "guided tour of nixpkgs," using real-world packages as examples and building up in complexity as you go.

I feel like so many of the problems I encounter as a new Nix user have been neatly solved already and there are probably tons of great examples in nixpkgs, but I have no clue where to look, or even what to grep for.

It would be super helpful if there were a resource with examples of common problems and how they're solved in the real world. For example, say you're packaging something whose unit tests hit the network. There could be a link to a package definition in nixpkgs that overrides the check phase to disable the problematic test.


When anything hits the network, there is nothing to do besides

- disabling the code that hits the network, whether by patching or by configuration ("--disable-downloads" or anything similar)

- or emulating it; e.g. if the script downloads a file, we can `fetchurl` it and move the downloaded file to its expected place. (Of course, it works when the script does not override the downloaded file; otherwise, go to the previous option.) A sketch of this approach follows the examples below.

Two examples came to my mind:

- Arcan vendoring required patching its cmake scripts, because cmake does not honor the cache and tries to download things anyway:

https://github.com/NixOS/nixpkgs/blob/901978e1fd43753d56299a...

- cardboard didn't require it because meson honors the cache (and Nixpkgs configures Meson to not download anything):

https://github.com/NixOS/nixpkgs/blob/901978e1fd43753d56299a...
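
For the "emulating it" approach mentioned above, a hedged sketch of what such an override can look like (package name, URL, hash, and paths are all hypothetical, and `pkgs` is assumed to be in scope):

    somePackage.overrideAttrs (old: {
      # drop the file a test would otherwise download into the place it expects
      postPatch = (old.postPatch or "") + ''
        cp ${pkgs.fetchurl {
          url = "https://example.com/fixtures/data.json";
          hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
        }} test/fixtures/data.json
      '';
      doCheck = false;   # or, for the "disabling" approach, skip the networked checks entirely
    })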


I think the first step to solving problems is acknowledging them: lazy? cool. duck-typed? cool. weird thunk stack traces, eh, workable.

Lazy and pure and duck-typed and broken ass stack traces? Fuck that. Fuck that in particular.

Eelco fucked up. Badly. nixpkgs uses Haskell type signatures as comments in the core ‘lib’s, because duck-typed pure lazy functional is fucking insane.


I agree - Nix, the language, really is at the root of many of Nix's problems.

The bad error messages, the useless stack traces, and the overall un-debuggability of Nix are not just minor issues. These are at the core of the steep learning curve of Nix. These are also the reason why with Nix you are often 'dead in the water' and don't know how to fix a problem, and end up googling for hours or asking somebody else who knows more about Nix.

And these issues cannot be fixed by having "better docs" or a better tutorial.

If somebody were to rewrite the language today then my suggestion would be to make it statically typed, like Haskell. I would probably also make it strict, with optional laziness, although that's less important imo. And providing good error messages and good debuggability should be explicit design goals.

Making it statically typed would necessitate a different overall design, since many of the dynamic tricks would not be possible any more, but I suspect that you would end up with a better overall design in the end.


I think you would just make it Haskell. I don’t know the exact chronology of Nix (the language), so I can’t comment with full confidence, but Real World Haskell (written by my utterly brilliant former colleague Bryan O’Sullivan /humblebrag) was published in 2008, and the Haskell described there has monad transformers and all kinds of nifty stuff. I think Eelco’s thesis was published around the same time? It’s possible that my dates are off and they really needed to rig up their own System F machine, but I suspect it just seemed cool and now we’re stuck with it.

Nix (the language) stomps JSON or YAML for writing config files; it's actually not bad at that, but it's a disaster in its contemporary role.


2006, to be accurate.

https://researchr.org/publication/Dolstra2006

More recently there is an attempt to run Nickel as a successor for Nix: https://www.tweag.io/blog/2020-10-22-nickel-open-sourcing/


Oh thank God, I thought I was the only one. A while back, I posted "a lazy dynamically-typed language (in my book) perfectly combines the run-time reasonability of Haskell with all the development-time safety of LISP" on the Discourse, and the reply "I love the lazy functional minimal expression-oriented JSON-like packaging DSL that is nix" got 13 Likes. I thought I was going completely insane.


You’re not crazy. Duck-typing a lazy functional language in a zillion KLOC repo with broken stack traces is crazy.

At my work those “O RLY” meme book covers are in vogue this week, and my last one was “Nix Haskell Overlays for Fun and Murder: Homicidal Ideation for the Working Hacker”.

It was my job to get Clash (the VHDL/Verilog synthesis tool on Haskell) building under Bazel via Tweag’s rules_haskell and playing nice with all our other Haskell shit. That’s 14 hours straight I’m never getting back.

But… because it’s Nix I never have to do that again. Even if I upgrade everything else I can keep nixpkgs at that version in my flake.lock and that will always work forever. So I’m cool with it.


> But… because it’s Nix I never have to do that again.

This is the killer feature for me. My use-case for Nix is vastly different from yours, as I use it for centralized declarative configuration through NixOS, but I have an immense sense of satisfaction knowing that hell can freeze over but I'm still only a single command away from my exact setup running on (almost) any machine on the planet.

This might not matter much for most people, but making my machine work for me is something that took me years, and I can sleep well knowing that it's finally safe under Nix vs. multiple different configuration files for multiple different programs scattered across my filesystem.


Our use cases might not be that different. Whether it’s Mac laptops, physical dev machines, or cloud instances, our domain/scale is such that our machines are “pets” rather than “cattle”, i.e. they have names.

Making NVIDIA drivers work with either Xorg on someone’s desk or CuDNN on the GPU boxes with like two friggin lines in the NixOS stuff is like, doing drugs or something in terms of sheer feeling great after years of chasing that shit all around Ubuntu’s broken-ass standard of living.


> Nix Haskell Overlays for Fun and Murder: Homicidal Ideation for the Working Hacker

good god pls share the link


Haha sure. I don’t know anything about image hosting on the fly so pardon the sketchy ass link: https://ibb.co/Dr5Jsrx


> These alternatives are far more familiar and easier to work with. You can get most (but not all) of the benefits of NixOS with far less work.

> Worse is better.

There are some potential alternatives out there that seem inspired by Nix but which are less strict. I think a system that offers and manages Nix's purity but doesn't require it could end up the final winner some day. Shea Levy had some ideas for eventually achieving this in Nix that he called 'Nomia' (probably riffing on the name of Sander van der Burg's Dysnomia, I'd guess), but his life circumstances changed before the project really materialized. But Nix still could be the place where that is first achieved.

Meanwhile new container image-building DSLs like Dagger¹ and Bass² look a lot like a Nix which is impure by default, and bash builders are replaced with BuildKit directives. And there's Denxi³, which looks more like a Nix that lets you reduce its purity when you want to.

For now, though, the Nix ecosystem is still vastly more capable than these alternatives because of all the incredible work that Nixpkgs represents. Nix we know it today could still end up the concrete basis for what succeeds it, and not just an inspiration.

1: https://dagger.io/

2: https://bass-lang.org/

3: https://docs.racket-lang.org/denxi-guide/


My experience was seriously different. I started with Slackware, switched to Arch and after a failed attempt to run some AUR scripts, I was tired and switched to NixOS.

Incidentally, I treat both my Emacs config and my NixOS and home-manager configurations as Igors. They are never really finished. In this sense, the "many months of effort to get to that point" does not matter. Also, with NixOS I feel I can be bolder than with other distros, knowing with way more accuracy what is breaking my configuration.

> Yes, Docker is an ugly duck compared to Nix, but it brings 90% of the reproducibility benefits and probably everyone on your team can be up and running in hours.

The Docker setup Replit was using before Nix was getting hopelessly "uglier": https://blog.replit.com/nix

Also, what Cargo and NPM and `<insert lang here>` lockfile systems do for each language, Nix does at a more general package-management level.


I've religiously updated a new-linux-install.sh. The majority of packages I build from source, and for the rest I use the distro's package manager. I'm well aware that Nix is a very different beast, but in practice this gives me basically the same outcome. Nix is incredibly interesting to me, but all the painful stories have kept me on my very simple and long bash script.


Everybody that sold nix to me said the goal was to replace the complicated side effect mess that is our current stack with something simple and pure.

As with FP, eventually, practicality beats purity I guess.


> As with FP, eventually, practicality beats purity I guess.

Until you can take enough advantage of purity that its practicality is higher.


Probably not worth it. Not trying to be dismissive, but genuinely: it's fine to have side effects, and if that saves 1000 hours of engineering over the life cycle of a project, I'd wager it's completely worth it.


The key point here is that Nix requires a larger up-front investment, but once that's done it will save you those 1000 hours of engineering in the long run. It's absolutely faster to start a project without Nix and just start coding, but once you have many deps, multiple developers, CI, and multiple environments, Nix becomes a no-brainer.


But the big problem in the end will likely be that you have no idea how to reproduce the setup from scratch again.


It's not just you, the user experience of Nix is terrible. Its biggest feature, "pure" builds, is mostly unnecessary given the complexity. Its runner-up feature, configuring builds with a few scripts, can also easily be accomplished with simpler tooling...

I don't get why it's necessary or, unfortunately, why it is so complicated... Anything that can be done in Nix can be done in a tenth of the time by any employee using a more commonly used technology. In industry, this is a very bad thing in my opinion.


The user experience of Nix is terrible. I’m not sure that it’s even fixable.

But Nix or something like it is going to take over the world.

Dynamic linking by default is absurdly stupid and mostly motivated by GNU politics. It’s a terrible problem. Many (most?) of the users of Docker don’t even realize that this is the problem Docker is solving for them. But they know they have a terrible problem and Docker helps a lot.

Nix is Docker on steroids. It’s Docker that got bitten by a radioactive spider.

Linux namespaces and cgroups and BSD jails have been around. Docker made it Just Work.

When someone does that for Nix, which is basically Docker done by computer scientists, it’s game over for anything else.


You can use Nix to build OCI images and run them on Kubernetes if you want to.
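
Roughly, with nixpkgs' dockerTools (a sketch; attribute details vary a bit across nixpkgs versions, and the package here is just a placeholder):

    # sketch: produce a Docker/OCI image tarball from a Nix expression
    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildLayeredImage {
      name = "hello";
      tag = "latest";
      contents = [ pkgs.hello ];                   # store paths copied into the image
      config.Cmd = [ "${pkgs.hello}/bin/hello" ];  # container start command
    }

nix-build that, `docker load < result`, and the layers are just content-addressed store paths.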

Nix is a souped up package manager, Docker is a container runtime.

Nix depends on packages existing in /nix; Docker chroots into a "folder" and runs a command (plus many more things).

Let's not mix technologies up for the readers too much.


I’m well aware that Nix can produce container images. Xe has a great post about it.

People use Docker for a lot of reasons, but mostly? Same Dockerfile, same outcome, mostly every time. No one is moving /usr/lib/x86_64 around under you. It's a real sea change, on the order of revision control: we hadn't even realized that we were living with constant low-level anxiety that someone was going to break our computing environment at any moment. "sudo apt upgrade --whatever", eh, maybe next week, we've got a release coming up.

Calling Nix a souped up package manager is like technically correct maybe?

It's 'git reset --hard HEAD^' for your whole computer or fleet of computers. It's utterly fearless experimentation, it's low/zero runtime cost isolation and reproducibility.

It’s early days ‘git’ for systems: pain in the ass to learn and use, frequently and credibly accused of being too hard for mortals, but profoundly game changing.

Whether or not Nix per se remains the plumbing, someone is going to do good porcelain and end DevOps as a specialization, along with Docker and Canonical and mandatory glibc nonsense and a thousand other things that have overstayed their welcome. Disks are big now; we can have a big directory full of hashes. We can afford the good life.

It’s going to be a big deal.


Nix is a programming language/system for software packaging. Along the way, it turns out OS configuration is also just packaging... As long as you don't have anything too dynamic.

It is the most sophisticated programming language/system for software packaging on the market today.

We use it for all our reproducible software development environments. All of our project templates. And all of our employees are issued NixOS configurations so everybody on the team has the same NixOS, the same Nix package overlay... Etc. This level of consistency ensures that we get a level of reproducibility from hardware to OS to user profile to development environment to CI/CD that just doesn't exist anywhere else atm.

It's not entirely perfect, but that's not really Nix's fault. It's actually the fault of the rest of the software industry that persists with bad tech. All of Nix's complexity stems from having to package everybody else's mess. Anyway, one day I hope the CI/CD systems of the world will just provide Nix jobs directly instead of proxying through a Docker container image that I have to set up atm.
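
To make the "reproducible software development environments" part concrete, the per-project entry point can be as small as this (a sketch; the nixpkgs pin and the package list are just illustrative):

    # shell.nix: sketch of a per-project development environment
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      # every developer and CI job running `nix-shell` gets exactly these tools
      buildInputs = [ pkgs.python3 pkgs.nodejs pkgs.postgresql ];
    }

The overlays and NixOS configurations layer on top of the same idea: everything is just another Nix expression evaluated against the same pinned package set.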


Your shop sounds pretty ahead of the curve. “It works on my box!” “What do you know, mine too!” “Production person over here: I’m getting the same thing!”

It seems to me that people who go through the agony and ecstasy of getting it set up do so because the difficulty of the domain is high and the headcount is low: you need leverage in that setting.

Can you share anything about what motivated your group to get it dialed in?


My company matrix.ai was working on a new cloud orchestration platform, and Nix was core to how customers would package their applications/containers for deployment. The OS development is halted for now while we are working on a secrets management system.

So it was only natural to fully dogfood Nix. We also introduced it to clients during our computer vision machine learning consulting work. It was the only way to get reproducible projects involving a complex set of Python dependencies, TensorFlow, cuDNN, CUDA, and NVIDIA libraries (there is a very strict set of version requirements going all the way down to the hardware). I actually first tried doing it with Ubuntu and apt; it did not work. Setting up your own nixpkgs overlay is a must in these scenarios.
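
For a flavor of what such an overlay looks like, it's something along these lines (a sketch; the attribute and flag names here are illustrative, and the real overlay pins exact versions of the whole CUDA chain):

    # overlay.nix: sketch of forcing one CUDA-enabled Python/TensorFlow combination
    final: prev: {
      python3 = prev.python3.override {
        packageOverrides = pyFinal: pyPrev: {
          # hypothetical override; real attribute names depend on your nixpkgs revision
          tensorflow = pyPrev.tensorflow.override { cudaSupport = true; };
        };
      };
    }

Every machine and project that pulls in the overlay then agrees on the same Python, the same TensorFlow, and the same CUDA/cuDNN underneath.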

It is definitely something that is easier to fully dial in when you start from scratch. It's a comprehensive system so it will take time for adoption. I always recommend starting with it as a development environment tool first, then consider automating your OS conf or user profile or VMs... etc.


What was the orchestration system used for? Was it for the case where there are many models that need to be run one after another? I know that being able to increase speed a ton is a huge problem in video processing. My company Sieve (see profile) is building infrastructure specifically for running ML models on video, which is why I'm curious.


It was built for AI-driven container orchestration and configuration synthesis from high-level constraints.

Yes, ML workloads are particularly complex, because they have both batch-oriented data flows (training) and service-oriented data flows (inference). There aren't many systems that can adequately express both.


In the more philosophical sense I think you're right. I've been using NixOS for 2 years now on all my machines* but I still don't know the language.

I use it as a stable base system, but when I need to do something that I'm unable to do with Nix I still drop into a $DISTRO container and do my work there where everything is stateful and "disgusting".

Having worked in DevOps for a while, I can happily tell you that we don't run around building Docker containers all day; at least on my team, the developers do that themselves. We provide them with a base to build upon, plus CI and stateful services like databases and storage.

I spend an awful lot of time writing Terraform and Helm charts though, since things need to run somewhere.

But yes, the immutable nature of Nix is great. Nix was VERY helpful for me when I switched GPUs from NVIDIA to AMD on my desktop. No screen? Reboot, reconfigure, retry.

But yes, I agree. Something that resembles Nix will take over the computing world, as someone said somewhere in the comments of this post. The great thing about Nix is that when you fix something it stays fixed. Even if it was a PITA to get there.


I believe that we’re in violent agreement :)

I used the exact same words, "things stay fixed", elsewhere in the thread. Which is big psychologically. I'll do some really difficult or even painful things in good humor if I get a lasting benefit. But if someone or something is just going to yank the rug out from under me next month? Fuck it, hack up some doofy shell script and call it a day.

Nix aligns the incentives on getting things right. I’m happy to learn the arcana of some weird corner case of some tool or service if I can apply that in a way that is permanent.

Now we just need someone to unfuck the UX nightmare :P


> Same Dockerfile, same outcome, mostly every time.

Uhm, no? Dockerfile has tons of side effects:

- Doing `apt-get update -y`? On some machines it will run, on others it won't, due to caching.

- Using a `FROM` that isn't locked to a sha256 digest? Well, sometimes you will get version 1.2.3, sometimes you will get 1.2.5. Sometimes a new image will get tagged with the same tag.

- It literally has network access during the build; unless you include a hash of what you're downloading, there is zero guarantee it will be the same download.

I think the majority of leaf containers rarely get the same result with the same Dockerfile. The only thing that is guaranteed with docker is that the same image will be the same image, but ensuring that different machines pull the same version of an image is another story.
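
For contrast, the reason Nix builds can't quietly drift like that is that every network fetch has to declare its expected hash up front. A sketch (placeholder URL and hash):

    # sketch: inside a Nix derivation, a source fetch either yields exactly
    # the bytes matching this hash or the build fails loudly
    src = pkgs.fetchurl {
      url = "https://example.com/foo-1.2.3.tar.gz";                  # placeholder URL
      hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder hash
    };

A plain Dockerfile has no equivalent guardrail unless you add the digest pinning yourself.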


I’m being a bit generous to Docker in the above comment: I believe that this is what users are hoping to achieve and getting closer to achieving than they would otherwise. Docker is basically a roundabout way to get static linking behind Drepper’s back. Almost no one is using it to bin pack such and such CPU/RAM to the request serving process and the batch processing process given finite SKUs.

Modulo the absurdly high barriers to entry, Nix is trivially better for this purpose.


I don't think mixing up the two was the parent's point at all. He/she just made an analogy about where Nix currently is vs. where it should go to "take over the world".


I think we're used to tools using convention over configuration more and more, which is great, but as Nix is trying to solve a different problem, it is all about configuration, which makes it really hard to understand. Maybe it would help to have a GUI or a local webapp to sort of show the state of what you have and debug things… not sure.


I don't see how that addresses OP's problem, which is poor and confusing documentation (I felt the same too). Every time I hit a snag and searched for the issue, there was never a comprehensive, centralized source of solutions, just a bunch of pages, each with different holes in the information, sprinkled with outdated advice. I share the feeling that I was reverse engineering things that should just be plainly documented.


Agreed that the documentation could be better, but as someone who has spent a lot of time reading it, I was still left with a sense of "how are things really glued together in this Nix project?"


> convention over configuration

Hmm I guess the nix approach would be "convention as shared configuration"


I'll translate. It's utter garbage bullshit.


> enables conditional generation of 3D scenes from different modalities like text or RGB images.

Please help me with a few dumb questions I have.

- What exactly is used as input to generate such scenes? Is it just a few pictures, or even a text description?

- Is it able to generate data for something which was not in the input? Like, you have some common object in the corner of your photo, and it's able to expand the picture as if you had it in the frame in the first place?

- What is the end game of technologies like these? Could it one day be fed, let's say, every piece of data Google has about the world (every 360° picture, every book, article, video, movie, and so on), allowing you to take a picture of something and spawn an infinitely walkable world that looks and behaves like our reality? Similar to a procedurally generated video game map.


I think this takes a scene (pictures or videos) and reconstructs a 3D scene in which it recognizes entities.

I don't think so? It just reconstructs the space it sees, but it could absolutely be extended to fill in the gaps, so to speak.

Robotic navigation and manipulation of the environment would be my immediate guess. It would be able to build a complete 3D version of the world and recognize objects. Your idea could become a reality here as well.

CVPR 2022 was a very interesting year for 3D scene reconstruction. One particular paper I recall reached into a database of CAD objects and simply replaced the scene with those objects that fit very closely to what is shown in the scene. It could mean that a robot armed with this type of computer vision could manipulate every single object it sees and know exactly how to interact with it without further examination.


One possible explanation: maybe whenever you typed "gender", it suggested one of the two possibilities next to it, or some kind of binary type, which might be something GitHub wants to avoid. But I am just speculating.

