GitHub – nushell/nushell: A new type of shell (github.com/nushell)
763 points by axiomdata316 on June 16, 2021 | 398 comments



I think this is so cool! I was wondering if seriously buying into an alternate shell like this would be worth it in the long run.

Would really love to read from people who use(d) an alternative shell, both success and failure stories.

I cannot shake from my head the idea that buying into a non-standard shell will only work for personal projects in one's own workstation, but it won't fly too far for company work because you'd then be introducing an implicit dependency on your special snowflake non-standard runtime. Which is something we came to accept and assume for interpreted languages (Python, Ruby, Node, etc.) but I'm not yet ready to assume for shell scripts.

So right when I was going to test Nushell, I discovered there are also other shells like Elvish [0] and Oil, which support structured data [1]. Now on top of my hesitations I'm adding decision paralysis :-]

[0]: https://elv.sh/

[1]: https://www.oilshell.org/

Lots more shells: https://github.com/oilshell/oil/wiki/Alternative-Shells


I wouldn't use a custom shell in company code, but I'd use it for company work. I treat custom shells like a custom keyboard or OS shortcuts. Shells can help a lot with your day-to-day command line ergonomics, like presentation (always knowing which branch and directory I'm in, the time and how long commands took) and ease of use (autocomplete, aliases, shortcuts). But I wouldn't script in them for professional work. Just plain bash in that case. We really don't want devs fighting over which shell to install on the servers, but everyone can go crazy on their own laptops.


I agree... and any well-written script will have a shebang that specifies the shell that should run it, anyway.

For me, personally, I love having homogeneous systems in place. In this case it means not having to switch my mind between different modes. It helps me avoid mistakes and means I have to keep less context up here. But it's always interesting to read about other people's ways of working.


If you want to make the systems homogeneous, then you need homogeneous teams, too. Maybe some team members prefer fish, others prefer zsh, still others prefer bash. Your suggestion means that everyone has to use the same shell for their command line.

I find that it is better to allow each team member to have their own working environment: different (command line) shell, different editors/IDEs, different keyboard shortcuts. But everyone uses the same programming languages (counting shell as a programming language for scripts here) and the same build tools and the same linters.


no no no, I meant that I prefer using Bash everywhere, including on my own machine, because I'll end up translating most of what I write into a Bash or even POSIX-compatible script anyway, so... as someone said in another comment, why not cut out the intermediary step. It makes things easier for me; the fewer context switches, the better the production code I'll produce.

But of course, I believe others should be free to make their own choices! As long as their choices don't end up affecting the quality of the code that ends up shared with the rest of the team...

Which brings up an interesting point: I don't have a problem at all with you using whatever shell you want. But I do have a problem if, because you don't use Bash in your day to day, you are less used to its ins and outs, you don't put in the extra work needed to be as proficient with Bash as you are with your personal shell, and your scripts end up with bugs in how Bash arrays are handled (just a quickly made-up example).


I understand your point about being able to write proper bash scripts. On the one hand, I never use Java as my shell, but I am able to write proper Java programs just fine; why should bash be different?

On the other hand, I've been using Bourne-ish shells (mostly bash with short bouts of zsh) for twenty-plus years and I have no clue about arrays. (I know the difference between "$@" and "$*" (including the quotes), but that's it.) Now I have been using fish for a year or so, and I also have no clue about arrays in fish... :-)


Yup. There are more shells than there are IDEs, keyboard switch types, keyboard layouts, and operating systems. Shell ideological wars get just as bad as emacs/vim and Linux/Mac. Use whatever you want, but company IP is shebang'd to bash.


> shebang'd to bash.

Good.

Better: #!/bin/sh

Best: #!/bin/sh and also not assuming bash anyway!


I wouldn't say putting a sh shebang and assuming bash is better. With a bash shebang at least you're honest about it.


Well yes, absolutely. I phrased that poorly, what I meant was like '/bin/sh is better... Especially if it actually works' or something.

It's easily done though, I've often thought I've written something POSIX compatible and then had shellcheck say nope.
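
To give a made-up example of the kind of thing I mean (shellcheck flags both of these as non-POSIX when the shebang says /bin/sh):

    #!/bin/sh
    # works fine under bash, but shellcheck points out these are not POSIX sh:
    if [[ -n "$1" ]]; then      # [[ ]] is a bash/ksh-ism; POSIX sh only has [ ]
        echo "got an argument"
    fi
    files=(one two three)       # arrays don't exist in POSIX sh at all
    echo "${files[0]}"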


Best would actually be

#!/usr/bin/env sh

Since this binary is required by the POSIX standard to exist at that location.


Oh, and /bin/bash is bash 3 on macOS by default instead of a contemporary bash 5. And sh is dash on Debian, or Ubuntu, I forget which. Some of the reasons why I would prefer python3 scripts (or perl) over sh or bash.


Pretty annoying to reliably #! to python3 as well. Is it /usr/bin/python or python3?


That's my worst. I think generally python3 is safe (if unnecessary) today? But for how long do we have to keep that around?

Can we now say Well python2 is EOL so python should certainly be v3+?

It is of course a more general issue with shebangs that it's a very loose, 'dynamically linked' coupling, not necessarily just affecting the language version but the environment/libraries too; it just seems particularly problematic, or prevalent, with Python.


/usr/bin/env python3


That's not the point - 'or /usr/bin/env python?'



If we're distributing software, maybe? Personally we control the hardware, so we know what we're running, but yeah, for libraries or packages this makes sense.


I'm generally supportive of the effort to replace the archaic shells that dominate the unix landscape, but unfortunately none have yet met my personal desires.

It seems like a lot of the developers of these shells create them to supplement the common unix toolset, but I really want a complete replacement. Ideally, the shell itself would be a completely dependency-free static executable, with a built-in command set that includes functions to interface with all the kernel's user space accessible features, acts as a repl, with sensibly named commands, and a simple method for interfacing to and from any other program. In short, I guess I'm looking for something to act as the basis of a completely new environment on top of Linux. As it is, most of these do not even meet the first of the criteria.


FWIW, nushell is much more of this than it appears. While many commands share names with Unix commands, that is the only similarity for quite a few, since they are rewritten to return structured data instead of text.


From what I recall about nushell, it only scores 2/5 of my criteria.


Right, it's still very early and something that needs contributions. However, I have spoken to the creator (regretfully I haven't had as much time to contribute as I would've liked); he is very open to improvements and the direction of the project is pretty open.


So I take it you're waiting for the systemd shell then? lol


I tried fish for a while. The differing syntax between fish and bash for fundamental things meant I was running all my scripts through bash anyway, and if I wanted to share them or use them elsewhere in the company they had to be bash, so I swapped back to bash.


If you're running scripts, what difference does it make that you use fish? You're not really supposed to write scripts for fish, unless a script is meant to be used by the fish shell itself. A shebang at the top of your script will invoke the script with the right executable, and whether you called it from bash, zsh or fish makes no difference.
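
A tiny sketch of what that looks like (the file name is made up): give the script a bash shebang, mark it executable, and it runs under bash no matter which interactive shell you launch it from.

    #!/usr/bin/env bash
    # hello.sh -- runs under bash whether invoked from bash, zsh or fish,
    # because the kernel reads the shebang, not your interactive shell
    name=${1:-world}
    echo "hello, $name"

After `chmod +x hello.sh`, running `./hello.sh` behaves identically from any of them.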


So, instead of just typing shell commands, you type something else, but as soon as you need a shell script, you have to go back to shell commands.... Why not just cut out the middleman?


Answer to the first question: yes. Answer to the second question: because bash is a rather poor terminal shell.

In my experience, the middleman you mention doesn't exist. Or, if anything, it is a very tiny man that hardly gets in the way. It's either a one-liner with a syntax change you can pick up in 5 minutes, or it falls under any combination of these:

- might as well belong in a script

- the complicated stuff is in awk or the likes

- you can just enter bash for any copy-pasted bashisms

- you can execute it explicitly with bash -c

In any case, none of these is enough of a hassle to outweigh the immense productivity benefit I've gotten from fish. The only regret I have on that matter is not having switched sooner. Use bash to run scripts, or even better, sh. Use a different shell to make your life on the terminal better. It doesn't need to be fish; zsh is pretty good too.
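
To make the last two escape hatches above concrete (the one-liner here is just a made-up example):

    # from a fish prompt, run a copy-pasted bash one-liner as-is:
    bash -c 'for f in *.log; do gzip "$f"; done'

    # or drop into bash temporarily, paste whatever you found, then exit:
    bash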

If I can offer any advice: if you still want to stick with bash, at the very least take a look at fzf, https://github.com/junegunn/fzf . Aside from fish/zsh, it's the best and lowest-hanging fruit.


I personally use zsh; bash is too primitive with regard to its user interface (line editing, searching, vi keys, etc.). The syntax, however, is my key point here: I often take my commands and smack them into Makefiles, for example.


> You're not really supposed to write scripts for fish

Given that 95% of my time in a shell is running a script, a shell that doesn't do that well isn't a great fit for me.

> A shebang on the top of your script will invoke the script with the right executable,

Assuming the person who wrote it had the foresight to do so. That person isn't always me, and if I have to manually check if each script I run has a shebang, I should just default to running them in bash.


I haven’t seen a shell script in ages that was missing the shebang. Is this common in some places?

I’ve started to write dash scripts because that seems to be reasonably posix and bash on macOS is ancient.


I don’t think I’ve ever seen a script written without the shebang.

Btw, you can get around the ancient bash in macOS by installing a more modern bash and invoking with /usr/bin/env bash as your shebang. Not sure if that’s a great idea, mind you, but if you’re already writing scripts in something non-portable, it’s an option.


There are hundreds of sh files on github that don't have a shebang. Just because you don't write them and your colleagues don't doesn't mean they don't exist in the wild.


I didn’t mean to imply that they didn’t exist nor that it might not be important to account for them in some contexts. I meant only to convey my own personal experience: that it had never occurred to me that I might need to account for a missing shebang, since providing an interpreter for a script via shebang is all I have ever personally encountered.


wait how does the OS even invoke an interpreter without a shebang? If you have to explicitly call an interpreter when you invoke it, then it's no extra work to call

    bash some_script.sh
anyway


As I’ve learned in response to this thread, apparently some (but not all) Unixy OSes will run a file marked as executable using /bin/sh if no shebang is supplied.
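
If you want to see it for yourself, here's a rough sketch (the exact behavior varies by OS and by the calling shell, which is exactly the problem):

    printf '%s\n' 'echo "hello from a shebang-less script"' > noshebang.sh
    chmod +x noshebang.sh
    ./noshebang.sh
    # The kernel refuses to exec this directly (ENOEXEC); it's the calling
    # shell (or libc's execvp) that quietly retries it as an sh-style script,
    # so what ends up interpreting it depends on where you launched it from.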


Oh right, I think that's part of how APE[1] works. But I thought the Thompson shell was old and weird, and that it might not support what people use when they write ‘POSIX’ shell scripts nowadays.

Anyway it sounds like in that case the OS _doesn't_ know how to run the script, it just assumes `/bin/sh` will do.

I always assumed that scripts without an interpreter line were like snippets to be invoked via `source`, from some shell script with knowledge of the context in which it should be used.

Anyway, if that's the way it works, the behavior actually isn't a problem for users of Nushell or Elvish or Fish or anything else, since their executables are never installed to `/bin/sh` anyway. Nushell probably gets installed to `/usr/bin/nu`, Fish to `/usr/bin/fish`, and `elvish` can be installed anywhere. So it's still not clear that there's a real problem with un-shebanged scripts when it comes to using a boutique shell for login.

1: https://justine.lol/ape.html


Yeah thing $T _might_ (but probably won’t) happen, so the idea is automatically terrible.


Any random blog post you find with instructions to run something in a shell, if it uses a variable, just won't work copy-pasted; you now need to either modify it for your esoteric shell, or wrap it in a script to be run through bash.
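
A made-up but typical example of what breaks:

    # what the blog post says (bash/POSIX syntax, made-up URL):
    VERSION=1.2.3
    curl -LO "https://example.com/tool-$VERSION.tar.gz"

    # pasted into fish, the first line fails, because fish spells it:
    #   set VERSION 1.2.3
    # so you either translate it or run the original through `bash -c '...'`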


I would consider a person who doesn't use a shebang to be bad at their jobs.

Even the most basic of tutorials will include it even if it means nothing to the user.


That's not entirely weird to me.

For the longest time bash was in various states of disrepair on different Unixes. I could make it segfault pretty easily.

I wrote scripts that had to run across Linux, BSD, HP-UX, Solaris, and AIX at the time. So we relied on a ksh88 implementation on each. Whether it was AT&T, mksh, or pdksh, it was fine. Couldn't rely on 93isms, but basically anything POSIX was cool.

I used ksh93 interactively for the longest time. But then Linuxes stopped testing interactive use w/ their packages and it became unusable (looking at you RedHat). So I used bash interactively but still wrote those ksh scripts.

These days, I use zsh locally on macOS and BSD, but mostly write bash scripts for those and Linux. I still stick to the POSIX habits I have for portability. But the variances are much smaller now. The only one that annoys me is the "does sed -i do in-place?" game.
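
For anyone who hasn't hit it, the divergence looks like this (a sketch; the file name is made up):

    # GNU sed (most Linux distros): the backup suffix after -i is optional
    sed -i 's/foo/bar/' config.txt

    # BSD/macOS sed: -i requires a suffix argument, even an empty one
    sed -i '' 's/foo/bar/' config.txt

    # portable-ish dodge: skip -i and move a temp file into place instead
    sed 's/foo/bar/' config.txt > config.txt.tmp && mv config.txt.tmp config.txt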

I don't think my interactive shell and target scripting shell have ever matched 1:1 in feature set. And what I slap together interactively or for 1-off script, I would never do for a "real" script. And that's a thing that bugs me with a lot of shell scripts I see in the wild.

A "real" script should make very few assumptions, fail safely, be low on resource utilization so as not to risk impacting a business workload, and be maintainable. I write everything real as if it might be run thousands of times across thousands of hosts. Even when I didn't intend for it to be, I've caught it in the wild because someone borrowed what I did.
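
For what it's worth, a sketch of the kind of defensive preamble that attitude translates into (the specific choices here are mine, not a standard):

    #!/bin/sh
    set -eu                                  # die on errors and on unset variables
    umask 077                                # don't leak temp files to other users
    tmpdir=$(mktemp -d) || exit 1
    trap 'rm -rf "$tmpdir"' EXIT INT TERM    # always clean up, even on failure

    : "${TARGET_DIR:?TARGET_DIR must be set}"   # fail early with a clear message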

I do love shellcheck, because it teaches our newer folks good habits and they're used to that kind of feedback in other languages. It catches so many things that make me cringe. It's not perfect, of course, but pretty great.

When I started out, I worked 9-6 and the Unix sages in my group worked later hours. So they'd walk through my work on the RCS host and I'd come in the next morning to a critique in my email. If I didn't understand, I could hit their desk after lunch and they would patiently explain it to me. I love them for it, but it is nice to have tools without a 24-hour feedback loop.


Shellcheck is nowadays a must for writing decent shell scripts. It is like a centralized knowledge base of all those brilliant minds that did the sanity checking for you in the old days :)
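
A classic example of what it catches (SC2086; the variable name is made up):

    dir=$1
    rm -rf $dir/tmp      # unquoted: if $dir is empty this becomes `rm -rf /tmp`,
                         # and spaces or globs in it make things even worse
    rm -rf "$dir/tmp"    # the quoted form shellcheck tells you to use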


It boggles my mind that people use a language where

x = 0 doesn't work and x=$y is a security flaw


Sometimes it's easier to use tools you know will be available than to ensure your tool of choice will be available everywhere you need it to be. JavaScript, esp. pre v6, is another example that makes almost no sense when you ignore its ubiquity


> x = 0 doesn't work

Maybe it's just me, but I love how the assignment syntax can be used just for a single command: 'MANPAGER=cat man' is a lot nicer than '(MANPAGER = cat; man)'.

> and x=$y is a security flaw

When is variable assignment a security flaw?


if $y is untrusted input then you need to quote it, or you've introduced a shell injection vector into the script.


You don't need to quote variables on the right hand side of an assignment.
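
Word splitting and globbing happen when you use a variable unquoted, not when you assign it; that's where the actual risk lives. A quick sketch:

    y='foo bar *'
    x=$y                 # fine: assignments don't word-split or glob
    echo "$x"            # prints: foo bar *

    printf '%s\n' $x     # unquoted *use* splits into words and expands the *
    printf '%s\n' "$x"   # quoted use passes the value through intact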


I used fishshell for years, and the bottom line ended up being that it created enough mental overhead to be a burden because it diverged from the UNIX norm too much. Sort of like the colemak keyboard layout, sounds great in theory, in practice, not so much.


Fish user here. I've accepted having to drop down to bash here and there. No big deal for me.


I tried fish for a short while too and quickly realized everything depends on bash/dash. I then switched to zsh, which is mostly compatible. I installed Oh My Zsh and a fish-inspired theme with line autocompletion and coloring and never looked back. But I still use zsh for the interactive prompt only, still writing #!/bin/sh scripts for maximum compatibility between platforms.


I went through the same thing.

It also got on my nerves that scripts which did not identify themselves correctly as bash would run in [whatever hot new shell I'm running], often with bizarre results.


I don't know if fish counts as an alternative shell since it is still pretty close to bash and zsh, just with better defaults requiring less configuration (similar to i3wm with respect to "better defaults").

I just use it as interactive shell and for scripts I use `/bin/sh`. So I don't really run into a lot of problems, only sometimes with scripts that do not have a proper `#!/path/to/shell` declaration at the top.

("better defaults" is of course entirely subjective but for fish, i3 and Doom Emacs they mostly align with my preferences)


I’ve been poking around with Sway, coming from Gnome (which I mostly love). What got you into i3 vs a normal desktop, and how long did it take you to realize it was a good fit?


Phew, I've been using tiling window managers for 15+ years now so I don't really know. I've always been interested in trying out different window managers and UI/UX concepts.

When I started with Linux in the 90's it had FVWM as the default graphical interface, which was pretty different coming from Amiga, Atari ST and Windows 3.1. (They were all pretty different from each other.)

Of course I had to run Enlightenment with some rusty graphics when I discovered it, but I think WM2/WMX[1] was the first window manager where I realized I preferred the more minimalistic ones.

After trying out more WMs I ended up at awesome[2] which I ran for years. It had the best support for both floating and tiling windows. (I do like some floating windows now and then but never liked the "normal" desktop concept which is, paraphrasing Fred Brooks, more of a flight seat concept.)

Then WMs like xmonad started appearing and I tried those out but they're pretty much "tiling only" and I did not like xmonad's tiling windows approach. It's been too long since I used it and I'm not sure it is still applicable but it forced you to use specific layouts and the current window was always maximized.

So at some point i3 appeared (like fish shell and Doom Emacs) and when trying it out it by default already did most of the things that I liked and that I had to configure in other window managers, so that was nice. It was a pretty good fit right away.

I have used some other WMs like EXWM and StumpWM over the years because I'm both an Emacs and Lisp weeny but they're like less mature i3 clones so I've been back at i3 for a while now.

[1] http://www.xwinman.org/wm2.php

[2] https://awesomewm.org/


I've been using Xonsh[0] for a couple years as my daily driver and I swear by it. I'm also a Python dev so that's part of the appeal. It's billed as the "unholy union" of Bash and Python.

[0] https://xon.sh


Xonsh has also been my daily driver for years and years. Combined with direnv I have really not missed much of anything from other shells. So much stuff either works out of the box or has some little hacky wrapper to handle it.


Yup. I'm a huge fan of the vox plugin approach to python and venv management. No more virtualenv hacking the shell... it's a first-class citizen. Plays nice with pyenv too


This sounds nice enough that I'm considering installing xonsh just for python development


use it as an interactive shell first, not for shell scripts.

also, why would any dependency of your projects at work be implicit? all dependencies should be captured somehow, at the very least in some documentation.

at our company, we use shell.nix files to declare all of our dependencies, so everyone has the exact same software available on their development workstations.

it's not a completely trivial thing to do, but worth investing into it, to lower the chance of surprises, which will happen when you can afford them the least...

here is an example to get started:

```
with import (builtins.fetchTarball {
  name = "nixos-unstable-on-2021-05-16";
  url = "https://releases.nixos.org/nixos/unstable/nixos-21.05pre2893...";
  sha256 = "1kilrk0ipvldf4rrn6aq6v2bj2jfhbz33cjb9w9ngqmbmr2f94wd";
}) { };

let
  project-jdk = jdk11;
  clojure = callPackage ./deps/clojure { jdk11 = project-jdk; };
in mkShell rec {
  buildInputs = [
    coreutils
    cacert
    direnv
    nixfmt
    curl
    jwt-cli
    fd
    python3
    nodejs
    (yarn.override { nodejs = nodejs; })
    project-jdk
    clojure
    yaml2json
    modd
    devd
    overmind
  ];

  shellHook = ''
    export LC_CTYPE="UTF-8" # so tar and rlwrap won't complain
  '';
}
```

it shows how you would use your own custom package (for clojure, in our case) and how you would pin the state of the whole package repository to a specific point in time.

in the past I've used es, the extensible shell (https://wryun.github.io/es-shell/), this way, without bothering my colleagues with teaching them about it. they just git pull, press enter in a shell within our repo, and direnv pulls down any new dependencies.
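
in case it helps anyone reproduce that flow, the glue is just a .envrc next to the shell.nix (a sketch, assuming direnv's nix support):

    # .envrc at the repository root
    use nix          # direnv stdlib: build the environment from ./shell.nix

    # one-time, per clone:
    #   direnv allow
    # after that, entering the directory loads the environment, and direnv
    # reloads it when it notices the watched files have changed.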


> also, why would any dependency of your projects at work be implicit? all dependencies should be captured somehow, the very least in some documentation. .... Nix.

Because distributing software is hard, especially if things change. That's one big reason why Docker images will sometimes get used for development environments (like what VSCode supports).

Nix is nice, and is a pretty good way of capturing those dependencies. But it's also difficult to customise, often lacking in documentation, and not used very widely.

If you're not using something like Nix (i.e. most people), it's natural for dependencies to be quite implicit.


I understand that the industry largely does not acknowledge how big a problem implicit dependencies are, but since the OP is clearly aware of some issues with implicit dependencies, I was just surprised that they are so content with those issues. It clearly hinders the adoption of upcoming technologies, at least in the mind of the OP.

I'm not really an expert in the Nix expression language. I haven't even finished reading the Nix Pills, which are very practical documentation, yet I've managed to extract a lot of benefit from that shell.nix template I've shared. I've been using Nix for 3+ years now. Because of this experience I encourage others not to be afraid of trying Nix. It gets better with every release.

Docker is a halfway solution, and because of its memory and storage requirements it often strains average development workstations and internet connections.


Perhaps someone already mentioned this elsewhere, but nothing forces you to use only a single shell. A shell, after all, is just a running process. You can even launch it from another shell.

As a GNU screen user already, I can imagine having one of my default windows that load when I launch screen be nushell, if I were so inclined. I can keep the rest as my defaults (e.g., bash or htop or whatever else I want running there).
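
For example, something like this in a .screenrc would do it (just a sketch, assuming the nushell binary is installed as `nu`):

    # ~/.screenrc
    screen -t main 0 bash
    screen -t nu   1 nu      # one window running nushell
    screen -t htop 2 htop
    select 0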


There is something very important that forces me to use a single shell: Habit.


I tried and returned to Bash. There is something about using defaults.


> I cannot shake from my head the idea that buying into a non-standard shell will only work for personal projects in one's own workstation, but it won't fly too far for company work because you'd then be introducing an implicit dependency on your special snowflake non-standard runtime. Which is something we came to accept and assume for interpreted languages (Python, Ruby, Node, etc.) but I'm not yet ready to assume for shell scripts.

Is it so bad? I run arch at home, I would never do that for a production server. I use rust at home, but it's unlikely to get introduced at my work any time soon. I use elm for solo projects, and I've never once convinced my managers to even try it (and believe me, I tried).

Context switching is pretty normal, I wouldn't let that get in the way of trying something new and interesting. Personally I find elv.sh the sanest scripting language I've tried so far, and I wouldn't trade it back in for bash, even if I'm still forced to use it at work.


A case for trying Elvish:

I really like Elvish's emphasis on interactivity, its somewhat maximalist approach to interactivity-oriented builtins (like fuzzy filtering, its directory browser, etc.), and the fact that it's distributed as a single static executable.

I also like the developers, who are smart and kind people and generally pretty pleasant to interact with on GitHub or their Telegram chat (bridged to various other things).

None of this is a case against trying Oil, however, so perhaps it will not help your paralysis too much. ;)

NGS' readme also has a good comparison section listing alternative shells: https://github.com/ngs-lang/ngs#have-you-heard-of-project-x-...


You might be interested in chubot's comments, as he regularly responds to questions related to oilshell:

https://news.ycombinator.com/threads?id=chubot


> Which is something we came to accept and assume for interpreted languages (Python, Ruby, Node, etc.)

Tangential, but:

- "Node" isn't a language, it's a JS runtime

- There's no such thing as an "interpreted language", whether something is interpreted or not is a property of the execution environment, not of the language itself

- The commonly-used runtimes for JS at least are JIT compilers, not "interpreters", and this can be true for some Python/Ruby runtimes too


I didn't downvote your comment. But I guess some people did, possibly because you did that thing of stating several facts that are all true but none of them relevant. I'd swear I read that this phenomenon even has a proper name, but I cannot recall it right now.

(And for the record, I know very well the internals and technicalities of the words and names that I used: Node.js is the de-facto JavaScript runtime for the backend, popular to the point that people commonly refer to it as "programming in Node"; Python and Ruby are just languages, but the majority of their users run their programs with the default official interpreter, so through metonymy we can refer to either while actually talking about the other; etc. For economy of language and to avoid pedantry, I grounded my comment on the most generally perceived notion of those technologies. You see, just to avoid unnecessarily long explanations like this one.)


I use the fish shell and have for several years now.

A few years back I did a startup with a friend -- a cross platform mobile product but with a mess of complex open-source c++ dependencies (which we forked and customized often) -- and we made the decision as we hired developers to over-specify some aspects of the machine environment to make it as easy as possible to share our development workflows. Note that "development workflow" is much more than just "i can run the app in docker" -- but rightfully includes -- I can efficiently run things in debugger, I can run the performance profiling tools in realistic ways, I can quickly execute and test a change someone else is trying to make, I can quickly share a change I'm making with someone else and be confident they can make realistic observations about it, i can setup a fast repro loop for some specific bug or feature ... etc.

We did a pretty good job of getting the know-how needed for all those workflows distributed across the team reliably and with low effort, despite some complicated and weird build system requirements associated with our dependencies.

One of the big bang-for-buck tricks that helped more than I would've expected was to mandate that everyone had to use fish shell. This really reduces the number of weird effects from random things that people copy/paste into their bash initialization -- with the developer never imagining the strange knock-on effects that will one day cause a tool they don't understand to behave in a flaky way for a reason they will never figure out ...

The amount of terrible things that developers randomly accumulate in their bash initialization can hardly be overstated ... there's probably more risk of "weird problems" associated with Stack Overflow advice on bash initialization than with maybe any other topic ...

So I personally think standardizing a team or startup on an alt shell can be a potentially good idea ...

As another point of evidence -- I later headed devops at another startup that took the opposite perspective -- "use your own machine, configure it however you want, and figure out the dev workflows on your own" -- and it was extremely difficult to share knowledge about how to accomplish a lot of these basic workflows ... doing anything beyond the most basic ways of interacting with the codebase often involved a rather large amount of hands-on dev machine troubleshooting (can you send me your .bashrc and .profile and .bash_profile? ok type these commands ... ok now you can grab my branch and try this to have a look at the thing i'm trying to change ...)

fish is reliable and stable though -- I'm not sure about using a shell that's not been around for a while in this way ... but I'd support it, I think, if I already had personal experience that the shell was "reliable enough" ...


The title is somewhat misleading and makes it seem like this is a GitHub company initiative rather than a project just hosted on GitHub.

I've not seen this title format on HN before. Could it be changed?


I think the "nushell/nushell" part makes it clear which repo it is talking about.


I didn't find it clear, I also thought it was a GitHub project.


I found it clear it wasn't a GitHub project.


The point of a title is to be obvious to everyone (to the extent possible) -- the fact that we found it obvious doesn't remove their confusion.


It wasn't clear to me that any of the interpretations was, well… clear, and had to check the comments (I could have checked the repo too).


Really seemed like a GH project to me.


Yeah, I thought it was like "Github - Atom Editor"....


That's on GitHub in fairness. Left is logged in.

https://cdn.imgy.org/DOyx.png


I think just "nushell/nushell: A new type of shell" would be a clearer title.


I'm in that camp. I read the URL without issues


no.. I didn't find it clear..


The title is the actual value of the title html tag on the page


Heh, that's interesting: When you're logged in, the title tag changes to drop the "GitHub -" prefix.


SEO strikes again!


Considering people often search 'github project name' it probably makes sense.


It could go at the end just as easily like every other site though


I can see the confusion, but it didn't confuse me. If it were a GitHub product, I would expect it to be just a blog post, ironically, not actual code.


Same, I thought it was a GitHub project. There are often links to GitHub pages on HN and I don't think they generally have the "GitHub" prefix. It's not relevant anyway where the code is hosted.


I think it is very likely "disagreement" in this thread is simply based on people seeing different titles at different times. There are some possible causes:

1. The submitter may provide an initial title

2. HN moderators may adjust the title

3. GitHub may show different titles based on the user's login status (I'm not sure myself, I'm writing this based on what I read in another part of this thread)

Therefore, I make this request of HN commenters: Since titles on HN can change, please quote the exact title that confuses you.

More broadly, it may be useful to always quote the text you are responding to.


Right. It doesn't seem important where the project is hosted.


I got excited thinking it was an official Github project.


same here. I thought it was a GitHub project


Not only is the place the project is hosted totally irrelevant to discussion, it seems like it would make more sense for this post to link the project website (www.nushell.sh/) instead of the GitHub repo.


dang will most certainly change this title when he spots it. I have been tracking HN title changes and they follow a very predictable format. Here are some recent results,

      Universities have formed a company that looks a lot like a patent troll
  was Universities Have Formed a Company That Looks a Lot Like a Patent Troll
  
      Linux with “memory folios”: a 7% performance boost when compiling the kernel
  was Linux with “memory folios”: got a 7% performance boost when compiling the kernel
  
      Strategic Scala Style: Principle of Least Power (2016)
  was Strategic Scala Style: Principle of Least Power (2014)
  
      The rise of E Ink Tablets and Note Takers: reMarkable 2 vs Onyx Boox Note Air
  was The Quiet Rise of E Ink Tablets – ReMarkable 2 vs. Onyx Boox Note Air
  
      Big tech face EU blow in national data watchdogs ruling
  was EU court backs national data watchdog powers in blow to Facebook, big tech
  
      Emacs Love Tale by sdp
  was Emacs Love Tale by Sdp
  
      Future from a16z
  was Andreessen Horowitz goes into publishing with Future
  
      Richard Feynman’s Integral Trick (2018)
  was Richard Feynman’s Integral Trick
  
      The Tinkerings of Robert Noyce (1983)
  was The Tinkerings of Robert Noyce
  
      NIH study offers new evidence of early SARS-CoV-2 infections in U.S.
  was NIH Study Offers New Evidence of Early SARS-CoV-2 Infections in U.S.
  
      Time to sunburn
  was It's surprisingly easy to get a sunburn
  
      The home computer as a cultural object is physically vanishing (2007)
  was The home computer as a cultural object is physically vanishing
  
      Mackenzie Scott gives away another £2B
  was Billionaire Mackenzie Scott gives away another £2bn
  
      A pilot program to include spirituality in mental health care
  was Psychiatry Needs to Get Right with God
  
      What we learned doing Fast Grants
  was What We Learned Doing Fast Grants
  
      Joplin – an open source note taking and to-do application with synchronisation
  was An open source note taking and to-do with synchronisation capabilities
  
      Show HN: Influence, a Go-inspired 1-minute board game
  was Show HN: Influence, a Go-inspired 1min board game
  
      J Concepts in SuperCollider
  was J Concepts in SC (SuperCollider)
  
      Joplin – an open source note taking and to-do application with synchronization
  was Joplin – an open source note taking and to-do application with synchronisation
  was An open source note taking and to-do with synchronisation capabilities
  
      Forzak: A website that curates high-quality educational content
  was Forzak: A website that curates high-quality educational content on the Internet
  
      DraftKings: a $21B SPAC betting it can hide its black market operations
  was DraftKings: A $21B SPAC Betting It Can Hide Its Black Market Operations
  
      We’re no longer naming suspects in minor crime stories
  was Why we’re no longer naming suspects in minor crime stories
  
      Operation Midnight Climax: How the CIA Dosed S.F. Citizens with LSD (2012)
  was Operation Midnight Climax: How the CIA Dosed S.F. Citizens with LSD
  
      Modelplace: AI Model Marketplace by OpenCV
  was Modelplace, the AI Model Marketplace by OpenCV
  
      Don't just shorten your URL, make it suspicious and frightening (2010)
  was Don't just shorten your URL, make it suspicious and frightening
  
      Kuhn's Structure of Scientific Revolutions – outline (2013)
  was The Structure of Scientific Revolutions
  
      How Indian Zoroastrians helped shape modern Iran
  was An Indian Religious Minority Shaped Modern Iran
  
      RFC for 700 HTTP Status Codes (2018)
  was RFC for 700 HTTP Status Codes
  
      RFC for 700 HTTP Status Codes (2012)
  was RFC for 700 HTTP Status Codes (2018)
  was RFC for 700 HTTP Status Codes
  
      How to Be a Stoic (2016)
  was How to Be a Stoic
  
      Researchers fear a scenario in which smart speakers feed sleepers subliminal ads
  was Are advertisers coming for your dreams?
  
      Google Messages end-to-end encryption is now out of beta
  was Google Messages end-to-end encryption no longer in beta
  
      Stabel: A pure concatinative programming language, now with modules
  was Stabel v0.2.0-alpha: A pure concatinative programming language, now with modules
  
      Recording of the week: A Yanomami ceremonial dialogue
  was Recording of the week: A Yanomami ceremonial dialogue
  
      Stabel: A pure, concatenative and statically typed programming language
  was Stabel: A pure concatinative programming language, now with modules
  was Stabel v0.2.0-alpha: A pure concatinative programming language, now with modules
  
      Technology Saves the World
  was Andreessen: Technology Saves the World
  
      Why bother with old technology? (2013)
  was Restoration of vintage computers – Why bother with old technology?
  
      What Happens If an Astronaut Floats Off in Space? (2013)
  was What Happens If an Astronaut Floats Off in Space?
  
      Legal expert says Tether and Binance Coin are likely picks for SEC lawsuit
  was Legal expert says Tether and Binance Coin (BNB) are likely picks for SEC lawsuit
  
      If you think psychological science is bad, imagine how bad it was in 1999
  was If you think Psychological Science is bad, imagine how bad it was in 1999
  
      Six charged in Silicon Valley insider trading ring
  was Six Charged in Silicon Valley Insider Trading Ring


Need some description up front, not in the 5th section ("Philosophy" https://github.com/nushell/nushell#philosophy). Some ideas:

"powershell for unix", "structured pipes", "pre-parsed text", "small, useful, typed tools loosely coupled."



Am I missing something cool by not looking into PowerShell for Linux? Anyone using it could chime in and share how it improves their workflow? What kind of tools does it bring to the table?


Disclaimer: I use PowerShell on Windows, I've never tried it on Linux, but I don't see why its advantages wouldn't transfer. That being said, I think its main advantages are:

- Typed object streams, which give you stuff like autocomplete and IDE support, and mean there's generally no need to muck about with sed/awk/xargs

- Very fast: it's absolutely fine to read a file line by line and parse it in a script, whereas doing the same in bash/awk is a ton slower than clever use of cat/grep/wc (and a myriad of other tools)

- Because type information is retained, it's a lot easier to figure out what a given script does, without having to know what the output of a given tool looks like given its particular command-line switches

This is just my opinion, but PowerShell is a lot more geared toward people like me, who only write a script once in a blue moon/muck about with the build scripts occasionally, while Bash is more geared toward veteran sysadmins who live and breathe this stuff.


Nice summary. It leaves me wondering, though: is the philosophy the same as in the Unix world? As in, little independent tools that do one and just one thing. Would those tools also need to be modified to add compatibility with the same typing system that is used by the shell?

Or does it follow the opposite mentality, where the shell comes with batteries included?

I'm thinking of some arbitrary example, like downloading a JSON and parsing it to obtain some nested sub-property. In Bash world you would use the shell to execute at least something like `wget` to do the download part, and then run `jq` to parse the JSON and select the desired property. Both of `wget` and `jq` are independent programs, installed separately, with their own input & output channels, that you combine together by using the shell itself.

How would this work with PowerShell? (feel free to use some other example that might be better suited)


Powershell commands (called "commandlets" or "cmdlets") are little .net (or .net core) programs/methods which accept and return either objects or arrays of objects, rather than plain text.

powershell offers cmdlets which let you sort and filter these result objects based on object property values rather than sorting on plain text values.

Obviously when printing to stdout or stderr, THAT bit is plain text, but until that very last step of displaying in the terminal, powershell results are objects.

So, that gives you a form of type safety, which isn't strictly possible in text-only shells. Powershell uses reflection to inspect the objects since the exact type of a response may have never been seen before. You can write your own cmdlets which return your own types and you can modify types returned by cmdlets using operators like 'select'. So, it's type-safe but not like everyone is used to. Powershell cmdlets always return objects (or errors, I guess), but the structure of those objects isn't always known until runtime. You can still use those never-before-seen types with the standard operators like sort and select, too.

Powershell also offers output conversion cmdlets which let you output to CSV or a textual table or a list, which is helpful when the next step in your pipe chain is a tool that expects textual input or a csv file. One can also write their own output formatters, of course.

In those ways, powershell and nushell appear to have the same goals. I haven't looked at nushell any more closely than it would take to notice this, so there may be other similarities I haven't noticed, yet. I'm sure there are many differences that I haven't noticed yet, as well.


Thank you very much, it's now clearer to me how it works.

Regarding the objects themselves, the way you describe them makes me think of Javascript objects & arrays: you don't need to know the exact shape of an input object, as long as it has the specific properties that you want to work on. Same for arrays: regardless of their content, the language allows you to use the standard ops like map() or find() (and it's then a matter of you providing adequate functions to access the contained elements)


In powershell you would use built ins.

To download json over rest you would use Invoke-RestMethod or irm.

To convert json to objects you would use ConvertFrom-Json.

To select properties from objects you would use Select-Object or select.

Here is a concrete example that I wrote for the rustlings install script: https://github.com/rust-lang/rustlings/blob/main/install.ps1...

Note that I used Invoke-WebRequest instead of irm here.


It looks like: `curl <url> | jq .tag_name`. Note: curl & jq are independent programs (as long as `curl` produces JSON text with tag_name in it, the command works, no matter what else changes).

If ConvertFrom-Json & Select-Object were not built in, how much information about each other would they need in order to cooperate? Can an unrelated change in ConvertFrom-Json's output break the corresponding Select-Object command? How entangled are they?


Curl to python is actually what happens in the Linux install script [0].

To your point, all cmdlets adhere to an interface so they always return and receive typed data. They don’t really need to know anything except that they’re receiving or giving powershell objects.

Non built in commands (native binaries) just pipe a stream of bytes just like any posix shell.


ConvertFrom-Json converts json text into a PowerShell object with properties derived from the json.

Select-Object (in this instance) selects a property from the object.

An unrelated change in ConvertFrom-Json wouldn't change its basic nature, which is to convert json into something you can manipulate with other PowerShell cmdlets.


And then I guess the shell itself has mechanisms to easily attach or register new commandlets, to allow for easily adding custom object processing?

I really like the consistency that those commands have: ConvertFrom-<ConverterFormat>.

No built in converter for the format you need? I'm just speculating here, but I guess you just need to implement this or that Interface and there you go.


Yes you can write your own cmdlets in either C# or powershell that you can install via modules [0]

You can also just write a function in your profile.

[0] https://docs.microsoft.com/en-us/powershell/scripting/develo...


You can define your own function and use it in your pipeline yes. You can have your function take pipeline input just like the built in cmdlets.


I use it a bit on Windows servers because it integrates well with other Microsoft tools.

On Linux... it seems both redundant and just plain weird for weirdness's sake. There are literally dozens of alternatives on Linux that have a sane syntax and are likely preinstalled.


Powershell is a pretty neat scripting language, especially if you manage Windows boxes or have lots of .net stuff around.

It's not a good interactive shell environment though. The commands are way too verbose and the syntax feels clunky for keying in.


I disagree. Yes, the syntax is verbose so that you get sensical names instead of things like 'awk' and 'grep', but that isn't a big deal because a) tab complete[0], and b) user-definable aliases.

[0] tab complete works on cmdlets, switches, and variables.


And if one misses the short incomprehensible aliases, the common commandlets have those by default anyway. `cat` is `gc`, `grep` is `sls`, `ls` is `gci`, ...

Same for switches and parameters. You don't have to miss `rm -rf foo/`, it's just `ri -re -fo foo/`


I don't mind the verbosity at all, but PowerShell in my experience is painfully slow for interactive use, if you want the kind of niceness you can get with Fish, Zsh, or Elvish (fuzzy filtering, smart-case completion, any-order/substring completion, etc.).

The module system is also not great for distributing little configure snippets like Oh My Zsh or something, it's clunky and over-complicated for that.


> [0] tab complete works on cmdlets, switches, and variables.

Isn't this how modern tab complete works everywhere? bash 5.1 (the shell I happen to be running right now, and which I mention because I think of its out-of-box tab completion as pretty basic) tab completes variables.


I don't use it as a shell on Linux/macOS (I don't think it's even really supported yet), but as a scripting environment it's been ok: the language is sane enough, you can use the .NET BCL if you need to and its behavior is consistent across platforms.

We use it for automation in our .NET codebase (which is being developed on both Windows and Linux) instead of bash scripts because of that and the fact it's preinstalled in the .NET SDK Docker images


It's supported on Linux. It's one of the first packages I install on a fresh box lately.


What do you use it for generally ?


My default shell. Everything from manipulating XML to system administration. I'm a big fan of the object-based pipeline.

Quick examples:

- Accelerators for data and conversion tasks. For example, reading a file with `Get-Content` (or frequently in the shell, the alias `gc`) and parsing and converting into a traversable, queryable XML: `[xml]$myXml = gc my-file.xml`. Then you can go `$myXml.customers.firstName` or use an XPath query.

- To get the difference between two dates, `[DateTime]::Now - [DateTime]'2021-01-01'` returns a `TimeSpan` with the elapsed hours, days, etc. It parses the date string for you because of the `[DateTime]` before it.

- The cmdlets for system administration. For example, exporting processes that use more than 200 MB of RAM and their image paths to CSV: `Get-Process | where { $_.WorkingSet -gt 200000000 } | select ProcessName, Path | Export-Csv memory-hogs.csv`


I used PowerShell as my main shell for a few years (before WSL existed). My experience is that it's extremely tedious having to relearn every command 'the PowerShell way' if you already know your way around standard Unix commands (and I wasn't exactly a Unix guru either). When WSL dropped I switched and haven't looked back.

The only PowerShell tool I did love was posh-git [0]. Thankfully, someone has ported it to bash/zsh as posh-git-sh [1].

[0] https://github.com/dahlbyk/posh-git

[1] https://github.com/lyze/posh-git-sh


I don't think so. Although, there does seem to be a small trend, especially with golang utilities, to shift from text toward json - or rather a "worst of both worlds" - docker is a prime example of this, eg:

  docker network inspect webservers -f '{{ range.Containers}}{{.IPv4Address}}{{end}}'
Once tools start down this path, I think powershell, or any shell built around primitives that work on tree structures, makes more sense (ie: rather than cut, awk, grep-style tools for printing named fields etc).

Must admit, I am no big fan of ps syntax - too verbose for interactive use - too unfriendly for scripting..


If you spend all day immersed in PowerShell on Windows then it might make sense to leverage that familiarity on Linux.

Also if you are managing Azure from Linux or macOS.


Powershell has some interesting ideas about passing around objects rather than just text.

The unix way of text streams and tools to manipulate them is so ingrained that I haven't bothered to look into it in detail, but I hear some people are fans.


PowerShell from Microsoft looks too alien on Unix. Just looking at the syntax I already feel uncomfortable.

It's hard for me to criticize it because I haven't used it, but it looks like it was designed by a committee and overdesigned. I.e. it wasn't created from the need, like someone at Microsoft wanted to automate his work and created PS to solve his problem. It looks like some boss decided: "They have shells, so we should have it also. But we will make it much cooler. So, let's gather a committee of 500 of our best managers and let's decide what features it should have".

It looks artificial and inconvenient to me unlike Unix shells. Maybe I'm wrong. But I'm pretty sure that even syntax of PS makes it harder to type the commands. Unix commands are very short (many are 2-4 letters long), all lowercase and do not contain characters which are hard to type (e.g. `-`).


PowerShell was started by a single person, Microsoft's Jeffrey Snover. As the project progressed other designers came on board but Snover remained (and remains) as Chief Architect. The design-by-committee allegation isn't being fair to him.

Part of why PowerShell may seem not quite Unix-like, is Snover didn't just look at Unix as an influence. Snover's professional background included experience with IBM OS/400's CL shell and OpenVMS's DCL, and his experience with both systems influenced PowerShell's design.


Once you start using PWSH, the design decisions are so obvious and make so much sense, that it is obvious Jeffrey Snover did have vast experience with variety of shells and languages.

The idea to tie it with .NET and COM+ on windows is the best a shell has ever done, though someone noted in another thread that it is an old idea from XEROX mainframes (not sure what the name was).

if you want to imagine this in terms of unix/linux - it would be something like having all libraries' APIs at disposal directly from a shell that passes structured objects. or python's REPL being more shell-usable or JVM having a shell that provides access to all classes in a click of ENTER.

major downside of pwsh is that you can feel it being slower than expected due to the way objects are passed around, but I really expect this will be solved at some point with future releases as PWSH as language is still being developed so some concepts and internal architecture decisions perhaps change a lot.

nushell is taking this idea to a fair level, but really, there is a reason to do some dev/devops work in PWSH, because it will heavily impact all future shell development even if it is eventually superseded by something better.


If you want that same ability from bash, it's available [0]. However, it interacts with C FFI, so it's lower level.

[0] https://github.com/taviso/ctypes.sh


Sorry, but it's absolutely not the _same_ experience; it's much harder and less integrated. Also, let's not forget for a minute that bash passes around characters, not objects.

There's a very long road for any shell to get where pwsh is within MS's ecosystem (and even within .NET Core/GNU ecosystem). And I can easily imagine .NET Core wrapping DBUS and other COM-like facilities available for GNU OSs.


BTW Jeffrey Snover said every PowerShell feature was supported by a business case. A principled approach, I thought, if mercenary.

But what's the "business case" for iteration, conditionals, addition - apart from "everyone needs these"? Though these probably weren't the features he was thinking of.


I don't think I understand the question.

You think iteration, conditionals, addition are not needed in shell scripts? It would be a very unusual position indeed.

Or could it be that you are not familiar with the expression "business case"? (Sorry if this is off base! but the word "mercenary" in your post also kind of points this way).

"Business case" has very little to do with business as commercial enterprise...


The only thing I can think of in PowerShell that definitely seems to be "design by committee" is the whole ScriptExecutionPolicy nonsense.


I’m sorry to say this but this is a very uninformed comment with way too many biases without any backing.

> I.e. it wasn't created from the need, like someone at Microsoft wanted to automate his work and created PS to solve his problem

Interestingly enough, it is used for automating Windows configuration.

Re last paragraph:

UNIX commands are random letters that are like that due to historical reasons — if you don't know one exists, you can't find it. And when you have to use {} in PS you are dealing with something you would do with awk/sed with even more arcane syntax — PS is pretty readable with some knowledge of any PL. Also, commands are Verb-Noun based, so you can get around and find commands even offline (though Noun-Verb would probably have been a better decision). Due to the fixed syntax they can be shortened very well. Also, due to every "command" having the same conventions, arguments are properly done (no tar xf or -xf or --extract), they are reliably autocompletable without black magic, and they are also autocompletable by giving enough letters to make it unambiguous. So while I am not particularly fond of Microsoft, powershell is better than bash (though frankly, what isn't?) in every conceivable way.


> So while I am not particularly fond of Microsoft, powershell is better than bash (though frankly, what isn’t?) in every conceivable way.

There are fair criticisms to be made of bash... but this isn't a fair comment either; I use PowerShell a lot; I've written hundreds or thousands of lines of PowerShell glue for scripts and devops pipelines and all kinds of things.

It’s not very good.

It is one of those things that seems like it’s a great idea; everything is an object, don’t serialise between jobs, tab completion on related objects, verb-noun commands... it reads like a feature matrix.

The reality is less ideal.

It’s verbose, the commands are arcane, and the verb-noun thing breaks down into meaningless pairs in many cases, particularly in the azure commands, or worse sql server commands.

Maybe, to be fair, there is a core of elegance in there, but powershell displays a fundamental failure:

When an application doesn’t have to do the hard work of making a good cli, the resulting commands in powershell are actively user hostile.

It's a pattern I've seen over and over again; and things like the az-cli are a tangible acknowledgment from Microsoft itself that the results have been substandard, and rejected, overwhelmingly, by the broader community in favour of arguably inferior tools (e.g. bash).

So... you might argue “network effect”, but I don’t buy it; powershell isn’t friendly; not to new developers, not to “clicks ops” developers, not to Unix developers.

It’s firmly a middle ground “no one wants to use it if they can avoid it” product.

Bash scripts, now there is literally a hell on earth... but bash itself? Works fine.

/shrug


> It’s firmly a middle ground “no one wants to use it if they can avoid it” product.

No way, I love PowerShell. It was my first choice for the bulk of a B2B Windows integration product. Part of it's a C# desktop app; the rest is PowerShell scripts that customers can edit to taste.

I reach for it all the time in projects. The pipeline is so clean to work with. Once you get used to how it handles collections, the cmdlets are very intuitive, and PowerShell Gallery has a large selection available for download.

Objects instead of strings is big by itself, of course. Then you get the .NET standard library right in the shell to interact with them. Great for parsing dates, numbers; a powerful regex engine; stream manipulation; pathing functions; the ability to write and execute C# in the shell; etc.

The cmdlets for data manipulation have gotten very good, too. The CSV cmdlets used to be unintuitive because they exported type data, but that's now off by default. `Import-Csv` and `Export-Csv` work with objects that you can easily manipulate with the set operations cmdlets. It feels very much like LINQ.

Same with `Invoke-RestMethod` (`irm`) when interacting with APIs. It deserializes JSON into objects automatically. You can then easily filter or transform the result.
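
A rough sketch of what that looks like in practice (the file names, URL, and property names are placeholders):

    # CSV rows come back as objects, not text
    Import-Csv .\users.csv |
        Where-Object { $_.Active -eq 'true' } |
        Export-Csv .\active.csv -NoTypeInformation

    # Invoke-RestMethod parses the JSON response into objects you can filter
    Invoke-RestMethod 'https://example.com/api/items' |
        Sort-Object -Property name |
        Select-Object -First 5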

There's a learning curve for sure, but once you get past that, it's a very good shell. I feel like it's one of the best things to come out of Microsoft.


I didn't mean that PS is the ideal shell; it has plenty of warts (I did mention the Verb-Noun thing), and bash's syntax does have a good subset, namely piping and, to a certain degree, redirects. But variable substitution, the lack of proper math support, and everything-is-text are terrible, and they do come up with the arguably rarer, more complex commands. Also, having a convention with quite a few edge cases where it doesn't work well may still be better than not having any convention. And an application not providing a good CLI is not shell-specific at all; the Linux world has it better because a good CLI is prioritized, even though git is often touted as having a terrible interface.


This comment would make it into awesome-space, for me at least, if it had some examples of the issues you mention.


Yes, I haven't used PowerShell. And I agree that my comment is uninformed.

It's just that I feel uncomfortable even looking at the PS syntax. Maybe it's idiosyncratic. Maybe I've spent too many years in Unix shell.


If somebody tried to introduce the Unix shell syntax today, they would be laughed out of the room. Nobody would be able to take something so utterly nonsensical seriously.

The only reason anyone accepts it is that it has always been there. But if you actually look at it critically, it is hot garbage.


Idiomatic Powershell is definitely verbose, but you get used to it (and you can disregard if you want). The verbosity can be used to make it more human readable, if done right. I use Powershell on a daily basis - the biggest thing that people have noted is passing objects, instead of text. At first it was weird, but now I find it really helpful. It makes it very easy to tie components together, without having to slice and dice text.

Another great feature is named parameter support. It's so much easier to deal with parameters than in other languages I've used.
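
A minimal sketch (the function and its parameters are made up, just to show the calling convention):

    function Copy-Logs {
        param(
            [string]$Source,
            [string]$Destination,
            [int]$KeepDays = 7
        )
        # ...
    }

    Copy-Logs -Source 'C:\logs' -Destination 'D:\backup' -KeepDays 30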


I've used it and I can relate to the comment. Maybe your assumptions are off in there, but I think the main point you were trying to make is that it's complicated. It's so complicated that you really need a REPL to be able to work with it efficiently. It's a bit like how programming Java is simple if you do it in IntelliJ IDEA (or, back then, in Eclipse), but in a plain editor it's impossible to remember all the boilerplate.


Can you write a bash one-liner that gets the free memory of your system? Is the awk black magic better than selecting an item of an object? Of course new things will be harder to write, but it's a people problem, not something inherent to PS.
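
To make that concrete, roughly (exact columns/properties depend on your system; both report kilobytes by default):

    # bash: scrape free's text output by column position
    free | awk '/^Mem/ {print $4}'

    # PowerShell: read a property off an object
    (Get-CimInstance Win32_OperatingSystem).FreePhysicalMemory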


I find cut easier to use than remembering awk syntax. So I get where you're coming from, but it's just a tad disingenuous, don't you think? Especially since `free` (see `man free`) is definitely easier to remember than Get-Whateverfunctioncall is.


Is it easier? Because I would expect man free to go to the C stdlib.h definition.

You’re just familiar with the former and not the latter currently.


Your assumption is wrong. Why would you double down on a bad argument? Don't you have a linux environment to type man free in before making that claim?

It's just an overall bad argument. I don't see how `Get-CimInstance -Class Win32_OperatingSystem` is in any way memorable

https://www.google.com/search?q=man+free


How is expecting man free to point to the stdlib function a bad argument? man malloc goes to the function so why shouldn't man free? How would anyone just know that the command free exists _without already being familiar with a linux environment_ ?


man free for me shows the BSD library functions manual, maybe it's different in Linux


It's the same on Linux, depending on what manpages you have installed. free(1) is the binary. free(3) and free(3p) are the C function.

If you have more than one installed, `man` might decide to show the first one or it may interactively ask you to pick one, depending on whether it's been configured to be "posixly correct" or not.
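
If both are installed you can also ask for a specific section explicitly:

    man 1 free   # the procps utility
    man 3 free   # the C library function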


procps may not be installed by default on BSDs or VMS or whatever other *nix flavour the person has. But then again quite a few of those systems also don't use bash as the default shell (or at least didn't use to), so when the person above asks about writing a bash one-liner for it, it's safe to make some assumptions about the target audience.


Is the writing part really what we should focus on in scripting, though?

I personally would've put much more weight on the ease of reading and understanding the code.

Personally, I think PS just came too late, so most people (me included) are too used to the way the unix shell works/feels. It would have to be just straight up better in all regards to displace it, but it's more like a different flavor entirely. It is, however, a very interesting take on scripting; in my mind it's more comparable to a Python REPL than to bash, though I haven't really used it much.


I think bash is only readable because pretty much everyone is already familiar with the syntax. Given someone with no experience, they would probably have more success understanding what a Powershell script is doing.


> personally, i think that PS just came too late, so most people (me included) are too used to the way the unix shell works/feels.

I'm getting confused here. Isn't that literally what I said?


Wait, are you seriously trying to tell me that bash is more readable than PowerShell? Excuse me, but what alternate timeline did you accidentally stumble in from?


I did not say that, no.

I said that a lot of people (me included) are more used to Unix shell scripting. If PS had been released 30 years ago, things would've probably been different. People have, however, already become comfortable with Unix shells, so it needs significant improvements to convince them to switch their tool of choice. While PS does have improvements, they're not significant enough to relearn everything.

To me, Unix scripts are more readable, though this is simply because I've been writing them for so many years now, while I barely touched any PS code.


I very much disagree. There's a reason this is funny:

https://xkcd.com/1168/


Sorry for calling you out on it, I often write opinion-pieces without much backing as well :)

I was initially very biased against PS as well, but then had to learn it and I found the design quite genius. At one point I even tried to use it as a daily driver on Linux, but my muscle memory is UNIXy, and there are a few edge cases that are harder in PS than in bash, so I had to revert back. But we should strive to keep an open mind even about Microsoft technology :)


> They have shells, so we should have it also

The thought process was more like "You can either hire a dude to do it by mouse in MMC or you can get someone with a masters in CS to do it in DCOM. We need a middle ground."

Source: was in the room


Can you elaborate?


Not OP, but: MMC = Microsoft Management Console, the collection of "snap in" GUI config tools for all the advanced features beyond Control Panel.

MMC can control remote computers, but I believe only one at a time? Unless it's something through Group Policy.

Now, the APIs exist to do all this remotely (DCOM), but good luck discovering what they are! And the minimum level of program you'd need to call them would be a C# project.

So, Microsoft knew that UNIX systems had an API that was interactive, scriptable, discoverable, and composable, all the things which CMD and MMC and DCOM aren't. So they decided to build one. And make it object-orientated. It's actually pretty good for the administration use-case, but for more general work it feels weird. And it doesn't interact with text files anywhere near as well as the shell does.


Some snapins can connect to multiple servers/computers at once, such as the DHCP MMC snapin. Others can't, like Event Viewer.

IMHO Microsoft went all-in on the object-oriented paradigm and tuned the operating system to work with C++ ABIs, expecting administrators to use tools that use Microsoft- or vendor-provided DLLs.


Everything you say is accurate, but it doesn't really relate to his claim for the reasoning behind the decisions behind Powershell. That's what I was referring to when I asked for a more detailed elaboration.


I worked in Windows Server on admin tools in the late 90s and 2000s, and watched the whole thing happen. I heard it summed up like that from Ballmer himself. I don't know what your alternative idea is - you think Microsoft was jealous of Linux and spent a hundred million dollars making a new shell just so they could feel proud?


I'm sorry, are you a shell user? Since when is ~, for example, easy to type and - hard to type? LOL.

First of all your backstory is wrong, it was a 1-person project. Then, about the features, all those long, explicit commands have short versions.

Secondly, I agree that Powershell feels a bit alien on Unix, which will probably be the main reason its adoption will never be amazing.


For a long time the Windows equivalent of ~ was %USERPROFILE% which was definitely harder to type than ~. Eventually they introduced the shortcut on Windows too.


Same for me, but looking at nushell, I'm not ruling out that they'll end up with something very similar to it. (I haven't used Powershell much to be honest.)


powershell for unix

In general I really dislike "[some other product] for [a different use case]". It might be accurate but many potential users might never have heard of PowerShell or if they have only have a vague idea of what it is.


I find it hard to believe that someone looking into alternative shells would have never heard of PowerShell. I get that Windows isn’t exactly most people’s OS of choice, but the existence of PowerShell is widely known.


I know that fish shell exists but I have absolutely no idea what distinguishes it from bash or zsh, both of which I use regularly. The extent of my knowledge comes from seeing the name in a HN title.

It's not hard to imagine the same for PowerShell.


The philosophy part reminds me of `jq` with its functional style processing:

1. parse a json value from stdin and set it as the initial result

2. for each function, apply the function to the result, and set the output as the result for the next function.

3. The final result is pretty printed on stdout.

https://codefaster.substack.com/p/mastering-jq-part-1-59c
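
A tiny example of that filter-chaining style (input made up):

    echo '[{"name":"a","size":3},{"name":"b","size":1}]' \
      | jq 'sort_by(.size) | .[0].name'
    # => "b"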


It used to have a motto that was something like "a shell for the github age", and I guess they never picked something more appropriate.


Kudos for writing something in rust and not appending "written in rust!" everywhere.


Why is everything written in rust nowadays? Apart from the safety it provides.


> Apart from the safety it provides.

Sum types (and pattern matching), first-class results, ownership, good performance and a rich ecosystem turn out to be quite nice for a general purpose language once you’ve passed the hurdle of the borrow checker, even ignoring all the zero-cost abstraction stuff.

Also the really solid and safe multithreading. You might struggle a bit getting all your ducks in a row, but past that you’re reasonably safe from all sorts of annoying race conditions. And rayon is… good.

Not sufficient for GUI applications where the rust experience remains not great, but for CLI or even TUI?

I’ll misquote something i recently saw on Twitter because it’s very much my experience: “and once again I end up riir-ing a one-off tool I wrote in python and wonder why I didn’t just do that from the start”.


There must be something else, because most of what Rust brings to the table is what functional languages have been providing for ages, just with rebranded names.


Existing functional languages had their own issues:

1. Haskell: had to deal with cabal hell

2. Scala: Java toolchain, VM startup time, dependencies requiring differing Scala versions.

3. F#: .NET was considered too Microsofty to be taken seriously for cross platform apps.

4. OCaml: "That's that weird thing used by science people, right?" - Even though Rust took a decent number of ideas from OCaml, Rust got validated by early users like Mozilla and Cloudflare, so people felt safer trying it.

5. Lisp: I don't think I need to retell the arguments around Lisp. Also a lot of the things Rust touts as FP-inspired benefits around type systems really aren't built into lisp, since it's dynamically typed, these come more from the Haskell/Scala school.


> OCaml

Also as much multithreading as Python (i.e. only for I/O).


There is truth to that, the "something else" is a different set of trade-offs for some other things that have usually been associated with FP languages.

Rust feels like the love-child of part of OCaml (for the sum types), part of C (very small runtime, ability to generate native code, interop with C libs, etc.), part of npm (package manager integrated with tooling, large discoverable list of libraries), etc.

Borrow-checking seems a bit newer-ish, but I'm pretty sure there is an academic FP language that pioneered some of the research.

No-one is planning to give Rust the medal of best-ever-last-ever language any time soon.

And none of that is a "bad thing" (tm.)


In my case, I use it because it is dead simple to get a standalone, lean, fast, native executable (on top of the other functional programming features). Cargo is a huge part of what I love about rust.


I have a great example. We have 100s of Markdown files. I needed a link checker with some additional validations. Existing tools took 10-20 minutes to run.

I cooked up a Rust validator that uses the awesome pulldown-cmark, reqwest and rayon crates. Rayon let me do the CPU bits concurrently, and reqwest with streams made it dead simple to do the 1000s of HTTP requests with a decent level of concurrency. Indicatif gave me awesome console progress bars.

And the best part, the CPU bound part runs in 90ms instead of minutes, and the HTTP requests finish in around 40 seconds, primarily limited by how fast the remote servers are over a VPN halfway around the world.

No attempt made to optimise, .clone() and .to_owned() all over the place. Not a single crash or threading bug. And it will likely work one year from now too.


Reading your comment made me realize another thing: Using rust often feels like the language is a successful attempt to take the best parts of a lot of other languages and put them together into a single, rational collection of features. Most of what's in there isn't new, but it all fits together well in one place so I don't feel like I have to make a devil's bargain for important features when I start out.


Most good languages seem to boil down to this.


Are the checks somewhat time-stable? Couldn't some of the checking (and network requests) be avoided by caching? For example by assuming that anything OK'd within the last hour is still OK.


> most of what Rust brings to the table is what functional languages have been providing for ages

In a relatively familiar package & paradigm, with great package management (Haskell is the best of the functional languages there and it’s a mess), and with predictable and excellent performance.


That thing is runtime performance.


I think Cargo doubling as both build tool and package manager is a big factor here. The combination of Cargo + crates.io makes it very easy to write some Rust code and make it available to anyone with Cargo on their system. Either by `cargo install nu` to build it from sources on crates.io or `cargo install` inside the git repo to build my own customized version. No more waiting for distro packagers to keep up.

Putting this together makes for a nice environment to distribute native system tools. And in the last few years we've seen a wave of these tools becoming popular (ripgrep, bat, fzf, broot, exa and some others).


Thank you for listing these. I had a look at them and they are really useful utilities. Now the challenge is to try to change the muscle memory that relies on the traditional unix commands!


How are you measuring popularity?


Popularity in this context is my personal experience seeing these tools pop up over and over again in the media that I consume.


This is insanity and hubris.

"It's so easy!" yes if you have the language and tools de jour installed and up to date. I want none of that.

It was Node and npm.

Then Go.

Now Rust and Cargo.

Oh, I forgot Ruby.

And all this needs to be up to date or things break. (And if you do update them then things you are actively using will break.)

I don't need more tamagotchis; in fact, the fewer I have, the better.

What happened to .deb and .rpm files? Especially since these days you can have GitHub Actions or a GitLab pipeline do the packaging for you. I couldn't care less what language you are using; don't try to force it down my throat.


Many of the popular rust cli tools like ripgrep, exa, delta, etc -do- have package manager install options.

How dare people writing cli tools not package them conveniently for my distro. The horror of using cargo instead of cloning the source and praying make/meson/etc works.

Feel free to package and maintain these tools yourself for your distro if you want.


I don't know about you, but in my experience, getting Cargo to work has been a much bigger pain than make/meson et al.


I've never had any issues with cargo. I use rustup to manage my rust toolchains and cargo for the most part.


> What happened to .deb and .rpm files?

The problem with those is they require global consistency. If one package needs libfoo-1.1 (or at least claims to), but something else needs libfoo-1.2+, we can't install both packages. It doesn't take long (e.g. 6 months to a year) before distro updates break one-off packages.

I think some people try hacking around this by installing multiple operating systems in a pile of containers, but that sounds awful.

My preferred solution these days is Nix, which I think of as a glorified alternative/wrapper for Make: it doesn't care about language, "packages" can usually be defined using normal bash commands, and it doesn't require global consistency (different versions of things can exist side by side, seen only by packages which depend on them).


I'm the parent that you replied to. In my eyes there is nothing wrong with .deb and .rpm files. In fact, many of these tools are available for download in these formats and some others (docker, snap, etc). And it is good that they do but it comes with extra work to setup the pipelines/builds.

The concept of a language-specific package manager distributing not only libraries but also executables isn't new. Go get, ruby bundler, python pip, cargo and npm all have this feature.

I was originally answering a question about why we suddenly see all these "written in Rust" tools pop up. I think that is partly because Cargo provides this easier way to distribute native code to users on various platforms, without jumping through additional hoops like building a .deb, and setting up an apt repository.

Sometimes you just want to get some code out there into the world, and if the language ecosystem you are in provides easy publishing tools, why not use them for the first releases? And if later your tool evolves and becomes popular, the additional packaging for wider distribution can be added.


Ease of use and familiarity are different things. Tooling around rust really is easy, when the alternatives (for equivalent languages) are CMake, autotools, and the like.

As it stands, I can brew install ripgrep and it just works. I don’t need to know it’s written it rust. If, for some reason, homebrew (or whatever other package manager) is lagging behind and I need a new release now, cargo install is a much easier alternative compared to, again, other tools built in equivalent languages


Indeed. Thank you for stating this so clearly.

The "ease of use" and "familiarity" distinction reminds me of talks by people such as Rich Hickey who distinguish "simple" and "easy":

https://www.infoq.com/presentations/Simple-Made-Easy/

> Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path.


The problem with .deb and .rpm is your dependencies: some things aren't packaged, and you end up having to build separate packages for each major Debian and Red Hat release to link against the correct dependency version.

I'd love that to all be "one-command automated", but I haven't seen such a thing, unlike cargo, which I do find I can be productive with after a one page tutorial.


100% agree. I find it very funny, but in a sarcastic and totally wrong way, when a project's README has an Install section that reads:

  Run "cargo install myProject"
I know Rust, so Cargo is not alien to me. But come on, you know that your install instructions are a bit shitty.

Please, choose a target distro, then test your instructions in a clean Docker container. THEN you can sit down knowing you wrote proper guidance for users.

EDIT because this comment is being misunderstood: I meant that you should make sure your instructions work as-is from a clean installation of your intended distro(s), regardless of how you prefer to do so; using a Docker container is just my preferred method, but you can also do a clean VM or whatever else, as long as you don't assume anything beyond a default installed system.


Hold on, do you not see the insane contradiction of not wanting to rely on having cargo installed but requiring something is deployable and tested in a docker container? What?!


No, you misunderstood. I meant that if you're going to document a block of command-line instructions, you should first make sure those commands work as-is in a clean system.

A very easy way to do this (for me anyways) is using a Docker container. I use this method to test all of my documented commands. But there are other ways, of course, like using a clean VM. Regardless, just test the commands without assuming the customized state of your own workstation!

The point is that if I follow the hypothetical instructions of running "cargo install something", the result will probably be "cargo: command not found". When you test this in a clean system and find this error message, this places on you the burden of depending on Cargo, so the least you should do is to make sure "cargo" will work for the user who is reading your docs. At a minimum, you should link to somewhere that explains how to install Cargo.

tldr: you should make sure your instructions work as-is from a clean installation of your intended distro(s), regardless of how you prefer to do so.
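
Concretely, something like this (any clean base image matching your target distro will do):

    docker run --rm -it ubuntu:20.04 bash
    # ...then paste the README commands verbatim and see what breaks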


You're telling me that people who want to replace a command-line utility are the same people who can't install a toolchain (or just download a binary and put it in their path)?


As a single-sample statistic I can share with you, I like to think I'm a well seasoned C/C++ developer, and have experience with all sorts of relatively low-level technical stuff and a good grasp on the internals of how things (like e.g. the kernel) work.

Yet I got confused the first time ever some README told me to run "npm install blah". WTF is NPM? I didn't care, really, I just wanted to use blah. Conversely, later I worked with Node devs who would not know where to even start if I asked them to install a C++ toolchain.

The point is don't assume too much about the background of the people reading your instructions. They don't have in their heads the same stuff you take for granted.


There was a time that I didn't know what npm is (I'm not even remotely a web developer). So I used my computer to do some basic research.


Don't focus on the specifics, consider the NPM thing an analogy for any other piece of software.

I've found instances where some documentation instructions pointed to run Maven, and the commands worked in their machine because Maven is highly dependent on customizations and local cache. But it failed in other machines that didn't have this parameter configured, or that package version cached locally. And trust me, Maven can be _very_ obtuse and hard to troubleshoot, too much implicit magic happening.

Testing in a clean container or VM would have raised those issues before the readme was written and published. Hence my point stands, testing commands in a clean system is akin to testing a web page in a private tab, to prevent any previous local state polluting the test.


Testing in a clean container tests deploying in a clean container. For me, I run a computer :) Maven sounds like a nightmare tbh so I can understand that that specific piece of software has warts. That said, a good piece of package management software will be relatively agnostic to where its run and have a dependable set of behaviours. I much prefer that to a world where every bit of software is run on any conceivable combination of OS and hardware. What an absolute drain on brain space and creative effort!


As someone who authors another shell (coincidentally similar to nushell), I can tell you that you'd be surprised at some of the bug reports you get.

Frankly I prefer the "lucky 10,000" approach suggested by XKCD: https://xkcd.com/1053/


If it's deployable and tested in a docker container, it's much easier to generate user images; it takes the onus away from the user, and the developer can just put it on the AUR or publish a deb.


You happen to have cmake or autotools installed, others happen to have cargo installed.

Once cargo/cmake/autotools/make/npm/mvn/setup.py/whatever runs, the process of taking the output and packaging it for your preferred distro is the same.

There's more work involved if you want a distro to actually pick it up and include it in their repos around not using static linking, but if you're asking for a .deb/.rpm on github actions, that's not needed.


Why don't you download the native binaries then?

Rust isn't an interpreted language, you only need the rust toolchain if you want to build from source.


Binary releases seem uncommon from my perspective. Every time I go to install a piece of software written in Rust from homebrew, it invariably starts installing some massive Rust toolchain as a dependency, at which point I give up and move on. Maybe it's a case of the packagers taking a lazy route or something, or maybe there is a reason for depending on cargo. I have no idea.


Isn't homebrew specifically supposed to build from source? e.g. the example on the homepage of a recipe is running ./configure && make on wget.

The fact is that you installed the Xcode CLI tools for that wget example to work when you first installed Homebrew (because Homebrew itself requires them), and you only get Cargo the first time you install a Rust dependency; that seems to be what you're really complaining about.


Homebrew tries to install binaries by default. (They call them bottles) Building from source happens if a suitable 'bottle' isn't available, or when `--build-from-source` is specified with the install command.

I know cargo is installed only once, but I don't want cargo. I don't build Rust software myself, so I don't want to have it hanging out on my system taking up space purely just so I can have one or two useful programs that were written in Rust and depend on it. I'll just go with some other alternative.


Do you have some specific examples?

E.g. ripgrep is packaged on most operating systems I have used, along with exa, and a few other Rust utils I use.

I certainly do not use Cargo to install them.


Perhaps the packagers on your platform went that extra mile to build binary packages. Taking a quick look, the Homebrew formula[0] for ripgrep on macOS just lists a dependency on Cargo (rust) and then seems to invoke the cargo command for installation. I'm not well versed in Ruby though, so my interpretation could be wrong.

I don't want to come off as entitled, either. I know the Homebrew folks are doing a ton of brilliant, ongoing work to make it work as well as it does, so I can't really blame them for potentially taking a shortcut here.

[0] https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/...


If it installs a bottle, then does it still require installing Rust? If so, then maybe that's a shortcoming of Homebrew.

Either way, it kinda seems like you're complaining about Homebrew here. Not Rust.

If having Cargo/Rust on your system is really a Hard No (...why?), then I guess find a package manager that only delivers whatever is necessary, or, if available, use the project's binary releases: https://github.com/BurntSushi/ripgrep/releases/tag/13.0.0

And actually, in the case of ripgrep, it provides a Homebrew tap that specifically uses the GitHub release binary: https://github.com/BurntSushi/ripgrep/blob/master/pkg/brew/r...


Ruby requires an interpreter at runtime. JavaScript too. Rust produces standalone binaries. So no, "things don't break" and you only compile things once.

// I couldn't care less about deb or rpm files so don't try to force that down my throat.


There's no win/win scenario when comparing shared libraries to static binaries. On the one hand, static binaries are more user friendly. But they move the responsibility for keeping your OS secure away from the OS/distro maintainers.

For example, if a vulnerability is found in a crate, you then have to hope that every maintainer who manages a Rust project that imports said crate diligently pushes out newer binaries quickly. You then have multiple applications that need to be updated rather than one library.

This may well be a future problem we'll end up reading more about as Rust, Go and others become more embedded in our base Linux / macOS / etc install.


It is Gentoo all over again.


I agree that it's not ideal, but unfortunately bad decisions by Linux distributions and package maintainers have trained me as a user to avoid the package managers if I want up to date software with predictable and documented defaults.


Good package manager, broad ecosystem of packages, no header files, helpful compiler messages. It offers a good alternative to C++ for native applications.


Not to mention excellent support for algebraic datatypes and pattern matching.


So much this. After having encountered ADTs for the first time in Haskell and later on in Rust and other languages, any language without sum types (like Rust's enums) feels wholly incomplete. And the destructuring pattern matching is the cherry on top.


ADT and pattern matching really does it. Older languages certainly have their pull and can do a lot but with many modern coders having an interest or even education in higher level mathematics, easily being able to do that will put you way ahead of the competition. Even the hyper-pure Haskell now runs its own ecosystem and has actual systems written in it.


It's a fast compiled language with a great type system and a sane build system. There aren't a lot of alternatives with those properties.

I'd say the closest are Go, which doesn't have remotely as good a type system, and TypeScript, which isn't compiled to native code and isn't quite as fast or nice.


Most of the answers here omit this, and it's a very important point for hobby open source projects - Rust is just very fun to write.


Backwards compatibility can be a heavy burden for a programming language. C++ could be a much simpler, ergonomic language by eliminating features and idioms that are no longer convenient to use.

Achieving mastery in C++ requires a lot of work. C++ projects require a coding standard. The decision space you have when working in C++ is much larger than when working with Rust due to the proliferation of language idioms.

Rust, on the other hand, as a newer language, can benefit from the experience of working with languages such as C++ and provide a better experience right from the beginning. In fact, Rust was created at Mozilla as a response to the shortcomings they perceived in C++.


I just hope that Rust with 40 years of backwards compatibility feels better than C++ today.


40 years is a long time, so the experience will almost certainly degrade a fair bit. The notion of editions in rust makes allowances for breaking changes while still keeping backwards compatibility, I'm very curious to see whether the problems that solves outweigh the complexity in the compiler.


I am not convinced that editions are much better than language version switches.

They only work in the ideal case where all dependencies are available as source code, the same compiler is used for the whole compilation process, and only for light syntactic changes.

In fact for IntoIterator, a small fix to the editions was made, https://blog.rust-lang.org/2021/05/11/edition-2021.html

With 40 years of history, several Rust compilers in production, and corporations shipping binary crates, expect the editions to be as effective as ISO/ECMA language editions.


Two out of three of those things have nothing to do with editions. The final one is basically saying “you can’t make huge changes,” and I’m not sure how that’s a criticism of the possibility of “in 40 years there will be too many changes.”


> Therefore, the answer to the question of, say, “what is a valid C++14 program?” changes over time, up until the publication of C++17, and so forth. In practice, the situation is a bit more complicated when compiler vendors offer conformance modes for specific language revisions (e.g. -std=c++11, -std=c++17). Vendors may consider defect resolutions to apply to any historic revision that contains the defect (whereas ISO considers only the most recent publication as the Standard).

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p213...

Feel free to think I am only spreading FUD; coming from the C++ side, I am clueless about how editions will fare in the future.


The design of editions is such to specifically reduce implementation complexity in the compiler, for this reason. The majority of the compiler is edition-agnostic.

It’s not the only reason, but it’s a big one.


Could you expand upon this?

My specific concern there is that, while the compiler frontend for any one given edition becomes individually simpler, the whole compiler becomes a bit more complicated, and things like IntoIterator, which pjmlp mentioned elsewhere, imply changes across several editions.

This is not a major problem when editions means {2015, 2018, 2021}, but in a world where we also have 2024, 2027, ... editions, this becomes increasingly tricky.


The Rust compiler is roughly “parse -> AST -> HIR -> MIR -> LLVM IR -> binary.” I forget exactly where editions are erased (and I’m on my phone so it’s hard to check), but for sure it’s gone by the time MIR exists, which is where things like the borrow checker operates. Edition based changes only affect the very front end of the compiler, basically. This is a necessary requirement of how editions work. For example, it is part of the interoperability story; because the main representation is edition agnostic, interop between crates in different editions is not an issue.

… I don’t know how to say this truly politely, but let’s just say I’ve had a few conversations with pjmlp about editions, and I would take the things he says on this topic with a large grain of salt.


When Rust editions reach about 5 in the wild, feel free to prove me wrong by mixing binary crates compiled with two Rust compilers, mixing three editions into the same executable.

You can also impolitely tell me how it will be any different from /std=language for any practical purposes.


Again, the ABI issue has nothing to do with editions. You can already build a binary today with three editions (though I forget if 2021 has any actual changes implemented yet) in the same executable. Part of the reason I said what I said is that every time we have this conversation you say you want to see how it plays out in practice, and we have shown you how multi-edition projects work, and how you can try it today, and you keep not listening. It’s FUD at this point.

It is different because those are frozen, editions are not (though in practice editions other than 2015 will rarely change). They make no guarantees about interop, and my understanding is that it might work, but isn’t guaranteed. If you have sources to the contrary I’d love to see them!


Which is basically the same thing as /std=language, when applied to different translation units.


There is the similarity that the editions don't really matter for ABI, but otherwise editions are substantially different from the std switch.

C/C++ std switches freeze the entire language and disable newer features. Editions don't. Rust 2015 edition isn't an old version of Rust. It's the latest version of Rust, except it allows `async` as an identifier.

Editions don't really have an equivalent in C, but they're closer to being like trigraphs than the std compilation switch.


That only works because the Editions get updated after being released.

The same can happen to ISO C and C++, that is what technical revision documents are for.

> Therefore, the answer to the question of, say, “what is a valid C++14 program?” changes over time, up until the publication of C++17, and so forth. In practice, the situation is a bit more complicated when compiler vendors offer conformance modes for specific language revisions (e.g. -std=c++11, -std=c++17). Vendors may consider defect resolutions to apply to any historic revision that contains the defect (whereas ISO considers only the most recent publication as the Standard).

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p213...

In both cases, it is expected that compilers update their understanding what a specific language revision means.


"What have the Romans ever done for us?"

https://youtu.be/djZkTnJnLR0?t=56


Seems like low level programmers finally getting an enjoyable, modern, free language and are getting a bit loose with it!


Also high level programmers getting something more performant and that doesn't require a runtime but still has a lot of ergonomics they're used to.


It’s the new C/C++ but also you can write Rust like you’re writing Golang.


None of that sounds appealing on its own.

Ready for the downvote wave. :D


This is a thread about a shell. That’s very much the sort of project that is typically written in C, and also the sort of project that really benefits from being written in something safer.

It's fine if writing systems languages doesn't appeal to you, but they fulfil an important niche. V8, HotSpot, CPython all have to be written in something.


You assumed a lot about me from one sentence. This is why people dislike the rust evangelism.


I write a lot of personal small utilities in Rust these days: The tooling is more modern than C++, it's more consistent cross platform, and it doesn't suffer the VM startup time of Python, which I would have used previously.


Might just be the Baader-Meinhof effect:

https://en.wikipedia.org/wiki/Frequency_illusion


That is like a big deal. Also speed.


>That is like a big deal

Why do you think it is "a big deal"?


"Security bugs" are after all just a specific class of bugs and are still a huge nuisance in non-critical applications as a crash one could leverage for circumventing some security boundaries means most often just a unexplained crash for the common user, which just wants to use a tool in good faith.

So, reducing security bugs means fewer crashes on weird input, fewer leaks (better resource usage), and just a more stable tool, as some classes of bugs (which may or may not be security relevant) are eliminated completely. That's a big deal, because with Rust the (runtime) cost other languages pay to avoid this is just not there (or is much smaller).


Why do you not? No memory leaks, no security issues stemming from such. No random crashes from memory misuse, no issues debugging the issues that exist. It's like a higher level language but lower. You get to do things your way, except when your way is fundamentally unsafe. The code is then easier to debug as it actually describes what's happening and issues are mostly algorithmic, while the application gets a massive boost in reliability and security.

Security is like the issue now, along with reliability. That's what people need and want. Rust offers that.


Rust is perfectly happy to leak memory. Leaks are not considered unsafe. There was actually a bit of a reckoning around the 1.0 release where people widely assumed the language wouldn’t leak, and leaks were proven to be safe behaviour.


Oh? Perhaps I need to reconsider my past trust in Rust. In retrospect it makes sense; interop without leaking memory would be damn near impossible.

Still, I expect it to be very hard to do accidentally. In C all you need to do is have your mind go blank for a moment, which isn't that uncommon, especially if you're on crunch or something.


So there's two things to talk about here.

First, the language can't save you from getting the program semantics wrong (e.g. if you never delete an entry from a hashmap even after you're done with it, you're leaking that memory). No language can save you from leaks as a general concept.

Second, Rust makes a very specific promise — freedom from data races. Leaking resources does not actually break that promise, because it doesn't allow you to access that resource any more.


Unintentional leaks are rare in Rust, the main issue is around reference counting loops not being cleaned up automatically. Future versions of Rust might even offer some support for unleakable 'relevant types' (the dual to the existing 'affine types' where leaks are not considered incorrect) for better support of very advanced, type-directed resource/state management.


Rust isn't the only language that offers that. In fact most languages offer that. Even Pascal is safer than C. Or if we're really concerned about security, then we should be advocating that our shells be written in Ada. But clearly there's more to it than that....

It's also worth remembering that the biggest causes of RCE in shells haven't been buffer overflows. It's been fundamental design problems from the outset (e.g. Bash executing functions in environment variables (Shellshock), and CI/CD pipelines or other programs forking out to the shell without sanitising user input or having sufficient RBAC in place).

Don't get me wrong, I have nothing against Rust. I think it's fantastic that we're seeing diversity in the ecosystem. But we need to be careful not to judge a project simply because of its use of Rust (or choice of another language). The reality is soooo much more complicated and nuanced than is often made out on HN.


Maybe we could be writing things in Ada. I don't know; it's a language that's been on my radar for several years, but I haven't actually dug into it yet.

That said, we need something to replace C — and Rust seems to be picking up momentum. Rust seems good enough a replacement to me, and that’s enough for me to cheer it on.

I do agree that “written in rust” isn’t as big a guarantee of quality as people here assume though.


We've already had languages that could replace C. Ironically Pascal was replaced by C on home systems. But Rust isn't a C replacement, it's a C++ replacement.

HN talks about Rust like there was a void before it but there wasn't. I think it's great that the community have finally gotten behind a safer language and I think Rust is a worthy candidate for the community to get behind. But I'm sick of reading about Rust as if it's a silver bullet. HN badly needs to get past this mindset that Rust is the only safe language (it is not), and that programs are automatically safer for being programmed in Rust (in some cases that might be true but in most cases it is not).

I remember learning to program back in the days when people would mock developers for using structured control flow blocks because "GOTOs are good enough". While the Rust movement is, thankfully, the inverse of that, in that people are questioning whether older, arcane paradigms need to be disrupted, there is still a weird following surrounding Rust that has the same emotive worship without actually looking at the problems being discussed. People seriously suggest everything should be written in Rust, or harp on about the language as if it's one of a kind. There are plenty of domains that better suit other, safe, languages, and there are plenty of developers who personally prefer using other, also safe, languages. Rust isn't the right tool for everything.

And the fact that I've seen people advocate Rust ports of programs written in Haskell, OCaml and Go because "it's safer now it's rewritten in Rust" is a great demonstration for how absurd the cargo culting has become.

My point isn't that Rust is a bad language or that people shouldn't be using it, just that people need to calm down a little when discussing Rust. Take this case for instance: most shells out there these days are written in safe languages. The last dozen or so shells I've seen posted on HN were programmed in Python, LISP, Go, C# and Scala. It's really only the old boys like Bash and Zsh that are C applications. So Nushell isn't all that unique in that regard. But because it's the only one out of a dozen that was written in Rust, it's the only shell that has a page of comments commending the authors for their choice of language. That's a little absurd, don't you think?


> it's the only shell what has a page of comments commending the authors for their choice of language.

There isn't "a page of comments commending the authors" here, so I have no clue what you are talking about? The main Rust discussion is in a subthread which someone specifically started by asking "why Rust", at which point you can't really fault the Rust fans for explaining why Rust.


Fair point.


Rust is far from "one of a kind". There's a similar-ish project for C at https://ziglang.org/, and to be honest, there have been 20 such projects in the past, 6000 if you count all the total failures; I just like this one.


That's my point :)


hype language du jour

In the past it has been Lisp, Python, Haskell, Go, etc.


Downvoting me doesn't make it less true.


Thanks for letting us know :)


It's the new "I use Arch btw". Genuinely kind of tired of it already.


Meh, not really. Something being written in Rust is a feature to me.


This feels extremely like PowerShell, from all the examples, and even cites "draws inspiration from projects like PowerShell", so "a new type of shell" feels very disingenuous unless it has additional novel properties that aren't evident just from these.

There's also no support for variables yet if the page is to be believed, which means there's 0 chance of me using this for more than 5 minutes right now.

(I'm not trying to say this project isn't worthwhile, just that many other people in the thread currently seem to be entirely uncritical.)


> This feels extremely like PowerShell, from all the examples, and even cites "draws inspiration from projects like PowerShell", so "a new type of shell" feels very disingenuous unless it has additional novel properties that aren't evident just from these.

There's HN-style nitpicking, and then there's this.


I'm sorry, but I'm not actually sure what you're trying to express here.

My initial interpretation is that this is a level beyond typical nitpicking, but I'm not sure how that follows?


"A new type of shell" is a bit of marketing text to express the fact that this shell doesn't follow in the footsteps of most other shells (bash, zsh, fish, etc) in terms of treating things as blobs of text (which allows for a number of innovations). The fact that the author even states that it's partially inspired by Powershell demonstrates that he's aware of the similarities (and has in fact cribbed many ideas from it).

Calling his marketing line "disingenuous" comes across as very petty nitpicking that adds nothing of value to the discussion.


Ah, I see. Thank you for explaining.

The part that bothered me was not that they described something that shared some concepts with PowerShell as "a new kind", but that I did not see any illustration of entirely novel features in the examples that followed. As a sibling comment remarked, they feel I'm quite mistaken, and there are very positive distinctions illustrated in the examples, but I took a look at them again, and it's still not clear to me what they mean.

I think I don't see why it's unreasonable for something which advertises cross-platform support to be compared to common shells on all those platforms, not just originally-*ix ones? That is, I think it's totally reasonable for you to think the tagline is justified solely by the differences in those shells, even if I disagree, but I don't see why you think it's unreasonable for me to hold this view?


[flagged]


I appreciate you attempting to explain.


FWIW, I don't think "it's marketing" is a valid excuse for them saying things that aren't true. It's not a "new kind of shell" because it's mimicking things that already exist. The author even knows about those things and cites them.

I do not accept the idea that it's okay to make false claims to create a narrative, marketing or otherwise.

And I'm not on the spectrum.


https://github.com/nushell/nushell#progress

"Nu supports variables and environment variables"


That page was edited an hour ago, so that was presumably a stale example. I'm glad to be wrong!


I came here to say something similar. The question I need answered in the README is - why would I invest time in learning this instead of Powershell?

My muscle memory for bash is pretty strong, but when I have to do Windowsy stuff, I end up picking up a little bit more Powershell each time and find it pretty neat.

So I'd be interested to know what it does differently, and couldn't find that answer.


Have you used PowerShell much? The examples in the Readme make it obvious that it's very different (in a good way).


I use PowerShell extensively, and there are no significant differences in the README examples. They're basically rebranded PowerShell statements.

I see minor syntax differences, for example in the comparison operators. Nothing that would be worth losing the .NET BCL or PowerShell's cmdlets for.


I use PowerShell daily, though I do not, for example, develop nontrivial modules for it.

Could you provide an example of what I'm overlooking?


I think to most people who've only used PowerShell as "that Windows shell with the blue background and built-in unixy aliases", getting into PowerShell itself still qualifies as a "new type of shell", so I'm willing to extend that credit to other shells trying the same paradigm.


I'd agree that getting into PS from cmd counts as a new paradigm, or that getting into Nushell without Powershell experience is as well (based on the examples), but at the point where PS has been shipped with Windows by default for many years now, and Nushell runs natively on Windows, I think it's reasonable to disagree with a claim of "a new kind of shell".

I also think it's reasonable for people to say it's justified - there's certainly, as you and others have remarked, a decent argument to be made for that.

But for me, "different from many common shells, but all the same functionality has been bundled in one shell before"* strongly violates my expectations for "new kind of shell".

* - I am not trying to definitively state they have no new functionality, I absolutely have not dug in deep enough, just that I did not see any examples of it, which I would have expected prominently.


I took it to mean that it was a new type of shell from the perspective of unix/osx users, who were presumably the target audience, and who might either not know what PowerShell is about or discount it as just a "Windows thing" and therefore not relevant to their life. So "new" in the sense of "new to you".

You could probably point to just about any technology that claims to be new, pull it apart, and find that it is mostly derivative of existing technology and ideas.


Sure, all work is theft, for some value of theft, but as I've said in a couple of other replies:

* "all these ideas have been done before, in various places" is one thing, "all these ideas have been done before in one program in the same role on all the same platforms" makes me feel like "new [to users who haven't used this other thing shipped with the OS for years on one of the platforms]" is insufficient to use without more explicit qualifiers

* I may be oblivious and missing some cool example, but the flip side to "all ideas implemented before" is "no new tricks", and when someone describes something to me as "a new type of X", I really expect at least one novel thing or composition of things to be present.

After all, I wouldn't describe TIFF as a "new type of image format" just because many people who haven't touched photo/graphics editing probably haven't encountered it, or IE4 as "a new type of web browser" (now) just because a significant fraction of internet users are not old enough to have used it when it had market share. (When it was first released, Active Desktop, for example, while a security nightmare, was indeed a new thing for almost all users.)


The philosophy section is the most important aspect, in my opinion, because I want to know why I should switch. In other words, show me a better future, and then I'm more likely to try it out.

> Philosophy

> Nu draws inspiration from projects like PowerShell, functional programming languages, and modern CLI tools. Rather than thinking of files and services as raw streams of text, Nu looks at each input as something with structure. For example, when you list the contents of a directory, what you get back is a table of rows, where each row represents an item in that directory. These values can be piped through a series of steps, in a series of commands called a 'pipeline'.
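
For example, a pipeline in Nu looks roughly like this (exact column names may differ between versions):

    ls | where size > 10kb | sort-by modified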


First 60 second impression - this looks like powershell with not-insane syntax. Looks quite revolutionary, it breaks the traditional paradigm for UNIX shells and appears to artfully refine the good concepts in Powershell (which seem to be let down severely - in my opinion - by UX issues.)


What's wrong with the powershell syntax? The long names? Those are optional. I find the idea of short names (for interactive) AND the option of long names very good. Many of these aliases are similar to unix commands (e.g. "cat" instead of Get-Content or "ls" instead of Get-Childitem).


On top of the nice approach to structured output with pipelines, the "cross-OS first" approach is amazing! Very cool to use a modern shell that can fit both Windows and Unix by design (and not through WSL); hell, I'm going to try to introduce it to my small company as a standard tool. I hope it's not at too immature a stage, but it looks very shiny and promising considering the enthusiastic contributor community. I'm in.


Funny enough, the new PowerShell fits this description too now that it’s packaged for Linux.


What if we added json output mode to all shell commands? Or at least make some wrappers that parse their unstructured text output into json.

We would be able to use standard pipelines and jq to filter/query outputs, without any custom shells.

Just imagine:

    ifconfig --json | jq '.[] | [.interface, .inet] | @tsv'


You're not the first person to think of that. See Juniper's libxo: https://github.com/Juniper/libxo

It's integrated into a lot of FreeBSD's command line tooling, and is very useful, when it's available.
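
If memory serves, the libxo-enabled utilities take a --libxo flag to select the encoder, so something like the following (flag spelling from memory; it may differ between FreeBSD releases):

    # on FreeBSD, for a libxo-enabled utility such as wc
    wc --libxo=json /etc/motd | jq .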


So a lot of FreeBSD command line tools support JSON output? That's cool! I wish Linux had it too...


I am with you in principle - but I still don't get the jq path/filter syntax I'm afraid. I think I'd vastly prefer map/reduce/zip commandlets...

Ed: I'd say from a quick glance that I think I rather prefer nushell to powershell


There is some movement in this direction: e.g. it's possible to do `ip -j addr` today.
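
Which gets you most of the way to the pipeline imagined above, e.g. roughly (field names are what recent iproute2 emits; they may vary by version):

    ip -j addr | jq -r '.[] | [.ifname, (.addr_info[0].local // "-")] | @tsv'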


A very promising project that I hope will take off and succeed.

I recently switched to zsh with the addition of oh-my-zsh, and I am happy with features like autocompletion and a sort of command validation, but this could take it to the next level.

I am just afraid to change to it and be disappointed about some incompatibility issues or bugs / crashes.

I will observe and wait until it's quite popular to hop on I think.


Try fish shell. It has autocompletion by default and is easy to configure with the web interface "fish_config"; no need to rely on some community repo like oh-my-zsh. There can be bugs if you set it as your default shell, but I just append "fish" at the end of my .bashrc :)
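
For reference, a slightly more careful variant of that .bashrc trick looks roughly like this (a sketch; the guards keep scripts and `bash -c` runs on bash):

    # at the end of ~/.bashrc: hand interactive sessions over to fish
    if command -v fish >/dev/null 2>&1 && [[ $- == *i* && -z "$BASH_EXECUTION_STRING" ]]; then
        exec fish
    fi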


It's so interesting - it seems that fish is getting more and more popular lately.

I tried it ~7 years ago and thought it was severely lacking, so I just stuck with zsh and have never really had a reason to look back.

Maybe I'll check out fish again at some point - what features does it have that drew you to it?


Almost everything that you might get from oh-my-zsh is built in and it's _much_ faster. I've been using Fish for a few years now and I love it.


I don't really use `oh-my-zsh`, I just built my own prompt. It's pretty snappy, so I don't feel a huge need to switch, but maybe on a slow day I'll install fish to give it a whirl.


Not OP, but the features of fish that stood out for me so much that I made it my default shell on remote servers are:

1) the autocomplete suggestion as you type a command [1]

2) scrolling thru commands after partially writing one only shows entries that match the written text

3) knowing if a command will work before pressing enter -- saves from a lot of gotchas.

All of these read like nice-to-have but not essential features, but when you're using something every day, it's worth it :)

[1]: https://fishshell.com/docs/current/tutorial.html#autosuggest...


I moved from zsh + oh-my-zsh to fish a couple of years ago.

The main reasons were:

1. It had a lot of qol features I liked from zsh _by default_ without requiring a significant config

2. Prebuilt zsh configs like oh-my-zsh have pretty commonly been quite slow in my experience, which fish fixes.


I use Oh-my-zsh too but I hate that it constantly asks me to update itself and it isn't exactly the quickest update either.

Either update automatically in the background or don't tell me about updates at all. Don't constantly nag me when I start new shells!


I used to use oh-my-zsh and never really understood it. Turns out what I mostly wanted was `zsh-autosuggestions` and `zsh-syntax-highlighting` plugins, plus some sane history settings [0]. I've been oh-my-zsh-free for three months, my computers are now less cluttered and more straightforward.

[0]: https://github.com/tasuki/dotrc/commit/e3769134e758d02a947ef...
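
For anyone curious, the whole setup fits in a handful of lines (a sketch; the paths assume the two plugins were git-cloned into ~/.zsh, so adjust for your package manager):

    # ~/.zshrc: sane history plus the two plugins, no framework
    HISTFILE=~/.zsh_history
    HISTSIZE=100000
    SAVEHIST=100000
    setopt SHARE_HISTORY HIST_IGNORE_ALL_DUPS

    source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh
    # syntax highlighting should generally be sourced last
    source ~/.zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh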


I also uninstalled it pretty quickly after trying it. I do use powerline, fzf, and easy-motion in zsh, though.


I had the same issue but your comment made me look up if there is something to do about it. Both things are possible https://github.com/ohmyzsh/ohmyzsh#getting-updates


I started using zsh with oh-my-zsh as it made me realize how powerful, ergonomic and productive my shell could be. I was annoyed by the slowness, and I thought of moving to barebones zsh with a few plugins. My colleague suggested that I use prezto. I have been using prezto ever since with my preferred plugins enabled, and I have been happy with my setup. It has all the features I need without the slowness of oh-my-zsh. I ended up getting all the features of fish shell using plugins. I didn't want to use fish because its syntax is not compatible with bash, which I have to write at work.

Give prezto a try. You might like it.


I wonder if this could be accomplished in a more general way with a fourth standard file descriptor for structured data. stdjson basically.


I see this as requiring considerable work. For JSON, I don't see the payoff (relative to more expressive type systems). JSON was designed to work within different constraints. All in all, this suggests moving beyond JSON.

Sometimes the new ways are best...

I often find that redesigns / rewrites can be transformative. The same benefits less often accrue to incremental changes.

Practically, in my experience, when a group (or organization) is not under a severe time and budget crunch, redesigns seem more palatable. The results are more likely to be simpler, even if they are not as familiar.


What's wrong with plain old stdout? Lots of people do this with JSON (jq, etc.) and CSV or TSV (csvkit, etc.), XML/HTML, and more.

That's how I generate much of the https://www.oilshell.org/ site. It's just a tree of files and various shell commands, some of which I wrote myself.

I do think we need something better than CSV and TSV for tabular data though. They both have serious flaws (not being able to represent values with tabs, etc.!)


I had this thought half a year ago. Anything that avoids having to parse text makes me happy. Though I guess learning to parse command output helps you learn how to parse other text down the line.


A pipe only conveys stdout. You could redirect 4>&1 1>/dev/null, but you'd have to rewrite all tools to generate both stdout and stdjson anyway.


> You could redirect 4>&1 1>/dev/null

For anyone confused by this, 1 is the fd for stdout, and 4 is being redirected to where 1 currently goes, which is stdout. Separately, 1 (what's actually being written to stdout by the application) is then redirected to /dev/null.

The order of operations matters. If 1 was redirected to /dev/null first, then 4 would also end up in /dev/null. As it stands now, that doesn’t happen.
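
A quick way to convince yourself of the ordering (in a bash-like shell):

    # fd 4 is duplicated from fd 1 before fd 1 is pointed at /dev/null,
    # so only the fd-4 write still reaches the terminal
    $ { echo via-stdout; echo via-fd-4 >&4; } 4>&1 1>/dev/null
    via-fd-4

    # reverse the order and both writes end up in /dev/null
    $ { echo via-stdout; echo via-fd-4 >&4; } 1>/dev/null 4>&1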


So you are supposed to read it from right to left? "First take 1 and throw it away. Next put 4 in 1." Is that how it works?


No, it's from left to right. It's basically just syntax for a series of dup2() calls.

"4>&1 1>/dev/null" means:

streams[4] = streams[1]; streams[1] = open("/dev/null");


Not quite. Left to right: copy the descriptor 1 onto descriptor 4, change descriptor 1 to result of open("/dev/null").

The >& is for copying the descriptor not for aliasing ("hardlinking") it.

The descriptor is the "fd" argument, e.g.:

write(int fd, const void *buf, size_t count)


No, it’s like assigning variables. The second part overwrites the value of 1, but the first part is still using the old value.


Good one. I did look it up before I posted, too. I’m never 100% sure.


I never really got acquainted with PowerShell. It always felt very clunky to me. But it exists, it's mature, it's cross platform and widely supported.

Maybe I should give PowerShell Core a try.


There's a fair amount of comments here (and in other similar posts) which read like:

'nushell is basically a powershell something', 'bash/zsh is better/worse than pwsh/whatever shell'.

This is quite unfair, because pwsh allows control over all of the .NET/COM+ APIs available on a system (well, .NET Core only on non-Microsoft platforms, but still),

while nushell stands on its own shoulders, or if you like, allows control of the other programs it executes, but would not easily call into any language-specific API nor parse its output as structured data.

So when comparing bash, zsh, nushell, and pwsh, one should take into consideration that these have different foundations and even different goals, like how Perl stands on top of CPAN, all the JVM stuff on top of its class library, etc.

Although it is hard to say why no one has created (to my knowledge) a reasonable JVM-based shell, nor a Python-based one, since Java targets such a large number of OSes/platforms and provides such a broad library. Perhaps because people assume that a shell program should follow the UNIX concept of one-program-does-one-thing-only. On the contrary, it would be a killer feature to have nushell be able to natively execute Java/Python/whatever API calls and use the structured output (like pwsh indeed does).

In essence, it is very limiting to compare bash to pwsh only in the sense of 'how much can be done with only a few characters', because this always leads to very opinionated and biased arguments and eventually discussions taken out of context.


@joseph8th mentioned a shell [1] earlier, that at least at first glance, seems to be a python + bash combination, similar to what powershell does with .net.

[1] https://xon.sh/


I wonder if the pipelining could be added to a regular shell to get its advantages - the cost of moving away from Bash is too high for me.

Basically a "cols" command on steroids: giving it some kind of context-aware column names (I'm not sure how) and having it support expressions would make this work in any shell.


That's basically what Oil's approach is. Oil is POSIX and bash compatible, but it also has Python-like data structures (recursive dicts, lists). And there's also room to add select/filter/sort. We need help :)

https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil


I played with it recently. The interactive part is limited at the moment, but I really like the idea of having operations like sorting and filtering separate from the tools which produce the data. Having these things consistent and available everywhere is a huge cognitive win. I can finally sort and filter `ls` output without having to read the man page. :) "Querying" data feels a lot like SQL in that respect.


Were you not able to before? ls | grep foo | sort is hardly something people normally read the man page for.


This isn't the way I'd sort ls output by date or whatever field. I'd use `ls -lt` (or `ls -l --sort=time`), which I guess is what the GP meant by having every tool do its own sorting by field.

In nushell one can just do `ls | sort-by size`


Yes, that is what I mean.

I also like that nushell's table oriented approach displays the column names, and you just use those names for the `sort` command or `where` command etc.
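
e.g. something like this (column names as nushell's built-in `ls` reports them, which may shift between versions):

    ls | where size > 1mb | sort-by modified | select name size modified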


Ah, fair enough. I misunderstood the parent.


Now sort by filesize


File size or file length ?


What is better about this tool than PowerShell?

The README doesn't mention it at all, just that the tool is inspired by PowerShell.


Nothing. Let's be realistic.

Even if it goes mainstream, it will be a decade behind PowerShell for the foreseeable future.

While I appreciate the enthusiasm for developing a shell, there's nothing usable here for years to come IMO when I can just use pwsh.


I use this in combination with fish shell and it's been working really well. I can just switch over to `nu` when the task merits it; no need to replace my current shell. I can also just run a one-off line like this.

   nu -c "ls | where size > 10b"


I see the value proposition of nushell. I'm wondering if I should also try fish.

May I ask if you switched to fish from zsh? What motivated your change?

Background: I invested some time not too long ago to read zsh docs in detail and customize my own configuration (e.g. instead of using oh-my-zsh). Since then, I've been quite happy with zsh. That said, I'm also open to switching to fish and/or nushell based on recommendations.


I didn’t switch from zsh, but from bash, to fish, primarily because bash was occasionally a bit clunky to use, and there were a lot of things that could be optimized in a modern environment. I settled on fish because it suited what I was looking for: an ergonomic shell that worked well out-of-the-box. Paired with strong community support, clean scripting syntax, and a wide ecosystem, it makes for a really enjoyable shell experience. I haven’t tried zsh, so I can’t compare the two.

I’m happy with fish, and I don’t see much benefit to switching to nu. nu is exceptionally good at one thing: working with data, but lacks features in other areas (auto-completion, scripting, etc.). With time, I can see these features being implemented, but I think they’ll be re-inventing the wheel in a lot of areas that other shells are already good at.


> what motivated your change?

I didn't switch from zsh but from bash; I did it because I wanted a few more features in the shell without additional configuration.


How does it compare to fish shell (especially fish’s great autocompletion), e.g. why are you not switching completely?


nu is still in its infancy, and currently lacks critical features of other shells, which is why I haven’t switched completely.

As for the future, perhaps a bit brazen, but I'm confident that other shells will introduce the core feature of nu in the near future to stay competitive. I can see fish having a "||" operator and rewrites of a few GNU utilities to achieve what nu does natively.


This seems like a missed opportunity to call it "nutshell"!


Looks pretty cool.

Anecdotally, I found switching shell to be more of a challenge than expected - I’ve been using (oh-my-)zsh for years and decided to try fish, but all the little differences were too annoying for me to get used to - I guess you build up a lot of “muscle memory”! That said, I probably just use the same commands most of the time. If you were doing more advanced stuff maybe it makes more sense to invest the time


I think you can achieve most of the fish features in zsh with the help of plugins. The most compelling feature of fish for me was history substring search: you type something, press the up arrow, and you get a command from your history containing that substring. I used the 'history-substring-search' plugin to achieve that. There are also 'syntax-highlighting' and 'completion' plugins which bring fish shell features to zsh. I use prezto for its speed after getting annoyed by oh-my-zsh's slowness.
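
If anyone wants to wire that up without a framework, it's roughly this (paths assume a git clone into ~/.zsh; the widget names come from the zsh-history-substring-search plugin, and arrow-key escape codes can differ by terminal):

    # ~/.zshrc
    source ~/.zsh/zsh-history-substring-search/zsh-history-substring-search.zsh
    bindkey '^[[A' history-substring-search-up
    bindkey '^[[B' history-substring-search-down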


Good tip, thanks!


Shameless plug: zoxide recently added support for Nushell. It's a smarter cd command for your shell (similar to z or autojump).

https://github.com/ajeetdsouza/zoxide
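
Typical usage once `zoxide init <your shell>` is hooked into your config (the interactive variant assumes fzf is installed):

    z nushell     # jump to the highest-ranked directory matching "nushell"
    zi shell      # pick interactively when several directories match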


It's such a huge pain to set up bash on Windows (it worked fine and was very easy on one laptop, but I cannot get bash or WSL working on another, newer laptop) that I wrote a very basic wrapper for shell scripts that translates Linux userland commands (like cp, mv, rm) to their Windows equivalents. This way all you need is Python and your "bash" scripts can just work. Sorta. There are a lot of edge cases. But at least it works well enough that I can stop trying to figure out how to install bash on Windows or what's wrong with my laptop.


I installed wsl on like 10 machines, including "newer laptops", and never had problems with the setup. Instead of solving your actual problem you created a whole new problem domain, that seems... not like the way to go


The "domain" I care about is for basic build scripts to work on Windows and Linux. Whichever way gets me back to regular work is the best way! I do not have the experience to debug when BoW or WSL do not work unfortunately.


While it can be a pain to deal with Windows quirks, writing a wrapper script to deal with low-level file I/O stuff sounds like a good way to open a whole can of worms and irrevocably mess up your whole system.


I had problems with WSL on my newer laptop, and it all came down to disabling Secure Boot.


Oh, thanks for sharing. Maybe I'll try that.


What was the problem with WSL? It's been trivial to install on every machine I've tried: work, personal, old, new, upgraded from Windows 7, local account, Microsoft account, etc.


Yes, I had that experience until this one laptop; ironically it's a Surface laptop. Most likely I screwed some setting up, but now no tutorial for Bash on Windows or WSL works anymore and I have no idea how to work around it.


Shameless plug: if you're interested in nushell, you might also be interested in Elvish: https://elv.sh

(And the various other new shells, documented by the author of another new shell, Oil: https://github.com/oilshell/oil/wiki/Alternative-Shells)


This looks pretty awesome. Wonder if it could be used with a command rather than as a new shell.

Tangent: How is this at the top of the front page with a dozen upvotes and no comments? Maybe these upvotes all occurred at once?


It's not uncommon for people to upvote without commenting if they don't have anything to actually add or the post is self-explanatory


Software releases commonly get this treatment; I remember a comment asking why there was no discussion of the actual contents of the change on a decently upvoted post about a minor release of Wine, for example.


A lot of people just upvote anything that is a GitHub repo because it's generally more useful than actual news


You can with `-c`:

    $ nu -c "sys | get host.name"
    > Ubuntu


On one hand I do like the idea that it's a shell, since it means it's nicely integrated with itself (can't think of a better phrase), but on the other hand that's also what's making me a bit reluctant to seriously consider it, because I'm too used to fish.

It would be ideal to me if it were a tool that could be piped into.

However, that's not to say I won't at least give it a try these days, since I'm really curious about it.


I will sometimes upvote something without comment if I found it interesting, don't really have anything useful to say myself, but would be interesting in seeing what other people have to say.


This tracker was a show HN last month, looks pretty normal https://upvotetracker.com/post/hn/27525031


Here is a recent video on Nushell 0.32 features: https://www.youtube.com/watch?v=AqYxhJKblvY


Seems pretty well designed. Though the decision to let you refer to the properties of data in a pipe directly by their names might cause some confusion in the end (the 'size' in 'ls | where size > 1'). OK, it's succinct, but it's also a bit weird that the loop variable (which is basically what this is in a pipe) is treated differently than all others. Not that e.g. PS's $_ is so great, but at least it's obvious: it's a variable just like all the others.
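
Side by side, the contrast I mean (syntax for both from memory, so the exact spelling may be off):

    # nushell: the bare column name stands in for "this row's size"
    ls | where size > 1kb

    # PowerShell: the current pipeline object is the explicit $_ variable
    Get-ChildItem | Where-Object { $_.Length -gt 1kb }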


This is a feature that Powershell gained and it is very useful.


My issue with these types of things is always the same. Awesome, it does all this stuff, I love it, I use it everywhere.

Dev: "I have decided to stop working on this project, thanks for all the support. If anyone wants to take ownership of it, I am willing to transfer it." (Crickets)

Then, after using it for a couple of years, I have to go back to standard bash or whatever, and now people think I have no idea what I'm doing since I forgot so much from lack of daily use.


I don't get these sorts of efforts.

'Let's make something complex appear intuitive by adding more complexity'.

I really respect the work that has gone into this project, but no thanks.


But that can work. Just look at Google: massively complex product and problem, but sure appears pretty simple to the layman - I just type some words into the box and hit enter!


Shameless plug: https://github.com/ngs-lang/ngs - Next Generation Shell

Compared to other modern alternatives, NGS is "programming language first".

Repository of scripts written in NGS, aptly named "NGS Scripts Dumpster" - https://github.com/ngs-lang/nsd


When I think of a new shell I want to understand quoting and the special meaning of characters. Bash has found a decent compromise; what does nu plan to do? The documentation is pretty scant, but perhaps that’s because the language is still developing: https://www.nushell.sh/book/types_of_data.html#strings



That program is for testing terminal emulators, not shells.


Ah, true. Got confused there.


Sort of looks like a modern take on VMS!


It looks very interesting, but the first thing I noticed as a macOS user is that the shell has a built-in command "open", which I expect will conflict with the standard macOS command "open", and I haven't found any docs that describe how to deal with this.


This is true of shell built-ins, e.g. "test", too.

The solution is to use the full path if you want to use the program, not the shell built-in.

In this case, "/usr/bin/open" not "open" will get you the macOS utility. If you get sick of doing that, create a shell alias.
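
e.g. in a POSIX-ish shell config (nushell has its own alias syntax, so adjust accordingly):

    alias macopen='/usr/bin/open'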


It's not yet feature complete, but it has come a long way from its early beginnings. I have been using it on Windows, and it being native and close to UNIX shells really helps my workflow.

The built-in commands and structured philosophy are a great approach.


Could someone let me know if https://github.com/akavel/up works ok with it? (on Linux or Mac or WSL) I'd be grateful for feedback!


Similar to "bourne shell" and "bourne again shell" aka bash

I can see the evolution now. "nushell". Next iteration: "nuer shell" (newer) or "renushell" (renew)


Nushell is a new shell. In a nutshell, nushell handles pipes like PowerShell: as structured data rather than string/stream-based pipes.


Nushell missed the one feature that every Unix shell has: if it doesn't recognize the first word as a built-in command, look it up and run it as an executable. It's crucial, and nushell is missing out on an ecosystem of random shell commands to complement it as a result. This, of course, is what makes the Unix shell so great: the many little programs able to work together via pipelines.

This makes nushell an interpreter command line, instead of a system shell.

(Yes, you can "escape" by typing ^ before a word. That's not nearly the same thing.)


It's the 2nd paragraph:

"If a command is unknown, the command will shell-out and execute it (using cmd on Windows or bash on Linux and macOS), correctly passing through stdin, stdout, and stderr, so things like your daily git workflows and even vim will work just fine."


Argh, how did I miss this. The ^ is just for overriding built-ins with external executables. Apologies!

Clearly I hadn't tried nushell -- this feature I thought missing was a big no for me. I have now, and this is definitely worth a try. Thanks all!


This is how Nushell works. If the command isn't an internal command, it will run it from your path.

Curious what you tried that didn't work.


Heck yah - Very NICE! The demo is what hooked me.


On the face of it this looks like powershell


I use fish for general use, but still write all my scripts with bash. It has worked very well for me.


It's basically powershell


Seems nu can do a lot of the otherwise horrid sed/awk/cut/etc. parsing for bash. That is great!
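
For instance, the classic "which processes are using the most CPU" question (the awk column numbers match `ps aux` output; the nushell column names are from its built-in `ps` and may differ by version):

    # bash: scrape fields out of whitespace-separated text
    ps aux | awk '$3 > 10 {print $11, $3}'

    # nushell: query the columns by name
    ps | where cpu > 10 | select name cpu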


"A nu type of shell"


That brings a WMI vibe.


What's WMI? Google search says https://docs.microsoft.com/en-us/windows/win32/wmisdk/wmi-st... but I doubt you were talking about that.



This seems very similar to powershell, with far better syntax.


Will fork it as nutshell


It would be nice if the syntax could be very forgiving, so that typing something such as 'show me the files over 12 mb' would work. Initially, when I read the README, that's what I thought it was.


I stopped reading at "rather than thinking of services and input as a stream of text", horrified. Why would we do that? KISS!


By the same reasoning: Would you not use databases because they have a schema? or JSON...? Heck, even utf-8 could be too complicated.


You have a point. That sounded like pure conservatism, and to some extent it is. But still, text is simple and that's what makes the shell powerful; trying to move to a higher-level abstraction has a chance to create fragmentation and break the solid foundations on which Unix is built.


Because parsing text is shit. Good luck AWK'ing everything. Datastreams are simple.


It's often better than trying to figure out the weird formats all the tools would use otherwise.


So start outputting something like JSON, and render it in the console as an ASCII table.



