"When there is error, we blame humans. But it is not humans who are stupid, it's the machines. We should design machines to minimize error. Humans are bad at precision so why demand it?" -- Don Norman, Design of Everyday Things
Along the same general line: given a shell which could analyze command frequency and make intelligent guesses for non-executable / invalid commands ("do what I intended, not what I've typed"), I would toss my bash so hard against /dev/null, and salt whatever remained of its config files.
Machines have to become much more intelligent before I would trust them with "do what I intended" if there is any chance that I lose my data if the machine misinterprets my intention. Leaky do-what-I-intended abstractions are the worst.
Autocomplete and autocorrect are "guess what I intended" functions. The problem with shells is that the UX is crap: user actions are often executed immediately (review time between typing and executing is a fraction of a second), and most importantly there is no undo. This makes guessing a bit dangerous. However, the fault is with shells, not with guessing. If you never instantly execute anything without review, the issue nearly goes away (it becomes more like regular programming than a REPL).
If it was less "take the whole of what I typed, rephrase it into something, and silently execute that", and more "fuzzy-autocomplete each token as I type it, and interactively lint the syntax to balance parens et al., so that by the end every token is almost certainly valid—and I can still then proofread what's there before executing it" then it's not nearly as scary.
Ideally, it would bias strongly towards blocking destructive activities, and hinting but not executing inferred commands.
Perhaps with something like "really" (where usage is similar to sudo) that would both override and train the fuzzy guessing. Training time might be a problem, even with a good algorithm, but perhaps it could be bootstrapped if you got enough people to donate their bash histories.
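For what it's worth, bash already has a hook you could hang the "hint but never execute" part on. This is only a sketch of the idea in this thread, not an existing package; the frequency heuristic and the handler body are made up for illustration, and it needs bash >= 4:

    # Minimal sketch: suggest likely commands on a typo, never run the guess.
    command_not_found_handle() {
        local typo=$1
        echo "bash: $typo: command not found" >&2
        # Rank what you've actually typed before by frequency, show near matches.
        history | awk '{print $2}' | sort | uniq -c | sort -rn |
            awk -v t="$typo" 'substr($2, 1, 2) == substr(t, 1, 2) {
                print "  did you mean: " $2; if (++n == 3) exit
            }' >&2
        return 127
    }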
Only when using GNU rm and its descendants, as far as I know. FreeBSD and OS X's rm neither require that option nor support it. Unless you're asserting that psgbg has never used a system with a BSD userland (which includes some running Linux kernels) and none of psgbg's theoretical hacker mentors have either, it's not a safe bet they'll need that flag (or be saved by its lack). If they'd stated they were manually committing rmicide on a Linux box it'd be a different situation, but assuming tools have safety nets is one of those things that happens to sysadmins. Sadly it's done to us at least as much as we do it...
All that aside, every unix person who wipes a box for a reinstall should try rm -rf / and see what works and what doesn't. It teaches you a lot about shell builtins, filesystem dependencies, binaries still running and thus hanging around in /proc & its equivalents (which can save you), and weird stuff like /dev/tcp/ and /dev/udp/ (if your shell has them and they're enabled). A Linux or BSD machine can often be recovered live even after deleting everything but /home, say.
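As a taste of why those /dev/tcp redirections matter, here's a sketch of a pure-builtin fetch that still works when the external binaries are gone. The host and path are placeholders, and your bash must have network redirections compiled in and enabled:

    # exec, printf and read are all bash builtins, so nothing under /bin is needed.
    exec 3<>/dev/tcp/example.com/80
    printf 'GET /rescue.sh HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
    # Dump the response (headers included) so you can see what came back.
    while IFS= read -r line <&3; do printf '%s\n' "$line"; done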
Immortal Linux would probably get more installs - a package that prevents common malicious commands from working without first warning you of the consequences.
"HAMMER retains a fine-grained history. The state of the filesystem can be accessed live on 30-60 second boundaries without having to make explicit snapshots, up to a configurable fine-grained retention time.
A convenient undo command is provided for single-file history, diffs, and extractions. Snapshots may be used to access entire directory trees."
edit:
Just now, 10 minutes after posting this, I regretted deleting a file (cabal.sandbox.config) and was able to restore it.
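(Roughly, from memory rather than a transcript; the exact flags for listing versions or extracting a particular one are in DragonFly's undo(1).)

    # HAMMER's per-file history is reached through the undo tool; in the
    # simplest form you just point it at the file you regret touching.
    undo cabal.sandbox.config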
On Plan 9, the Fossil file system, almost invariably paired with the Venti archival storage server, does precisely that: http://man.cat-v.org/plan_9/4/fossil
You could replace rm with an alias to a small script, e.g. mv the target to ~/.trash rather than deleting it. The trash can be auto-cleared based on space constraints or after a certain time.
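Something like this, say (the ~/.trash location and the 30-day window are arbitrary, and a real version would handle name collisions and rm's flags):

    # Move targets into a trash directory instead of unlinking them.
    mkdir -p ~/.trash
    trash() { mv -- "$@" ~/.trash/; }
    alias rm='trash'

    # Periodically purge old trash (a naive files-only sweep, e.g. from cron).
    find ~/.trash -type f -mtime +30 -delete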
Lately I've been advising people to use find -delete rather than rm.
Ever tried to clear the current directory with rm? It's actually amazingly tricky and dangerous to wipe out dot files without moving up directories. Whereas find has the benefit of saying exactly what it'll do before it does it (and being a bit faster IMO).
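Something along these lines, assuming a find with -mindepth and -delete (GNU and the modern BSDs both have them):

    # Preview exactly what will go, dot files included, without leaving the directory:
    find . -mindepth 1 -print
    # Then pull the trigger with the same expression:
    find . -mindepth 1 -delete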
The thing I'm really missing in find is a native -chown and -chmod flag now.
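In the meantime the usual workaround is -exec with the '+' terminator, which batches arguments much like a native flag would (the modes, owner and group below are placeholders):

    find . -type d -exec chmod 755 {} +
    find . -type f -exec chmod 644 {} +
    find . -exec chown alice:staff {} +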
Conversely, I wonder why `rm` hasn't been patched to abort when the `/` path is passed as an argument (with maybe a --no-really-i-mean-it argument for the really rare cases where that's what you want).
EDIT: not sure why this is being down-voted. It's just an idea, albeit possibly a bad one.
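Sketching the idea as a shell wrapper rather than a patch (the flag name is the one from the comment above; as noted elsewhere in the thread, GNU rm's own --preserve-root/--no-preserve-root does something similar these days):

    # Refuse to pass a bare "/" to rm unless the caller really means it.
    rm() {
        local arg really=no keep=()
        for arg in "$@"; do
            if [ "$arg" = "--no-really-i-mean-it" ]; then
                really=yes
            else
                keep+=("$arg")
            fi
        done
        if [ "$really" = no ]; then
            for arg in "${keep[@]}"; do
                if [ "$arg" = "/" ]; then
                    echo "rm: refusing to remove '/' (add --no-really-i-mean-it)" >&2
                    return 1
                fi
            done
        fi
        command rm "${keep[@]}"
    }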
Go on, fire up a cheap VPS somewhere like DigitalOcean and run "rm -rf / --no-preserve-root". Maybe I am too easily amused but as a long-time Linux user on the desktop this is one of those things I always wanted to try. After it runs, you can't even shut down or run anything else, you just kill the SSH session.
I think this is kind of antithetical to the Unix philosophy.
If you add special exceptions for things like "rm -rf /" then you start to wonder, why not add exceptions for other dangerous things, like "find / -delete" and "rm -rf /usr".
In general, most of the basic Unix tools operate off of relatively simple first principles and don't contain exceptions for things like this.
When I was young I once held the opinion that such exceptions would be a good thing.
That is, until I installed some version of Red Hat that aliased rm to "rm -i", and extracted a few wrong tar files. Then I understood why the shell is that way, and why everybody just clicks "ok" on Windows dialog boxes without reading the alerts. Funny thing is that I lost some important files because I expected the prompt, but pressed "y" 19 times, instead of 18...
First off, as someone already pointed out, GNU rm does fail on `rm -rf /`. Secondly, protecting against that one is way more important than protecting against `find / -delete` or `rm -rf /usr`, because it's just way easier to mess up and end up with a stray slash in your command line.
Case in point: a unix novice coworker of mine came up to me once and said, "I think I might have done something wrong. I'm trying to remove an empty directory and it's hanging." Turned out he had created a directory in whatever Linux workspace GUI he was running at the time and accidentally added a space at the end of the name. He didn't notice until he looked at it with `ls -F` in a terminal and saw it printed like this:
somedir /
So he decided to rm it and start over. Can you guess what he typed?
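Presumably something with the name left unquoted; the sketch below shows why that's so dangerous and what the safe form looks like:

    # Unquoted, the trailing space splits the name into two arguments,
    # the second of which is "/":
    #     rm -rf somedir /
    # Quoted (or tab-completed) the name is harmless; the directory was empty,
    # so rmdir does the job with no recursive flags in sight:
    rmdir 'somedir '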
It's interesting how many things sharpen when there is risk on the line. We've already discussed how narrow streets without signs are safer for pedestrians, but it's true for a lot of things. The trick is to make it really painful but not fatal.
Do you have a link to the discussion on narrow streets? Sounds ridiculously interesting. At first thought, I can't help but think this is something that doesn't scale, the thought being that it only works because a unique situation puts drivers on alert. Not sure how I could test that, though.
It's just a huge design mistake to treat "missing" like an empty string in env vars. In many cases referencing a missing var should be an error, not a silent conversion to an empty string.
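Bash can at least be told to behave that way; a quick sketch (the variable name is made up for illustration):

    # The failure mode: an unset variable silently expands to the empty string,
    # so something like
    #     rm -rf "$STAGING_DIR"/*
    # quietly turns into rm -rf /* when STAGING_DIR is missing.

    # Two stock guards that turn "missing" into an error instead:
    set -u                             # referencing any unset variable aborts
    : "${STAGING_DIR:?must be set}"    # or fail fast, per variable, with a message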
It's funny how many of us would never use a weakly/stringly typed language to handle our important data, but then we happily use bash to deploy or manage the systems on which our apps run.
And if "# Do stuff in temp dir" happens to be one of those monsters which change the working directory, you could be anywhere. Having the directory shown in your prompt solves most of that, though.
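The other half of the fix is not letting the script carry on from the wrong place at all; a sketch (the $tmpdir name is a placeholder):

    # Stop immediately if the cd fails, instead of carrying on in $HOME...
    cd "$tmpdir" || exit 1

    # ...or confine the chdir to a subshell so the caller's cwd never moves.
    (
        cd "$tmpdir" || exit 1
        # do stuff in temp dir
    )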