Suicide Linux (2011) (qntm.org)
71 points by josephmx on April 18, 2015 | hide | past | favorite | 55 comments


"When there is error, we blame humans. But it is not humans who are stupid, it's the machines. We should design machines to minimize error. Humans are bad at precision so why demand it?" -- Don Norman, Design of Everyday Things


On the same general line: given a shell which could analyze the frequency of commands and make intelligent guesses for non-executable/invalid commands ("do what I intended, not what I typed"), I would toss my bash so hard against /dev/null, and salt whatever remained of its config files.


Machines have to become much more intelligent before I would trust them with "do what I intended" if there is any chance that I lose my data if the machine misinterprets my intention. Leaky do-what-I-intended abstractions are the worst.


Autocomplete and autocorrect are "guess what I intended" functions. The problem with shells is that the UX is crap: user actions are often executed immediately (review time between typing and executing is a fraction of a second), and most importantly there is no undo. This makes guessing a bit dangerous. However, the fault is with shells, not with guessing. If you never instantly execute anything without review then the issue nearly goes away (more like regular programming than a REPL).


If it was less "take the whole of what I typed, rephrase it into something, and silently execute that", and more "fuzzy-autocomplete each token as I type it, and interactively lint the syntax to balance parens et al., so that by the end every token is almost certainly valid—and I can still then proofread what's there before executing it" then it's not nearly as scary.


Ideally, it would bias strongly towards blocking destructive activities, and hinting but not executing inferred commands.

Perhaps with something like "really" (where usage is similar to sudo) that would both override and train the fuzzy guessing. Training times might be a problem, even with a good algorithm, but perhaps if you got enough people to donate bash histories.


Did you see this tool that came up on HN? https://news.ycombinator.com/item?id=9396116

I think the general concept is sometimes called "DWIM" (Do What I Mean).


I dishonoured my hacker mentors.

I must commit rm -rf /


These days you'll actually need to commit rm -rf --no-preserve-root /


Please respect the value of tradition.


Only when using GNU rm and its descendants, as far as I know. FreeBSD and OS X's rm neither require that option nor support it. Unless you're asserting that psgbg has never used a system with a BSD userland (which includes some running Linux kernels) and none of psgbg's theoretical hacker mentors have either, it's not a safe bet they'll need that flag (or be saved by its lack). If they'd stated they were manually committing rmicide on a Linux box it'd be a different situation, but assuming tools have safety nets is one of those things that happens to sysadmins. Sadly it's done to us at least as much as we do it...

All that aside every unix person who wipes a box for a reinstall should try rm -rf / and see what works and what doesn't. It teaches you a lot about shell builtins, filesystem dependencies, binaries still running and thus hanging around in /proc & its equivalents (which can save you), and weird stuff like /dev/tcp/ and /dev/udp/ (if your shell has them and they're enabled). A Linux or BSD machine can often be recovered live from deleting everything but /home, say.


> Only when using GNU rm and its descendants, as far as I know. FreeBSD and OS X's rm neither require that option nor support it.

Indeed. Although I would argue that since we are talking about Suicide Linux, you would in all likelihood need the flag.


Immortal Linux would probably get more installs - a package that prevents common malicious commands from working without first warning you of the consequences.


People would just be more creative; a curl url | sh, where the script copies rm from /bin to the current directory, executes it there, etc.

Might help in a few cases, but would lull people into a false sense of security.


Just make it work at the kernel syscall level. Maybe make it so you can have policies where syscalls can only affect some directories.

Maybe we should call it "selinux"


Of course, but that has a reputation as being hard to set up into a configuration that doesn't get in the way.


Ah, yes, and not allowing deleting files or changing permissions and so on would give you security without ever getting in the way. Silly me.


I read that as Immoral Linux and immediately started thinking about what immoral actions the OS could automatically perform.


I'm personally waiting eagerly for filesystems that let me roll back my mistakes.


Hammer is very much like having your entire filesystem in git.

https://www.dragonflybsd.org/hammer/

"HAMMER retains a fine-grained history. The state of the filesystem can be accessed live on 30-60 second boundaries without having to make explicit snapshots, up to a configurable fine-grained retention time.

A convenient undo command is provided for single-file history, diffs, and extractions. Snapshots may be used to access entire directory trees."

edit:

Just now, 10 minutes after posting this, I regretted deleting a file (cabal.sandbox.config) and was able to restore it:

    > rm cabal.sandbox.config // oops
    > undo -i cabal.sandbox.config
    cabal.sandbox.config: ITERATE ENTIRE HISTORY
        0x000000010bf2e940 17-Apr-2015 06:50:48
        0x0000000112282990 18-Apr-2015 20:36:53
    > undo -t 0x0000000112282990 -o cabal.sandbox.config cabal.sandbox.config


That's seriously cool. What's the stability/speed like?


Looks quite good: https://www.dragonflybsd.org/performance/

I'm not really sure how to benchmark it myself, those are from 2012 and I'm sure things have improved since then.


On Plan 9, the Fossil file system, almost invariably paired with the Venti archival storage server, does precisely that: http://man.cat-v.org/plan_9/4/fossil


You could replace rm with an alias to small script, e.g. mv to ~/.trash rather than deleting the target. The trash can be auto-cleared based on space constraints or after a certain time.
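A minimal sketch of that idea, as a shell function rather than a real package; TRASH_DIR, the default ~/.trash path, and the trash_rm name are all illustrative:

```shell
#!/bin/sh
# Sketch of a "soft delete": an rm replacement that moves its targets into
# a trash directory instead of unlinking them. All names here are
# illustrative, not an existing tool.
trash_rm() {
    trash="${TRASH_DIR:-$HOME/.trash}"
    mkdir -p "$trash"
    for target in "$@"; do
        # A timestamp suffix reduces collisions when the same name is
        # trashed more than once.
        mv -- "$target" "$trash/$(basename "$target").$(date +%s)"
    done
}

# Auto-clearing could then be a cron job, e.g. prune entries older than 30 days:
#   find "$HOME/.trash" -mindepth 1 -mtime +30 -delete
```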


libtrash does this by hooking libc so it works for anything that deletes files (unless it's statically compiled...) - it's been around for ages. http://pages.stern.nyu.edu/~marriaga/software/libtrash/


Sadly, the local system is only a small part of what you can screw up. It becomes the undo-email problem pretty quickly.


ZFS


Nice try.


ZFS is production ready. Use FreeBSD!


Lately I've been advising people to use find -delete rather than rm.

Ever tried to clear the current directory with rm? It's actually amazingly tricky and dangerous to wipe out dot files without moving up directories. Whereas find has the benefit of saying exactly what it'll do before it does it (and being a bit faster IMO).

The thing I'm really missing in find is a native -chown and -chmod flag now.
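For the dot-file problem, one approach along these lines (demonstrated on a throwaway directory so nothing real is at risk):

```shell
#!/bin/sh
# Emptying a directory, dot files included, without cd-ing above it:
# preview with -print, then swap in -delete. -mindepth 1 keeps find from
# trying to remove the directory itself.
demo="$(mktemp -d)"
touch "$demo/visible" "$demo/.hidden"
mkdir -p "$demo/sub"
touch "$demo/sub/file"

find "$demo" -mindepth 1 -print     # dry run: lists exactly what would go

find "$demo" -mindepth 1 -delete    # same selection, now destructive
```

The dry run is the point: the same expression that prints is the one that deletes, so what you reviewed is what runs.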


rm . * (extra space) has gotten me a few times. I bet I'm not the only one either.


I once wanted to delete backup files left by emacs in the current directory. Instead of

  rm -rf *~
I typed

  rm -rf * ~
At least I learnt to use -rf more carefully. Way more carefully.


Conversely, I wonder why `rm` hasn't been patched to abort when the `/` path is passed as an argument (with maybe a --no-really-i-mean-it flag for the really rare cases where that's what you want)

EDIT: not sure why this is being down-voted. It's just an idea, albeit possibly a bad one.


GNU rm has exactly that built-in:

    $ rm -rf /
    rm: it is dangerous to operate recursively on ‘/’
    rm: use --no-preserve-root to override this failsafe


Go on, fire up a cheap VPS somewhere like DigitalOcean and run "rm -rf / --no-preserve-root". Maybe I am too easily amused but as a long-time Linux user on the desktop this is one of those things I always wanted to try. After it runs, you can't even shut down or run anything else, you just kill the SSH session.


The problem with this kind of safety net is that it may break scripts that rely on it to work.


How many daily-use scripts do you suppose intentionally run `rm -rf /`? Cost-benefit for common use seems clearly in favour of adding a failsafe.


It's a good thing you can fix the script by adding that flag into them.


I think this is kind of antithetical to the Unix philosophy.

If you add special exceptions for things like "rm -rf /" then you start to wonder, why not add exceptions for other dangerous things, like "find / -delete" and "rm -rf /usr".

In general, most of the basic Unix tools operate off of relatively simple first principles and don't contain exceptions for things like this.


When young I once had the opinion that such exceptions would be a good thing.

That is, until I installed some version of Red Hat that aliased rm to "rm -i", and extracted a few wrong tar files. Then I understood why the shell is that way, and why everybody just clicks "ok" on Windows dialog boxes without reading the alerts. Funny thing is that I lost some important files because I expected the prompt, but pressed "y" 19 times, instead of 18...

Nowadays I just do backups.


First off, as someone already pointed out, GNU rm does fail on `rm -rf /`. Secondly, that one is way more important than trying to protect `find / -delete` and `rm -rf /usr`, because it's just way easier to mess up and have a stray slash in your command line.

Case in point: a unix novice coworker of mine came up to me once and said, "I think I might have done something wrong. I'm trying to remove an empty directory and it's hanging." Turned out he had created a directory in whatever linux workspace gui he was running at the time and accidentally added a space at the end. He didn't notice until he used terminal `ls` to look at it and then noticed it printed like this (with `ls -F`):

  somedir /
So he decided to rm it and start over. Can you guess what he typed?


For the record, GNU's Not Unix.


tarsnap forces you to type "No Tomorrow" when you try to delete all your backups. Something like that would be great.


It's interesting how many things sharpen when there is risk on the line. We've already discussed how narrow streets without signs are safer for pedestrians, but it's true for a lot of things. The trick is to make it really painful but not fatal.


Do you have a link to the discussion on narrow streets? Sounds ridiculously interesting. At first thought, I can't help but think this is something that doesn't scale. The thought being that it only works because a unique situation puts drivers on alert. Not sure how I could test that, though.


A common issue is stuff like:

    cd $TempDIR
    wget http://example.com   # Do stuff in temp dir
    rm -R .                   # Clean temp dir

If $TempDIR is not set, `cd` runs with no argument and drops you in $HOME, so it will delete your home directory.

If $TempDIR does not exist or the cd somehow fails, it will delete your current folder.


It's just a huge design mistake to treat "missing" like an empty string in env vars. In many cases referencing a missing var should be an error, not a silent conversion to an empty string.

It's funny how many of us would never use a weakly/stringly typed language to handle our important data, but then we happily use bash to deploy or manage the systems on which our apps run.
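The shell can actually be told to behave that way: `set -u` makes any reference to an unset variable a hard error, and the `${var:?}` expansion does it per-variable. A sketch applying that to the TempDIR cleanup above (the function name is illustrative):

```shell
#!/bin/sh
# Defensive rewrite of the temp-dir cleanup. ${TempDIR:?} aborts with an
# error when the variable is unset or empty, and the || return guard stops
# the function if cd fails, so rm can never run against $HOME or the
# current directory by accident. clean_tempdir is an illustrative name.
clean_tempdir() {
    cd "${TempDIR:?TempDIR is not set}" || return 1
    # ... do stuff in the temp dir ...
    rm -rf -- "${TempDIR:?}"/*    # clean only the temp dir's contents
}
```

With TempDIR unset, the call fails loudly before anything is deleted, instead of silently expanding to an empty string.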


And if "Do stuff in temp dir" happens to be one of those monsters which change the working dir, you could be anywhere. Having the directory shown in your prompt solves most of that, though.


We have used this as a ritual/test for new members in a student organization. Given a task, try to achieve it, if you fail, empty your drink.


I wonder what the use cases are for this package.

edit: maybe install it as a prank on your colleagues' VMs


I heard somewhere that it should be rm -rf /*


The -r enters the directory recursively, so it's the same result.


Not these days; rm refuses to recurse on root. This can be overridden with --no-preserve-root or by appending a *.


rm -rf /reposts ?


It looks like Sam made a (very) small update today; I'm assuming this prompted OP to post it.



