Show HN: Ultimate Plumber – a tool for writing Linux pipes with live preview (github.com/akavel)
484 points by akavel on Oct 24, 2018 | 217 comments



Careful with rm:

    r
    rm
    rm -
    rm -rf
    rm -rf ~
    rm -rf ~/
    rm -rf ~/tmp
    # where did my files go?


Right, it's theoretically dangerous, or I'd say practically: "please think when using it"... though I personally couldn't find any serious problematic example other than typing something clearly dangerous in the first place. And I personally found it immensely useful in the other, non-dangerous cases. So I just took the chance and went with the idea, and then wanted to share it with you, so you might also find it useful, maybe! :) Also... umm... you know, a tool isn't really truly Unixy if you can't hurt yourself with it, right? ;P

That said, I'm starting to think — should I include a note about the potential dangers in the readme?... they seemed glaringly obvious to me (at least in the case of rm-like commands), but can there be readers/users to whom they'd not be obvious?...

edit: Ok, I tried to add a warning in the Readme, that's something I can do easily, and if it helps someone, worth it...


I'm not that well-versed in the state of permission / capability systems on Linux.

But if there is a way for you to drop the capability to modify the filesystem before running the first pipeline, you should definitely do it by default (and provide a switch to override it if somebody knows what they are doing.)


It seems like a whitelist of safe commands would be a good idea? The user can override with -f if they want to live dangerously.

Even rm has a -f flag to override some safety measures, though the defaults aren't very safe.


Even better, run it as a non-privileged user by default, with no write permission.


Sounds interesting; but is there any way I could change to a different user if not running as root?... also, even if yes, how do I dynamically create a non-privileged user, or find a pre-existing one?


I would try creating a non-privileged user "up" during installation and switch to that user under the hood before running the command via `su`.

(Though it would complicate installation which is effectively non-existent since there is a single binary; and tightly couple the application with the environment. It's a trade-off anyway.)

(Edit: Somebody mentioned the user "nobody" which seems to be a better alternative.)

In any case, I think that the whitelist approach would work better, as it would be nice if the tool knew in advance which commands work properly and which ones don't. That way it could inform its users about the allowed & disallowed commands.

Commands with side effects ("impure" in FP terminology) don't make much sense to use within this anyway. The main value is fast iteration, in order to verify the expected output of the pipeline. For me it doesn't make much sense to use it with commands whose main purpose is to modify the filesystem instead of generating / transforming data and writing it to stdout.

So the criteria for the commands that would make it into the whitelist may be "not having side effects", "writing to stdout", and optionally "reading from stdin".
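
A rough sketch of how such a check might look (the whitelist contents and function name here are made up):

    # hypothetical whitelist check: refuse to auto-run a pipeline unless
    # every stage starts with a known side-effect-free command
    WHITELIST="grep sed awk cut sort uniq head tail tr wc paste"
    pipeline_allowed() {
        echo "$1" | tr '|' '\n' | while read -r stage _rest; do
            case " $WHITELIST " in
                *" $stage "*) ;;    # stage is whitelisted, keep going
                *) exit 1 ;;        # unknown command: require confirmation
            esac
        done
    }
    pipeline_allowed "grep net | cut -d: -f2" && echo auto-run || echo "press Enter to confirm"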

Fast iteration with side effects is unsafe as well, as has already been pointed out.

This can be a fine data analysis tool actually, unfortunately only for command line geeks. The experience is actually not so far from analyzing data with VIM interactively by piping it to an external UNIX command and getting the result dataset back into the editor. VIM hackers will know :)


One thing to watch out for is that a process running as "nobody" has permission to manipulate (e.g. kill) other processes running as that user. This sometimes includes system processes. Allowing any user to run arbitrary commands as nobody is technically a privilege escalation, and therefore should be avoided.

A single-purpose user with its own group would have this problem only to a lesser degree (you'd be able to mess with other users' "up" invocations, but not any system processes).


I didn't know this (I'm not much of a sysadmin), thank you for the information!


Could you make a `[caller]_up` user for each different user?


I actually like the user idea better. The problem with a whitelist is that even useful commands can have subtle or little-known dangerous modes, like "find . -exec rm {} \;"


Ah. Good point. Whitelisting commands would have already been a bit painful, and now your comment shows that the parameters need to be whitelisted/blacklisted as well, which would be crazy.

In a world in which shell commands respected the UNIX philosophy, "find" wouldn't have a silly option like "exec", and other commands wouldn't mix read / write / pure data transform operations in a single command.

But it is what it is. So yeah, protection probably needs to be implemented in the user level, for maximum safety.

Maybe an alternative and/or complementary solution would be to profile each inputted command to detect if they are attempting write operations (maybe with "strace" or something like that), and cancel the evaluation of the command in the next iterations and/or show a warning.


> The experience is actually not so far from analyzing data with VIM interactively by piping it to an external UNIX command and getting the result dataset back into the editor. VIM hackers will know :)

Do you mean something like this?

   :r! lshw | grep network -A2 | grep : | cut -d: -f2- | paste - -
I'm not versed well enough in vim scripting but I suppose there's a way to loop on that on each <CR> or even keypress (like fzf/ctrlp).


> Do you mean something like this?

    :r! lshw | grep network -A2 | grep : | cut -d: -f2- | paste - -
Not exactly. More like:

- Open vim with the output of `lshw` as content:

    lshw | vim -
- Examine the raw data

- Send the whole content of the buffer as standard input to the given command and rewrite the buffer with the data read from the standard output of it:

    :%! grep network -A2
- Examine the returned dataset. Iterate:

    :%! grep :
- Examine & iterate:

    :%! cut -d: -f2-
- Examine & iterate:

    :%! paste - -

This way, you can examine the output of each step of the pipeline individually, so you can construct your command incrementally. And it is up to you to decide at which point a command will be run instead of it being run automatically following every keypress.

Since the dataset returned by the last command will be visible in the current buffer, you will be able to examine & play with it full screen. You will be able to clean or transform some parts manually (this is frequently needed in data science).

You can always return to the previous/next dataset by pressing u/Ctrl+R (for undo and redo), and examine your command history by pressing ":" and then Ctrl+P/Ctrl+N. (Or you can open the command-line window by pressing "q:" to view/copy the last several commands at once.)

And since you are in a full blown text editor, you can take advantage of other helpful features such as folding, saving the current dataset to a temporary file, etc.

If you are comfortable with more than a few UNIX filters, VIM can be a very convenient and fun tool to play with data.


Nice tool! Your main use case for destroying files is “xargs rm”, which would be used when receiving a piped argument list.

Could you have this command run as user “nobody” by default, and have a flag to run as a real user? Or as others have mentioned, use capabilities on the spawned processes to prevent anything other than standard input/output?


This is one of the main reasons I ultimately decided not to build my Strukt.app on top of built-in shell commands. For an interactive tool, it's crucial for each operation to be able to report whether it's idempotent, does writeback, etc.

Strukt doesn't do 'rm' yet (and nobody has ever requested it), but it does have other operations that do writeback. Once you add one, automatic updates are disabled, and you have to explicitly run the pipeline each time you want it run.

We really need a new cross-platform low-level framework for lego-like operations, like shell commands but updated for the 21st century.


PowerShell has a design answer for this - each cmdlet takes parameters, but the environment provides common parameters relating to logging, error handling, storing output in variables, and it provides support for "WhatIf" and transactions.

WhatIf instructs the command not to make any changes, but merely to log what it would do. You can "Remove-Item C:\Windows -Force -WhatIf" and it will output "What if: Performing the operation "Remove Directory" on target "C:\windows"" but not delete anything.

(implementing support for "whatif" is optional, so you can't casually use it on any given cmdlet without first checking that cmdlet supports it and makes sensible use of the parameter instead of ignoring it, but the idea is there).


Yep, that's basically the solution that I came up with, too. I have a "dry-run?" flag planned, if I ever end up with enough writeback operations where it'd be useful.

(That's the common Unix shell name for it, though it's not entirely consistent -- which is another problem with using Unix built-ins!)

One thing I've noticed is that it's really frustrating (and not at all helpful) to require perfect validity from a shell-like tool. There are just too many times when you want to edit something into something else, without needing to find a perfect path of valid and meaningful syntax at every intermediate step. So if you pass a flag to an operation that doesn't understand it, the flag gets a (?) badge to indicate that it won't be understood or used, but the pipeline will still run.


I feel this app should have a limited free mode, like "10" results or something. I'd love to try it, but just buying it for $24 seems like a gamble, as I am not sure whether I'd like it or not.


I found a new variation recently. Luckily all my code was checked in.

Meant to type:

    rm -rf node_modules && npm install
Question: what's immediately next to the & key on a keyboard?

Answer: *

Actually typed

    rm -rf node_modules ** npm install
Oops.


If you’re on Mac OS X, try installing the “trash” app (from brew, IIRC) and retrain yourself to never use bare rm anymore. There are probably similar tools on Linuxes.

Theory is: rm is dangerous and should not be used day to day.


Using btrfs on Linux I have a job that takes a CoW snapshot of the whole filesystem every 10 minutes just so that I can undo whoopsies. I don't retain all of these snapshots, only the most recent one.
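
The job itself can be tiny; a minimal sketch, assuming a /.snapshots directory exists on the same btrfs filesystem:

    # crontab: */10 * * * * /usr/local/sbin/snap-rotate
    # keep one rolling read-only snapshot of the root subvolume
    btrfs subvolume delete /.snapshots/last 2>/dev/null
    btrfs subvolume snapshot -r / /.snapshots/last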


Or simply have local, quick, regular and tested backups.


yikes!

It's always been surprising to me that there isn't a built-in "undo" for `rm`

e.g. why doesn't `rm` just move files to /tmp?


You can make a trash command if you want and a cron job to periodically remove everything over x days old.
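
For instance, a minimal sketch (~/.trash and the 7-day window are arbitrary choices):

    # move to a holding area instead of deleting
    # (naive: name collisions in ~/.trash overwrite)
    trash() {
        mkdir -p ~/.trash
        mv -- "$@" ~/.trash/
    }

    # crontab entry: purge anything trashed more than 7 days ago
    # 0 3 * * * find ~/.trash -mindepth 1 -mtime +7 -delete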

The default rm doesn't do this because shell users are supposed to be able to figure out how not to delete things they want to keep.

Kind of like not pointing a gun at people who you don't want to kill.

The undo for rm is regular backups.


Journaling works well here too, but in my experience the interface sucks on every platform. Eg, it should be trivial to view recent unlinks. There’s simply more reason to invest in backup software, which has an excellent interface even if it is a “heavy” operation.


That is not what journaling is for. Journaling is for maintaining filesystem consistency. If you want access to recently deleted files that is a different task.

On my system I have snapshots of my home directory created every 15 minutes. Not quite the same thing, but it also helps me when I have incorrectly modified a file.


Well, it’s a loss for us all that the designers were not more imaginative.


> e.g. why doesn't `rm` just move files to /tmp?

And do what when tmp gets full and services & applications start crashing?

When someone implements that undo, someone's going to need to write a tool that really does remove files, really. Will the next guy wonder why really-rm doesn't have undo built in?


What about an undo that expires in 1 minute? The Gmail website does that for sent emails (for 5 seconds, I guess).


> What about an undo that expires in 1 minute? The Gmail website does that for sent emails (for 5 seconds, I guess).

Same question as parent: "And do what when tmp gets full and services & applications start crashing?"


You could do what is more or less an industry standard when it comes to this and warn the user that there's not enough room to allow an undo when deleting said file(s) and prompt if they want to delete it permanently instead.

It's not rocket surgery.


I'm never that nervous when doing `rm -fr` on my Plan9 or FreeBSD or macOS systems, which all have frequent snapshots turned on.

We could all just work with WORM drives instead.


I am similarly not worried on my Linux system, which also has frequent snapshots turned on. Instead I worry about bugs in btrfs, but that is another story.


> It's always been surprising to me that there isn't a built-in "undo" for `rm`

Have a look at the `trash-cli` package by Andrea Francia.


Mildly related, I wish rm would check that you’re not unlinking the current directory. I’ve never wanted the behavior when I accidentally invoke it.


alias rm to rm -i (always prompt)

I don't do this because my idiocy must be punished disproportionately. Also, you end up automatically pressing y enter.


sysadmin here, yep, aliases are how I deal with dangerous commands as well.


And when you get used to your carefully curated crutches and have to work on someone else's machine, that's where the fun starts.

Aliases can be way more dangerous; think about what you're doing. This is what backups are for.


This is true, which is why I don't use them much!


You could easily make rm inactive (remove execute flag, move it, ...) and make rm an alias for send to trash (gvfs-trash, trash-cli, or whatever).


That's the first thing I thought and it's mentioned in the readme file itself.

I don't think I'd ever want to use this tool. Every keystroke could be an arbitrary command, and I wonder about the performance of commands with a lot of I/O or processing.

Conceptually it's really neat and I get it, but practically I find things like parameter completion in Fish much more useful.


One of my favorite quotes:

``Remember, a temp file is just a pipe with an attitude and a strong will to live'' - Randal Schwartz [comp.unix.questions]


I never thought of it that way. Framing it that way is ingenious.


Pipes used to be temporary files on Unix.

* https://unix.stackexchange.com/a/450900/5132


This looks like a fun and helpful tool! (Perhaps best to ignore the notice from lshw and not run it as root, ha ha. Maybe even only in some kind of sandbox. I bet they could add a --safe flag so it would make an on-the-fly Docker image with a copy of your cwd, and run the commands in there. Then you have different problems I suppose, but still....)

I've never seen `paste - -` before, but that is amazing too! I can't count how many times I've wanted something like that. Surely that will help my own shell pipelines in the future.
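
(For anyone else who hasn't: `paste - -` opens stdin for both of its file arguments, consuming alternate lines, so it joins consecutive line pairs with a tab:)

    $ seq 1 6 | paste - -
    1       2
    3       4
    5       6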

Also I've never seen `foo |& bar` before. What a neat idea! The surprisingly-valid creativity of it reminds me of Duff's Device. Has anyone else ever found uses for that idiom?

After so many years working in the shell, it's such a rare and delightful treat to learn new tricks. Thank you for sharing this with us!


It is a pretty neat trick. Kinda hard to Google for though. Here is the bash manual page on it: https://www.gnu.org/software/bash/manual/html_node/Pipelines...
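
In short, `foo |& bar` (bash 4+) is shorthand for piping both stdout and stderr:

    ls /nonexistent |& grep -c 'No such'
    # behaves the same as: ls /nonexistent 2>&1 | grep -c 'No such'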


Making a docker container to run the command in is a good idea, but not helpful if the container doesn't have the same filesystem. I love the idea of something like this, but I'm not sure if I would ever use it out of fear of unintended actions.


If you didn't know about paste(1), maybe you don't know about comm(1) either. There are a couple of others like that which I typically fail to remember because I use them only every now and then, but when I do they usually are a lifesaver.
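
For reference, comm(1) compares two sorted inputs column by column; suppressing columns picks out intersections or differences (the file names here are placeholders):

    comm -12 <(sort a.txt) <(sort b.txt)   # lines present in both files
    comm -23 <(sort a.txt) <(sort b.txt)   # lines only in a.txt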


Looks great. Personally, I have been using a "cache" function to cache output of expensive commands. This has helped me iterate faster on pipelines that call into apis.

  function cache {
      cmd="$*"
      # base64 output can contain '/' and newlines, which would break
      # the cache file name; strip/replace them
      name=$(echo "$cmd" | base64 | tr -d '\n' | tr '/+' '_-')
      if [[ -f ./cache/$name.exit ]] && [[ $(cat ./cache/$name.exit) -eq 0 ]]; then
          echo "cached" >&2    # notice goes to stderr; stdout carries the data
      else
          mkdir -p ./cache
          eval "$cmd" > ./cache/$name.out
          echo $? > ./cache/$name.exit
      fi
      cat ./cache/$name.out
  }

  cache ls /


I use something similar, created by a friend of mine, called 'bash-cache': https://bitbucket.org/dimo414/bash-cache/src/default/

It hooks specific functions in Bash and keys off of their arguments.


I like this. I currently do the "plumbing" the OP refers to using files. This has generally worked for me, but want to give this tool a test run and see if it's faster than my current approach and this cache command.


I do progressive filtering a lot. It's convenient to have all the intermediate results somewhere temporarily.

What I do is type the command in one acme window with the working directory, and middle-mouse-swipe it. Acme shows the result of that command in a +Errors window with the same working directory. I change the name of this new popped-up window (to something like `+Errors-cmd`), type a command that operates on the file name `/mnt/acme/$winid/body`, and middle-mouse-swipe it again. Acme shows the result of the new command under `+Errors` again. Rinse and repeat until I'm satisfied.

I guess this way it gives a convenient temporary place for all the intermediate texts. I can of course edit them in place if I want.


A rare sight, another user of Acme on HN!

You probably know it already, but there is also a mouse chord for piping selected text into a command (2-1).


I was curious what this plumber could be.

2-1 just concatenates two strings and runs the whole string. I wouldn't call it a pipe.


This is awesome, but I am absolutely bloody terrified of using it on my systems.

As is stated in the README, if you write 'rm' or anything like it... oopsie oops.


Right, that's one problem. The author says:

> But you'd be careful writing "rm" anywhere in Linux anyway, no? Also, why would you want to pipe something into "rm"?

But the thing is that you don't need to intend to pipe to "rm". Maybe you were typing something else, like a command called "rmore" or something. This danger is also not strictly limited to `rm` and `dd`: you have to be careful that no prefix of the command you intend to write can be interpreted as something dangerous.


The author's theory there about user behavior seems dangerously wrong.

I'm quite careful writing command lines because the whole experience of the shell is that of working with sharp knives: you get a lot of power, but if you screw up, you'll feel the pain. The point of this tool is to take away a lot of the pain that teaches people caution. Its whole theory is "just try a lot of stuff as you explore what you want".

In their shoes I'd look at using some of the container/security magic as a way of nerfing commands. If the on-a-keypress runs work in a way where they can't make changes to the filesystem, that seems way better to me. Even better if the tool then reports "Would have deleted 532 files" in a warning color at the top of the output.


> I'm quite careful writing command lines

A safety measure I picked up from a sysadmin while watching over their shoulder: start writing nifty command lines by prefixing them with # first, to prevent havoc when fat-fingering the [enter].


Yes, though I prefer: echo ...

The reason is that you'll see expanded variables and wildcards before committing to them.


But prefixing with echo only disables the first command in a pipeline, while commenting it disables the whole thing.


It also executes command and process substitutions, opens files etc. But granted, it's a useful trick in some cases, after passing the fat-fingering stage.


You can move the echo to the part you are currently working on.


It's a pity you can't do something similar to SQL: when writing UPDATE/DELETE always write the WHERE clause first


Something I learned early on when writing possible dangerous SQL selects:

  CREATE TABLE mytable_backup AS SELECT * FROM mytable;
  SELECT * FROM mytable WHERE condition;
  DELETE FROM mytable WHERE condition;


Or just let the DB work for you with BEGIN; + (select/update/delete) + COMMIT; or ROLLBACK;


What I like about SQL is that it's got the double safety of ";". To accidentally run a SQL statement before it's ready in a command line, you'd have to both add the ";" AND hit Enter, or have the bad practice of adding ";" before the statement is ready, or use a bad SQL command line that sees the ";" as optional.


I always start a session with BEGIN, write UPDATEs/DELETEs as SELECTs first and (if possible) use a staging database until I'm fine with the result.


In SQL, you can at least wrap the whole thing in a transaction. Then, just roll the whole thing back if anything came out wrong.


I usually SELECT COUNT(*) WHERE ..., and only subsequently overwrite with DELETE.


Thanks for the tip, I'll definitely be using this. Fear of accidental rm -rf * keeps me up sometimes.


Looks like you need to know about the 'sleep' command...


unzip; strip; touch; finger; grep; mount; fsck; more; yes; fsck; fsck; umount; sleep


`fc` helps. Setting `FCEDIT=ed` helps a lot.


This command can be dangerous itself.

For those who don't know... "The fc utility lists, or edits and re-executes, commands previously entered to an interactive sh."


It seems like most of the scariness could go away if only you had to press a key combo each time you wanted to run the command / render the preview


Yes. Like Enter? People might still like the fact that the cursor doesn't need to move from wherever it's editing the command to run it, and can still continue to edit there.


That's the main direction I currently seem to be converging on, based on the feedback I'm getting. That said, one person on lobste.rs suggested trying to play with Linux capabilities, to make the filesystem immutable... now that would be a seriously fancy trick if it works!


Can I say that I hope you don't make this tool too Linux-specific. It seems to run pretty reasonably on Mac and would probably also work on BSD variants. Since I use all three (Mac, BSD, Linux) on a semi-regular basis, I'm pretty unlikely to integrate a tool into my Unix workflows if it doesn't work reliably on most unixes, so if you can look into cross-platform ways to accomplish things, that would widen the audience of people who could get value from the tool.

For example, one way to address the rm problem would be to give it a "safe mode" that drops user permissions and runs as nobody. You might also create a blacklist of common, potentially-dangerous unix commands that you don't execute. Both of those are examples of how to address the need without resorting to Linux-only tricks.


How can I do the "drop user permissions" thing? I haven't seen such a keyword mentioned by anyone yet, sounds interesting. Does this not require some kind of syscall, however, thus not really being cross-platform anyway?


‘nobody’ is an ordinary UNIX thing. It’s just a user and group (like root, $USER, etc), except ‘nobody’ is meant to have no permissions to any folders or files.

In practice it’s only as good as your users’ ability to avoid the temptation of ‘chmod 777’ (for example), but it seems a good place to start.

As an aside, I had the same idea to do a tool like this as well. Except mine would have been built into the $SHELL I’m writing (as that has a heavy focus on being more IDE-like than traditional shells). I scrapped the plan for pipe previews precisely because of the dangers we’re discussing. However your tool makes a lot more sense, because at least it is manually invoked, whereas my original plan was to have that feature automatic and baked into the shell - which is an order of magnitude more dangerous. So I’m glad someone else has run with this idea


:)

I've seen somebody mention the idea of applying this to a shell, but only after pressing some special key (e.g. Ctrl-Enter). Would probably make it un-dangerous enough? This seems to be what people want of up too, anyway.

As to the "nobody", from what I'm reading, it seems you'd first have to be root, to be able to switch to "nobody"... so this doesn't really seem to be useful to me in this case... :/


The other way to do it is to change ownership of the executable to nobody:nogroup and set the setuid/setgid bits.

Perhaps you could simply put those chown/chmod commands in the docs:

    sudo chown nobody:nogroup path_to_up
    sudo chmod ug+s path_to_up
I've tested it, and it seems to prevent deleting files with rm. The downside, however, is that it also prevents writing the results to up1.sh. Perhaps if writing to the file fails (or you detect the process is running as nobody), you could send the finished pipe sequence to stdout instead of a shell script. Then people could run it like:

    cmd | up > up1.sh


The solution there is not to set any writable bits on the up executable. Then only root will be able to write to it (which is ideally what you want for any tool within /usr/bin anyway).


That’s an interesting argument for having it hot-key triggered. The shell in its current form already supports hot-keys, so I could plug right into your tool verbatim that way. The only issue is that you fork a shell, so any $SHELL-specific behaviours of my shell would be lost.

I appreciate this is a personal project and sometimes there is nothing more annoying than feature requests; but if you ever decide to add a flag for choosing alternative shells, then drop me a message (raise it as an issue on github.com/lmorg/murex) and I’ll add ‘up’ as an optional 3rd-party plug-in.


i fear a blacklist won't be enough. better a user configurable whitelist. every time a command is not on the whitelist, don't run it, but give the user the option to add it to the list instead (with a multi key action, so that it can't be added too easily or by accident)

greetings, eMBee


Consider that the filesystem isn't necessarily the only thing you can mess up by accident. It's much more unlikely, but you could accidentally trigger a bunch of unwanted network requests (e.g. piping into `xargs curl` or something), among other possible things with external effects.


Maybe cutting off network access could be done too? That would reduce some potentially useful features, but it could be a parameter to enable it (hey, we're talking Linux, never enough parameters! ;). That is, assuming that a syscalls/capabilities trick like that is possible at all. (Anyone can say more here? Some kernel hackers? This is HN, ain't it?)


Yeah, I think almost any ability to produce an effect other than printing to stdout and stderr should probably be restricted by default, with a flag to enable it. If people adopt this and frequently use specific capabilities, they can add a shell alias to enable their preferred set of capabilities.


Or, one possible alternative would be to always prevent any side-effects other than printing output, and then display a warning saying something like "This command tried to access the network/modify the filesystem/be naughty in some other way. To allow this and re-run the command, press Control+Shift+Enter"


capabilities should certainly work. for portability, setting a non-existent proxy should also help. most commandline tools should honor shell proxy variables and fail the request.
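
a sketch of that trick (assuming the tools in the pipeline honor the conventional variables):

    # point proxy variables at a dead local port so requests fail fast
    export http_proxy=http://127.0.0.1:9
    export https_proxy=http://127.0.0.1:9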

greetings, eMBee.


I think from a design perspective, it's better to build your sanity checks and safeguards into the UI instead of some fancy abstraction that's running in the background. Throw in some options to make it user configurable and you have a pretty slick solution that behaves in a way that'll feel familiar to gurus.

I think requiring a keypress by default to run any commands and maybe even throwing up a warning for a list of known dangerous ones (perhaps user configurable) would work well. A really fancy solution would be to do this stuff via some sort of linting.


I don't see how linting could work, I'm afraid. There are obscure options in zip, git, grep, and who knows what else, not to mention full-blown interpreters like awk and perl, that can edit files via a myriad of syntaxes; and then there's the halting problem. Similarly with whitelist/blacklist, I'm not currently really convinced it could help much; take `xargs rm` vs. some hypothetical `foobar -rm` command + params (where maybe -r = recursive, -m = monitor?). Keypress, on the other hand, sounds to me like it could be a reasonable default compromise... still with an option to switch to "fully accelerated, fast mowing" mode at any point, if one so desires...


> `foobar -rm` command + params (where maybe -r = recursive, -m = monitor?)

An example of such a foobar with those exact options would be `inotifywait`. :)


Agreed. As another commenter said KISS.


Too fancy in my opinion. Personally, I prefer KISS solutions. Having an immutable filesystem in up would be unexpected, and I'm sure there are other destructive things that can be done that don't involve the filesystem. As odd as it may seem, I may also want to be able to do filesystem-destructive things in up, so why shouldn't I be able to?

EDIT: As an example, I may exploratively be playing with find options to match different sets of files in a directory hierarchy. Once I see I've matched the files I want, I may want to add a `-delete` to the end of the find conditions I specified to delete them all. That seems like a useful use of up, but making the filesystem immutable would disallow it.


Hm. I think in my ideal vision, you'd then quit up, and somehow magically have the pipeline ready at your fingertips on the shell prompt, so that you can just add the `-delete` there in a "regular" fashion, and press Enter. Not quite sure how this could be done with bash easily. I think the upN.sh file is not a bad approximation however. (Someone on reddit (I think) seemed to mention that zsh seems to have some functionality which could allow building something like that over up.) Interestingly, this seems to me to support the idea that limiting syscalls/capabilities could be a good approach (if at all possible, that is). Potentially maybe also cutting off network access? But even if I add some safety options, even if I make them the defaults, I'll try really hard to keep an option available for people who want to play with it raw, with no safety belts.

edit: I totally get you as far as KISS goes; that's why I built and shared the tool as is in the first place ;)


> Interestingly, this seems to me to support the idea that limiting syscalls/capabilities could be a good approach (if at all possible, that is). Potentially maybe also cutting off network access?

Yeah, I still think it's not a good approach. rcthompson gave an example with network access, but that doesn't mean that bad things can only happen through the filesystem or network access. You can also kill processes, shut down the computer, and many more things. I don't think you can guess what mistakes the user might make, or which actions were really mistakes and which were intentional.

Also, if you can avoid special permissions or capabilities for your core functionality, then that's better. People should be conservative in giving special permissions to programs, and I don't think it makes sense to give "up" the ability to make a filesystem immutable, even if it's in an isolated namespace.


Hmmm; so what would you think of the other popular suggestion, to add a (probably default) mode, where only after pressing, say, Ctrl-Enter, the command would be executed? With another shortcut allowing to switch back and forth to the "no seatbelts, confident, fast moving, fast mowing hacker" mode?


I would just do the Enter thing (I can't think of a reason to prefer Ctrl-Enter), and forget about the other mode. If you really want to do that other mode with a super-restricted environment, just have the code prepared to be denied permission (to setup that environment) and not assume that it has been granted. There will be users that won't feel comfortable adding those permissions/capabilities to "up".


you are building up a pipeline, you don't want it to be destructive until you are done.

so at the very end you could apply a special action that now runs the resulting command with safety disabled.

or toggle between safe and unsafe mode at will, with safe mode being default.

it's not necessary to have the whole up utility run with the filesystem disabled. but it could apply the restriction only to the pipeline it's executing. that would allow it to selectively apply the restriction as requested by the user.

greetings, eMBee.


Another solution is to only allow commands from a whitelist to be run. If the user types something not on the list you say "rm is not on the list of allowed commands, press enter to run and add it".


++

And rather a blacklist, since a blacklist would be much smaller than the potentially infinite number of programs to manage for a whitelist.


I think the whitelist makes more sense, since you can never say for sure that you've added every potentially harmful program to the blacklist. If it's easy to add things to the whitelist, as the parent comment proposed, I think the idea would work very well, especially considering most people have a relatively small number of commands they pipe other commands into.

Btw, if you want to see all the commands you've ever piped data into, sorted by frequency, you can use this command (which I put together using `up`!)

    history | grep -o '| \w\w*' | sort | uniq -c | sort -n | nl


Another thing you might want to investigate (for Linux, not sure about cross-platform alternatives) is OverlayFS[1], which allows you to create union filesystems where some layers are read-only and the top layer is read-write. This can allow destructive operations like rm without actually removing anything from the lower (read-only) layers, using whiteouts.

I'd have replied to this on lobste.rs but don't have an account. I'm not a heavy poster but I'd love an invite if anyone has one going :)

1 - https://www.kernel.org/doc/Documentation/filesystems/overlay...
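
A minimal sketch of such a mount (the paths are illustrative; needs root):

    mkdir -p /tmp/up-upper /tmp/up-work /tmp/up-merged
    sudo mount -t overlay overlay \
        -o lowerdir=$HOME,upperdir=/tmp/up-upper,workdir=/tmp/up-work \
        /tmp/up-merged
    # "rm" under /tmp/up-merged only records a whiteout in the upper
    # layer; the files under $HOME are untouched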


I need your email for an invitation; if you can send me your address via my own email (see my profile), or via keybase, I'll send you an invite. I don't see your email advertised in your profile here on HN, so my hands are tied for now :)


Thank you very much! I hadn't realised my email address isn't visible here. Maybe I need to move it into the "about" section.

In case you aren't aware, your email address isn't visible in your profile either. I've emailed the GMail address that I found on your website :)


Eshell can be set up to do this. It's neat for small commands. I turned it off, though.


Can you elaborate a bit more? Which functionality are you referring to? And why did you disable it? Really curious! (Author of up here) I get it that by eshell you mean a shell in Emacs? (That's at least what I'm getting as a first result from google)


Indeed, I meant the emacs shell. https://www.masteringemacs.org/article/complete-guide-master... has a good overview in the section regarding the Plan 9 Smart Shell.

I ultimately just didn't find I used it much. If I am iterating, org-mode with shell sources feels a bit more natural to me.

Edit: and to be clear, I meant specifically to have the output below the command with the command still in edit mode. Was not trying to say it is the same as up.


Well, that's only completely horrific. Cool idea, but should definitely be limited to a fixed list of commands or something. Even then, some stuff that might be handy like xargs could be quite dangerous...


> But you'd be careful writing "rm" anywhere in Linux anyway, no? Also, why would you want to pipe something into "rm"?

Not exactly a pipe into `rm`, but `foo | xargs rm` is a common enough pattern.


It’s also dangerous if your filenames have spaces in them.
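
The usual mitigation, for reference, is null-delimited filenames:

    # -print0/-0 survive spaces and newlines in file names
    find . -name '*.bak' -print0 | xargs -0 rm --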


I was thinking about a similar tool to this one, and this was the biggest obstacle I could think of. Also, write `touch` and you create a file for each keystroke.

To become something more than a hack you have a few choices:

- a blacklist - will never be enough, still arguments may be problematic

- a whitelist - will always be not enough, arguments may be problematic, but a bit less than with a blacklist

- limiting permissions somehow - tricky

The last and best option IMHO would be to wrap it in a sandbox, where all filesystem access is behind an overlay (i.e. mount namespaces and overlayfs). This way if you are satisfied you can apply changes (if there is anything to change). The overlay would be removed and created on each keypress. It may be also possible to wrap process access, so one could safely play with kill, but I'm not sure. Even network settings to some extent. But there is nothing you can do with curl -X POST or something similar.


Hmm, is there a straightforward way to be a bit more careful without detracting from the usefulness? What about a default blacklist of commands with the ability to override it through a config file?


I don't think a blacklist could possibly be comprehensive enough. I think you'd have to use some OS permission-limiting system to prevent it and any subprocesses it spawns from having any write access to the filesystem.


I think the tool would be much more useful with a whitelist. Do this only for grep, awk, sed and other similar tools.

Of course, much more thought is needed to try something like this. Somebody could as well use awk with its system command to do whatever..


Even an incomplete blacklist might be helpful, just in the interest of keeping perfect from being the enemy of good.


Author here: I hope something like that (syscall/capabilities limits) could work. If it is possible, it would mostly solve the problem, I believe. I'm kinda starting to realize that probably any command modifying some external state is potentially somewhat risky already, by potentially spinning up some exponential feedback loop. (One person on lobste.rs mentioned that foo.bak.bak.bak.bak files could easily get created.) Regardless, I'm generally considering adding a shortcut/option to pause/unpause, and only execute on Ctrl-Enter when paused.


Another possibly reasonable option would be to create a (configurable) whitelist of commands that are considered safe, and keep running the pipeline automatically as long as it only contains whitelisted commands. Any time a non-whitelisted command is introduced, stop auto-running and require Ctrl+Enter or something, until the command once again consists of only whitelisted commands.

This would save you, for example, if you had a custom command called "gr" which was short for "get rid of current directory" (obviously chosen as a pathological example since it's a prefix of grep). As you type the word "grep", auto-running is paused because "g", "gr", and "gre" are not whitelisted, and then once "grep" is fully typed, it recognizes that "grep" is on the whitelist and resumes auto-running. And it never ran the dangerous "gr" command.


Or perhaps use filesystem snapshots as an undo option ... if your fs supports them, of course.


Pretty much. Is rm blacklisted? OK. How about bash -c "rm"? cp? mv? vim?

...? :D


Yah, a whitelist of commands which includes bash would probably be best. You'd be fine using it and can simply switch to chainsaw mode by adding bash to the command


If it could be activated only by hotkey, that would be helpful.


Yes, maybe similar to the way tab completion works.


I hate to jump straight to Docker, but that seems to be a quick way to restrict access to the local file system. This of course limits utility, but would be much safer. Plus I think the usefulness of a tool like up is primarily in munging the input text anyway.


Docker is an obviously bad solution to this. If you can run Docker, you de facto have root on that computer ^1.

Firejail could do exactly the same thing, but without requiring the user running it to download an entire second operating system, or requiring them to have root. Also, the sandboxing mechanisms that Docker uses are just generally available and aren't hard to use, so if they went that way they may as well just use the actual syscalls that do what they want, instead of importing an entire other operating system to run your commands.

This is where my rant about docker, and the habits it encourages, would go. If I could figure out a way to phrase it politely.

1: https://github.com/moby/moby/issues/9976


Wow, firejail seems super interesting, thanks a lot for the idea and mention! I'm not sure if I'll manage to use it, but certainly a good direction for some further research!

https://firejail.wordpress.com/


Docker containers don't need to be (and often aren't) "entire operating systems." Good point about it requiring root, though.


The problem I was suggesting could be solved by Docker wasn't the privileges of up itself, but the commands you write within up being potentially destructive. I didn't say I thought Docker was a good solution.


You can use unshare to create a read-only view of the file system, without going all the way to containers.[0]

[0]: https://gist.github.com/cocagne/4088467
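
A rough sketch with util-linux unshare; treat it as an experiment, since mount behavior inside user namespaces varies with kernel configuration:

    # new user + mount namespace; mounts made here are invisible outside
    unshare -rm sh -c '
        mount --bind / /mnt &&
        mount -o remount,ro,bind /mnt &&
        ls /mnt    # a read-only view of the filesystem
    '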


This should be the default mode, and you could activate it with a --rw flag


Thanks a lot for the link! I'll totally try to look into this. If it really proves to be as easy as a single syscall... Oh, wow, now that'd be a killer...


Think a whitelist would be much more appropriate for avoiding potentially harmful commands.


How about not evaluating input until user hits a key, e.g. tab? Then any blame for rm lies squarely with the user.


That's how bash works by default. This tool would be of no use if that's how it worked.


Bash does the autocompletion, but not evaluation and display of the output, does it not?


He was referring to “hits a key” —> [enter]


A potential solution is to create a user who has no or limited write access, and modify up so that it always switches to that user.

I wonder if there’s a way to do this without requiring the creation of a new system user. Some way to revoke all write access for the current process.


Capabilities do this. It's the same mechanic used by containers to restrict their access. See man capabilities(7)


Seems to be linux only at first blush.


I'm ok with focusing on Linux as the prime target for up. Though I'm totally trying to think about cross-platform approaches too, obviously.


FreeBSD has a capabilities system called “Capsicum”.

https://www.freebsd.org/cgi/man.cgi?capsicum(4)

https://wiki.freebsd.org/Capsicum

https://www.cl.cam.ac.uk/research/security/capsicum/freebsd....

Capsicum is convoluted though.

OpenBSD has pledge and unveil, which from what I have seen are very elegant.

https://man.openbsd.org/pledge.2

https://man.openbsd.org/unveil


It does occur to me that if this did system-call redirection and banned the unlink (and maybe rename?) syscalls from working in its executed commands, you could get a fair degree of safety.


Is it possible to do system call redirection??


I don't know of a way to redirect syscalls, but they can be limited.

https://www.kernel.org/doc/Documentation/prctl/seccomp_filte...


I think most of the useful commands ought to be able to be run in a namespace where they can’t do much. Eg if they can’t see any files then they can’t delete them. Unfortunately they wouldn’t then be able to read config files. Eg I would expect grep looks for a config file to decide what colour to highlight matches and so on.

Perhaps one could run them as a fresh user with few permissions, except it could still write to files if they are writable by “other”


Is it possible to configure namespaces so that they allow read-only access?


i'd be ok with manually configuring up to tell it to copy certain config files to the sandbox so they are available for the commands to run there.

there should not be too many commands that need configuring.

greetings, eMBee.


A solution is to have the tool run first in a container or sandbox where any changes can be rolled back ("test mode").

Once the command looks promising on the test container, run the command again in a fresh container to confirm, and maybe see a list of affected files. Finally run on the main filesystem outside of any test container.

Edit: similar ideas with filesystem snapshots were already suggested elsewhere in the thread


Maybe a whitelist of commands it can run?


A whitelist would make it far less useful. A blacklist would still make it possible to execute dangerous commands.

The only solution that I would find acceptable would be not to execute incomplete commands as they're typed.


One could sandbox the whole thing with a tmpfs+overlayfs to avoid such accidents.


One possible way to deal with some of those cases would be to intercept those commands and instead display something about what would happen e.g. `deletes 42 files`


I feel like you could replace this tool with a hotkey that runs the current command without clearing the line, sort of ”what if I ran this as is”


Wouldn't it be possible to limit that by cgroups/namespaces? Possibly remounting everything as read-only.


Yes this is an amazingly dangerous and stupid idea. Sure it looks cool but so does a Lamborghini until it catches fire or you crash it into a tree at 150mph.

Giving a user an opportunity to correct mistakes before they kill themselves is a major feature of Unix. Sure, when you've decided you know what you're talking about but don't, it will quite obediently shoot your face off. We don't want to make that bit easier though.

Edit: please don’t take this as derision but more factual. It certainly looks cool and opens the idea for further discussion.


>Giving a user an opportunity to correct mistakes before they kill themselves is a major feature of Unix.

That's the complete opposite of the Unix philosophy. That's, in fact, the MIT/LISP philosophy.

Unix's motto is: die fast.


Actually it’s more RTFM before you drive the car. A powerful skill many have forgotten which is supplanted by autonomous tools to make tripping over claymores easier.


Dangerous, yes. Stupid, though? No. It’s a useful tool and a good idea that merely needs further refinement.


Stupid is too harsh, and I don't want to discourage anyone from contributing to open source, but I'm not sure this is all that useful either (at least not to me). Is the utility in not having to press Enter because this'll do it for me between every character? I can only think that the reason people like it so much is because it looks cool or most people don't know how to use command line editing keybindings effectively and like that they won't have to hit up arrow and move slowly across the command to edit. The fact that it doesn't have readline support, timestamped history support, or output the output of the last command to stdout at the end already seem like feature losses when compared to using a shell normally.


At least one neat benefit I can imagine is not flooding your history with 20 minor edits to a regex in a long pipeline, and not flooding your stdout with garbage from a bad regex

Inversion of the output (viewing the top, instead of the bottom, without scrolling on longer outputs) is also of note

Imagine if this were simply the shell, with all its normal editing facilities, but laid out in this fashion. It seems obvious to me there's usefulness anytime you're trying to create a longer pipeline: doing the same thing you would in the shell normally, but without leaving a mess behind.

The issues on your list are all resolvable (and probably trivially so), and thus not really relevant to the question of whether this thing is worth having in the first place (which requires more fundamental questioning). E.g., the issue with rm executing when you're typing in 'rmore' is absolutely fatal in the current design and makes it unusable. That can't be fixed without a bunch of hacks, or losing one of its main features (swapping to enter-to-run is more than fine IMO). With such a fatal flaw, it could be fair to call it stupid and useless.

But for lacking readline support on first public listing..?


I couldn't find a good readline library in pure Go, that would also interact well with the tcell package I am using for the TUI. This annoyed me really hard, but eventually I said, screw that, and went with the minimum I had to write to make an input box usable at all :)


I disagree. The example in the GIF is something I've done many times, usually with a bunch of temporary files. Where the commands are side-effect free and relatively cheap, but the specifics are fiddly, this looks to be really useful. But not universally applicable, no!


I think at the point something like this becomes useful, though, I've usually just dropped into writing a Python script, which more often than not is the right choice, because there's a high probability I'm going to need to do more things soon anyway if it's getting that complicated.


Exactly


After watching the GIF on the README.md I was like "WHERE HAVE YOU BEEN ALL MY LIFE" (actually just the past 10'ish years)!!!

Then I was like, I bet quite a few ancient wizards out there have some pretty amazing techniques they have picked up over the years. And that it would be great to be a "fly on the wall" watching some of these wizards do simple tasks. And that a YouTube channel watching them do simple tasks could be quite revealing and therefore very educational.

Paul Irish did something like this a few years ago and I still use a bunch of the tools he introduced into my purview. Really, really improved productivity!


Not a series, but a great one-off video by Gary Bernhardt, A Raw View into my Unix Hackery, linked below. I haven't bought his courses, but based on what I've seen of his talks and free material, they may get you part of the way (but I don't think all of the way) to what you're describing.

If anyone has taken his courses, I'm sure they could give better feedback on those. Either way, watch the video.

https://vimeo.com/11202537


> And that a YouTube channel watching them do simple tasks could be quite revealing and therefore very educational.

Does such a channel exist? All I've ever found has been tutorials. I want to watch unix veterans use the tools. I've found that to be the best way to learn. Hell, that's how I learned vim.


which channel did you use to learn vim?



Nicely done!

I do a pretty similar version of this with command-| in textmate, but I've wished for/thought about building a command like this to speed up the process!

The textmate approach does have the advantage (or perhaps disadvantage, for large enough inputs!) of emitting immutable "views" of the data at each step, rather than (presumably?) re-evaluating the pipeline each time, which is nice if the steps of the pipeline are the slow part, or to go back 2 steps and start a new pipeline. Maybe a potential future flag? :-)

Automatically emitting an `up<N>.sh` script is super clever too!


Can you please elaborate a bit on the textmate approach? I don't fully grasp it from this description yet... while it totally sounds interesting...


I (partially) recreated your `seq 1 15` example here [1] -- no sleep, but my lshw wasn't as interesting as yours :-) I'm hitting command-shift-| to bring up the "filter through command" dialog.

[1]: https://tcj.io/i/textmate_piping.gif


Ok, I get it now. Thanks! :)


i have not used textmate, but the description let me come up with this idea. maybe it's useful:

consider the command

A | B | C | D

when you process it in up, split the command at the | and process each one separately. that is first run

A;

save the output into a buffer (BA)

then pipe BA into B, save that into BB, and so on, until you have 4 buffers containing the outputs of A; A|B; A|B|C; and A|B|C|D;

now when the user edits, you can see which section is changed. say the user changes the command C into C1, so the pipeline becomes: A|B|C1|D

at this point you can pipe buffer BB (which has the output from A|B) into C1; write that into a new buffer BC1; and pipe that into D;

the old buffers BC and BD can be removed at this point, or you can keep them around in case the user undoes their change and goes back to an old version.
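
a toy sketch of this staging outside of up (naively splitting on '|', so quoted pipes would break it):

    # run each stage separately, keeping every intermediate buffer
    pipeline='seq 1 100 | grep 5 | wc -l'
    dir=$(mktemp -d); i=0; prev=
    echo "$pipeline" | tr '|' '\n' | while IFS= read -r stage; do
        out=$dir/buf$i
        if [ -z "$prev" ]; then
            sh -c "$stage" > "$out"              # first stage: no input buffer
        else
            sh -c "$stage" < "$prev" > "$out"    # later stages read the previous buffer
        fi
        prev=$out; i=$((i+1))
    done
    ls "$dir"    # buf0 buf1 buf2: the output after each stage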

greetings, eMBee.


I've submitted an issue on GitHub, title "Executing incomplete commands is too dangerous".

https://github.com/akavel/up/issues/8


Really cool. The key principle that this tool highlights is the advantage of rapid iteration. Whenever you have a problem, spending the time up front to shorten your test iterations will pay off massively in the long run of the problem's life.


This is nice for linear pipes. But occasionally I find myself building more of a graph.

Bash lets you provide file arguments from pipes via the <() syntax (it's turned into an fd reference via /proc behind the scenes). And you can also wire up additional pipes with fd renumbering, though this gets ugly in bash; some other shells are more flexible there, e.g. dgsh.
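
For example (plain bash; the file names and `mycmd` are placeholders):

    # process substitution: feed command output where a file is expected
    diff <(sort left.txt) <(sort right.txt)

    # fd plumbing: give stderr its own pipeline while stdout goes on
    mycmd 2> >(grep ERROR > errors.log) | sort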

A tool to iteratively build a graph of commands and pipes would be the next step.


This sounds interesting. Can you pull up some history and make a gist/pastie?


That's totally where my long-term vision goes! :D But I'm very unsure if I'll have enough time and resources to be able to reach it :/ My strong hope is however that the Luna language[1] & team may get us there faster, and to an even nicer place actually :)

[1]: https://luna-lang.org


I checked out the original tool Pipecut—it allows you to look at the output after each pipe:

https://youtu.be/CZHEZHK4jRc?t=1273

When I see stuff like this, I check the date to see if it happened before or after "Inventing on Principle" was presented in 2012. (It's always after)


Hmmm, you made me think and try to recollect. And to my surprise, I can't really remember the fact of The Presentation being an influence. But I am quite sure it must have been, I can see no other option. Hm, memory is a tricky thing. And Bret Victor left his imprint on our whole industry.


Related: the percol utility allows you to narrow a process's output interactively: https://github.com/mooz/percol

I use it all the time to select a command from my shell history. It is a very convenient addition to zsh-autosuggestions (one of those things that you use hundreds of times per day without even thinking about it).


Late to the party. This screams like it needs to be part of the shell instead of an addition.

E.g. I type a command in the shell, end it with a pipe, and press enter: it doesn't delete the command but runs a preview of the output's top 10 lines. This way I can continue to edit the command without losing the context or the pipe, or I can decide to redirect it to a file!


Hmm, interesting thought! The tool needs to do some ugly buffering however; would there be place for it in a shell? Also, I suppose some kind of "Ctrl-Enter" or something would be a better idea instead? So that it could also be hit at any point in the line... But how about scrolling through the preview?


I've been considering how I might implement this as part of "Pipe old commands without rerunning" from my project's README. https://github.com/nixpulvis/oursh#features


This is great. Thanks for sharing.

For something a lot dumber: here's a tiny thing to print out pipe contents to allow for cancelation before continuing the pipe.

https://gist.github.com/jasisk/be34ae93b74a3d3e8f0a4daac1237...


The most frequent use case for this for me would be interactive grep. The "fzf" tool has this, along with fuzzy search. And the text doesn't disappear while you type "grep" like it does with "up".

I will definitely try this out too though when I make a pipeline with sed, cut, etc.


You know that less has interactive grep, right? Type & followed by a phrase, then <enter>. To go back: & then <enter>.


Just an awesome tool. I love those moments of "ah, that's trivial, why didn't I think of this already", and it definitely applies to this tool. So special kudos for the idea, but also for how little Go code the implementation took (ignoring dependencies).

There is already some "pre-release" discussion going on over at lobsters btw.

https://lobste.rs/s/acpz00/up_tool_for_writing_linux_pipes_w...


Thanks! :) See also my "prior art" note in the readme ;) I also had the "why didn't somebody think of this already" moment when I thought of it ;) And somebody in fact more or less did, just I wasn't able find it then :)


Reminds me of fzz [0], which is a similar tool but focused more on interactive modification of prescribed components of a known command/pipeline than appending a pipeline.

[0] https://github.com/mrnugget/fzz


Novel and interesting idea, but honestly I can't see any other use for this than grep, and at that point you might as well just pipe the output straight into vi, less, or your other favourite editor, where incremental search is one keystroke away.

Sure, the shell has some powerful tools like jq that editors usually lack, but otherwise several levels of combo piping through cut, wc, awk, etc. are quite rare, and are usually only done in crazy shell scripts that should actually be rewritten in a more appropriate language.

And the dangers of automatically running any commands shouldn't be downplayed. If you only whitelist a few programs this would be much safer, most users would only use this for grep. Or run the whole thing through a docker container.


rm is not the only issue. Another problem may arise if your command is not idempotent with respect to the state of the system. Especially if it is not idempotent with respect to system state and requires multiple parameters, since it will execute many times as the parameters are typed. Maybe having a configurable debounce on key press with a reasonable default would be sufficient to get around this issue.

What would be really cool is if this tool ran in a pretend mode and showed you what the results would be without actually mutating the system. Then, once the user is happy with the results, the command can be executed for real.

This whole concept reminds me of helm for emacs :-). Thanks for sharing!


Thanks for the ideas, and for the good words! :) Do you know of any nice screencast/video where some similar functionality of helm would be shown? Does it also work for shell commands in some way? I'm not an emacs user...


> This is achieved by boosting any typical Linux text-processing utils such as grep, sort, cut, paste, awk, wc, perl, etc., etc., by providing a quick, interactive, scrollable preview of their results.

    lshw | grep network -A2 | grep : | cut -d: -f2-
A better idea would be to work on tools that have structured output, so people can select the keys they want and don't have to scrape.

    $ Get-NetAdapter | where status -eq 'up' | select InterfaceDescription

    InterfaceDescription
    --------------------
    Qualcomm Atheros QCA61x4A Wireless Network Adapter


Sounds like fun, but I think there are other ways to achieve a similar workflow without an extra tool. For example, I like to use a tmux session with two panes. On the one side there is a `vim myscript.sh` and on the other side there is a `watch myscript.sh`.

Very similar result and using rm is not that dangerous as you can finish writing your code and it gets executed as soon as you save the file (if you don't like the watch delay you can use one of the many inotify based watch tools or set -n 0.1 or something that suits your needs).
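
Roughly, assuming the `myscript.sh` from above:

    tmux new-session -d 'vim myscript.sh'
    tmux split-window -h 'watch -n1 sh myscript.sh'
    tmux attach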


I use `entr` for a similar effect; a little nicer than watch, though I don't think it's available on all OSes.
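
For reference, entr takes the list of files to watch on stdin:

    # re-run the script whenever it changes (-c clears the screen first)
    echo myscript.sh | entr -c sh myscript.sh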


Well damn! This is a really elegant solution for a problem that has been bothering me for quite a while, and for which my own planned solution was much more complex :O


Thanks a lot for the good words! :D <3


Why wasn't there a tool for this earlier? I am a very aggressive piper, but I used to hit up-arrow, a series of Ctrl-W, and enter, and wait every time.

Thank you




One thing that would probably be useful: when the command at the end of the pipeline is not yet completely typed or parametrized, still show the last result alongside the error message, instead of replacing everything and losing the context the user had.


* https://github.com/akavel/up/blob/master/up.go#L574

Please at least respect the SHELL environment variable. (-:


Eheh; sorry, MVP! ;) and thanks a lot for taking it lightly and commenting in such a kind and playful way :D It's totally on the long list in some way... please see https://github.com/akavel/up/issues/2 ! :)


There is a vim plugin that enables "live preview" for python : https://github.com/metakirby5/codi.vim


What about using LD_PRELOAD to load a FS shim that prevents writing to the FS?
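
A toy sketch of such a shim; it covers only two of the many write-path calls, and (as the reply below notes) won't catch anything that bypasses libc:

    # build a tiny shim that makes unlink()/unlinkat() fail with EROFS
    printf '%s\n' \
        '#include <errno.h>' \
        'int unlink(const char *p) { errno = EROFS; return -1; }' \
        'int unlinkat(int d, const char *p, int f) { errno = EROFS; return -1; }' \
        > nowrite.c
    cc -shared -fPIC -o nowrite.so nowrite.c
    LD_PRELOAD=$PWD/nowrite.so rm somefile    # fails with EROFS instead of deleting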


I don't think LD_PRELOAD would be enough; there are programs which don't use libc and reach straight for syscalls (e.g. the whole Go ecosystem/runtime)


Sure; it wouldn’t be perfect safety but would work for a large swath of Unix tools.


What about making the tool behave sanely so you don't have to play tricks with the OS to protect yourself from its behavior?

If I type "grep", I don't want it to try to execute "g", "gr", and "gre" as I type it -- not if those commands don't exist, and especially not if they do.

Just don't execute a command unless the user requests it (say, by typing Enter). Problem solved.


It would remove the live execution bits, which are kinda neat. I think the CLI has a lot of life left in it, and ideas like this are worth exploring. Perhaps compromises are necessary, but I would love a tool that pretends to execute something, shows the results, and gives the option of committing those changes. Think of a `find | xargs grep | rm` that then shows the output but prompts the user to commit.


Maybe if it automatically chrooted everything to prevent side effects (or better yet: isolated them to a ramdisk). That would be awesome.


Massive bravo. Thought about the need for this for ages.


For mac: go get -u github.com/akavel/up


Wow, how reckless! Running partial command-line constructions on every keystroke...

The obvious thing to do is only run what's been typed so far on a trigger key like tab or something.


this looks great! installing! my one gripe is that the name really should have been marIO ;)



