Ever wanted to know the progress of a long-running operation? Copying a file, importing a MySQL database, etc. Pipe Viewer (pv) is what you need: http://www.ivarch.com/programs/pv.shtml.
It lets you monitor the progress of any piped command and gives you time elapsed, speed, time left, and a progress bar (wget style).
Pipe Viewer dramatically increased my productivity for large scale data processing. In particular, it lets you quickly know whether something will take 5 minutes, or 2 hours, so you can plan accordingly. It's painful watching people try to do this without pv.
S3 Tools (s3cmd) provides a great suite of commands for interacting with S3, and is best used on an EC2 instance you're connected to via SSH. It's also ridiculously fast: much faster than interacting with S3 from a local FTP browser, or even from Amazon's own S3 dashboard. For example, making 14,000 files on S3 public takes me about 30-45 minutes through Amazon's web-facing dashboard, but only a few minutes with the command line tool running on one of my EC2 instances.
I assume this is because the traffic stays inside Amazon's network. Anyway, if you're ever in a bind and need to move a bunch of files to S3, I highly recommend S3 Tools. It has saved me many times.
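A sketch of the s3cmd invocations involved (bucket name and paths here are illustrative; run `s3cmd --configure` once first to store your credentials):

```shell
# upload a directory tree to a bucket
s3cmd put --recursive ./images/ s3://my-bucket/images/

# make every object under that prefix public in one pass,
# instead of clicking through thousands of files in the dashboard
s3cmd setacl --acl-public --recursive s3://my-bucket/images/
```

Running these from an EC2 instance rather than your laptop keeps the traffic inside Amazon's network, which is where the speedup comes from.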
Along those lines, wget is the most powerful command line tool I've ever used. Combined with S3 Tools it's simply incredible: you can easily grab gigabytes of images off a personal server or staging location and upload them to S3 very quickly.
And if you need to do more than move files around you can manage even more aspects of AWS including EC2 instances from the command line using this powerful command line tool:
Good point. I guess in the past I have spun up a micro instance to run it, so I didn't have to run my S3 operations using one of my main production servers, and hence I didn't run into any problems with its unthrottled speed.
I prefer curl'ing icanhazip.com, and I make it a quick alias in my .bashrc:
alias myip="curl icanhazip.com"
Apachebench (ab) is a decent alternative to siege, and knowing how to use tcpdump and netcat comes in handy for debugging. Other than that, my new favorite command line tool over the past several months has to be vagrant, which lets you script and streamline VM creation and builds from the command line. If I need to completely reproduce my production environment on a test box, it's my utility of choice.
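A minimal vagrant session looks something like this (the box name and URL are illustrative, not specific to any particular setup):

```shell
# write a Vagrantfile pointing at a base box
vagrant init precise64 http://files.vagrantup.com/precise64.box

vagrant up       # boot the VM and run any provisioning
vagrant ssh      # shell into the running box
vagrant destroy  # throw the whole environment away when done
```

The Vagrantfile is where you'd script the provisioning that reproduces a production environment.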
My most used CLI tool outside of the default nuts and bolts is dtrx, the best and easiest file extractor for *nix. No more fiddling with flags or looking up tar invocations; it handles archives that would scatter files across the current directory, or that extract with the wrong permissions. It has saved me a ton of time over the years.
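Usage is as simple as it gets; there are no flag tables to memorize (archive names here are illustrative):

```shell
# dtrx detects the archive type itself, always extracts into its own
# directory, and fixes unreadable permissions on the way out
dtrx backup.tar.gz
dtrx vendor-drop.zip
```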
Wow! Just wow! Bookmarked! A mere upvote is not enough for this! Thank you!!!
Questions:
- Is there a site pointing out amazing command line tools like this? (their existence as opposed to usage examples like at http://www.commandlinefu.com/)
- Is there a site listing the OS X equivalents for Linux command line tools? (e.g. what's the OS X equivalent for dos2unix / flip?)
Yep, check out http://onethingwell.org, which is exactly that: software (including CLI tools) that does one thing well, per the UNIX philosophy. It's nicely tagged as well, so you can click the 'osx' tag and get tools just for OS X, and so forth.
I actually found dtrx on Hacker News a few years ago, though it's also on One Thing Well, I believe.
lftp is a practical SFTP, FTPS and FTP transfer program, including automatic upload/download resumption and synchronization (mirror) mode. Good for both interactive use and scripting.
curl -I and wget -S are particularly helpful when debugging redirects.
Sometimes I migrate URL schemes, and set up permanent redirects in my .htaccess files. Testing them in a browser is a real pain, because browsers cache the redirect (which is the point of having a permanent redirect), so even if you change the .htaccess, you still get the old response. And pressing the refresh button is no help, because that reloads the destination page, not the source of the redirect.
That's when a non-caching command line client saves your day.
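For example, a header-only request shows you the 301 and its Location header directly, with no browser cache in the way (the URL is illustrative):

```shell
# -I sends a HEAD request and prints only the response headers,
# so you see the redirect itself rather than the destination page
curl -I http://example.com/old-path

# wget -S prints the server response headers as it follows
# the redirect chain, without keeping the downloaded page
wget -S -O /dev/null http://example.com/old-path
```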
Screen is a bit like a window manager for consoles. I use it to start multiple (software) servers without daemon mode. I can then switch between their outputs with screen.
Screen also detaches the console from your SSH session. In my case this means the servers keep running if I lose my SSH connection to the (hardware) server.
It's a very handy tool and definitely belongs in the article's list of tools.
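The detach/reattach workflow described above, roughly (the session name is illustrative):

```shell
screen -S servers   # start a named session; launch your servers inside it
# press Ctrl-A d to detach; everything inside keeps running
screen -ls          # after logging back in, list running sessions
screen -r servers   # reattach and pick up where you left off
```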
(Sort of) in the same vein, I've recently started using xmonad as my window manager, and so far it's a lot more comfortable than the Ubuntu default. You may need to learn a teeny bit of Haskell to get up and running, but so far I've been OK copy-pasting from sample configs and muddling through.
When I bought a new MacBook and wasn't able to run Linux, XMonad was really the only thing I missed. No matter how much Apple focuses on its interface and making it easy to use, it's still incredibly slow and unintuitive to me.
I hate that the default escape sequence clobbers Ctrl-A (beginning-of-line in the shell), though, so the first thing I always have to do when I log into a new server or account is this:
$ echo 'escape ^uU' > ~/.screenrc
Or I quickly start tearing my hair out and screaming profanities every time I try to do something.
screen is one of those things that's been on my todo list for way too long, along with its alternatives(?), tmux and byobu. Has anyone used all three who can offer a comparison?
byobu is something like a theme or packaged config for screen, so it doesn't really need to be treated separately.
The biggest difference between tmux and screen is that tmux is a lot more flexible with regard to laying out groups of sub-terminals within the main terminal. screen is mostly limited to one-at-a-time, horizontal splitting or (in very recent versions) vertical splitting, while tmux lets you go nuts: http://tmux.sourceforge.net/tmux3.png
Also, GNU screen is very old and stable, while tmux is (so far) still new and flexible. For example, tmux very quickly added support for handling Unicode characters beyond U+FFFF, a feat that (so far as I know) screen still can't manage. That's only one example, but I'm sure there'll be more as time goes on.
EDIT: One other thing that tmux does that makes it better than screen: when I start up a tmux session from within an X11 session, tmux clears the $DISPLAY variable so that processes running inside tmux don't try to connect to the original X server - which may very well have gone away at that point. It's a small thing, but incredibly annoying when it happens.
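That layout flexibility is scriptable from the shell, too; a quick sketch (session name is illustrative, default prefix is Ctrl-b):

```shell
tmux new-session -d -s work      # start a detached session named 'work'
tmux split-window -h -t work     # split the active pane side by side
tmux split-window -v -t work     # stack another pane below it
tmux select-layout -t work tiled # rearrange all panes into a grid
# interactively: prefix+% and prefix+" do the same splits
```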
I switched from screen to tmux a while back. The key thing for me is that configuration and manipulations/operations in general are more user-friendly. Otherwise, in terms of the basic use cases, they are essentially the same.
I have used screen for many years and decided to finally give tmux a try about 2 months ago. I really enjoy it. It took some adjusting, but overall I like it better.
Things like this are largely due to personal preferences, though. All I can tell you is that I am happier with tmux than I was with screen. It is more modern, and the split screening capabilities are better.
I switched from screen to tmux about three months ago. It's easier to configure, does vertical splits, and is under more active development (apparently).
It's been mentioned on Hacker News a few times before, but my project PageKite (and showoff.io and localtunnel) is designed to help out with quick web demos and collaboration.
... answer a few questions and whatever is running on port 80 will be visible as https://SOMENAME.pagekite.me/ within moments, almost no matter what kind of network connection you have. :-)
There are also .deb and .rpm packages available for heavier users.
ngrep was news to me. I think I've heard of it once before, recently, but didn't really realize what it was. Having not tried it, it sounds like a really nice option when something like Wireshark or Ethereal would be overkill or just too much effort to bother with.
I was surprised that he ran siege against www.google.co.uk without some kind of "don't do this" disclaimer to new users. Running it against other people's websites is pretty poor form.
If you run it against a public site, that may be what happens. However it's intended for stressing your own sites in a test environment, in which case you have full control over the config. Setting up your test environment to block your stress tester as a DoS offender would just be silly.
Copy a file:

Import a MySQL db:

More tricks: http://blog.urfix.com/9-tricks-pv-pipe-viewer/

edit: To install on OS X just do