I was tempted to poke a little fun at you for having trouble with sed -- among my peers, 's/foo/bar/g' is used pretty much daily, even just in informal emails/conversations.
But I'm glad I looked at the list first, because I found this wonder: `lstopo --of txt`. I'm going to use this in class next week, I can't believe I've never seen that before.
My biggest hardship with sed is having to figure out how to escape the operators within regular expressions on the command line. I think it's pretty reasonable to spend 15+ minutes on a complicated replacement regex.
Edit: It looks like I agree with you; `replace` only solves trivial problems that can easily be done with `sed`.
You can use single quotes to escape everything except single quotes from the shell. Then use backslashes to escape single quotes where needed.
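For example, to replace can't with cannot, you close the quotes, emit a backslash-escaped quote, and reopen them (file.txt is just a stand-in for whatever you're editing):
sed 's/can'\''t/cannot/g' file.txt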
If you haven't taken the time, I'd say it's well worth learning exactly how the shell escapes work. It's surprisingly simple and natural once you get used to it.
Thanks for the advice; I should block some time to fully grok shell escapes.
Even with single quotes, sed still requires escaping parentheses, the + operator, and other operators that would otherwise be interpreted literally (but not all operators should be escaped). In the languages in which I learned regexes, this wasn't required.
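For instance, in GNU sed's default basic-RE mode, grouping and + need backslashes to act as operators, while extended-RE mode (-E) behaves like most regex-capable languages:
echo aaa | sed 's/\(a\)\+/X/'
echo aaa | sed -E 's/(a)+/X/'
Both print X; the same characters flip between literal and special depending on the mode, which is exactly the confusion.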
Oh, the basic syntax is fine. I'm a Vim user ;) And even before that, s///g was also part of my slang.
My issue is remembering how to invoke the command - whether it edits in place by default, what the order of arguments is, etc. To be fair, that's not necessarily that much easier with a dedicated replace command either.
Not covered in the article, but `rename` is a multi-file renamer. I wrote a script which used an `echo $f | sed | mv` loop before I learned about this.
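Roughly what such a loop looks like (s/foo/bar/ is just a placeholder pattern):
for f in *foo*; do
    mv -- "$f" "$(printf '%s\n' "$f" | sed 's/foo/bar/')"
done
rename collapses that into a single command.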
Be careful: there are at least two implementations of rename in different Linux distros. I've run into one written in Perl that uses regexes, on Debian (e.g. `rename 's/foo/bar/' * `), and another that uses simple strings (`rename foo bar * `), on CentOS.
This name clash means the Perl module cannot easily be used on CentOS, since the CentOS utility is required for building RPM packages.
You'd have to edit Makefile.PL of the Perl module before installation, to avoid that clash. (I'd prefer to keep the name as "rename.pl", in this case; though file-rename, as on Windows, would work too.)
Reminds me of my sysadmin days when I was trying to learn shell scripting and would look up man pages of random binaries in /usr/bin. Before we could bring down a box for nightly maintenance/backup, we would send a shutdown broadcast notice like:
banner "Shutdown"|wall; banner "in 5 mins"|wall
I found out about the `write` command and set up a hacky little one-to-one chat script - it was terrible and geeky but I really liked it.
I'm probably forgetting some more obscure commands that I enjoyed :)
PS: forgot to mention that when I told my boss about the `write`-based scripts, he gloated that he had set up a quiz program which supported multiple participants using named pipes. Awesome guy :)
ISTR it was Æleen Frisch (author of Essential System Administration) who recommended that all *nix admins take one day a year to read through all of manual sections (1) and (8), which I implemented thus:
$ cd /usr/bin ; man *
(and repeat for other bindirs)
You will be amazed at what you've forgotten, and what your system can do.
I was thinking about something along these lines:
get a random number (say by dd'ing from /dev/urandom with count=1 bs=2, or using shuf), take it modulo the number of entries in /usr/bin to get an index M, then find the Mth entry of the ls and show its man page. A file containing the titles of already-shown man pages can be used to exclude duplicates.
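A rough sketch of that idea, using shuf instead of the dd/modulo dance (the seen-file name is arbitrary):
seen="$HOME/.seen_manpages"
touch "$seen"
# pick a random entry of /usr/bin not yet recorded in $seen
cmd=$(ls /usr/bin | grep -vxFf "$seen" | shuf -n 1)
[ -n "$cmd" ] && echo "$cmd" >> "$seen" && man "$cmd"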
Actually, I could not find anything relevant by searching the man page for "file" and "next". I guess moving on to the next file is purely the responsibility of the pager, which presumably is not part of man itself.
After reading this I'm reminded that if an IDE just supported replacing highlighted text after it was piped through a command, it would have a lot going for it. A lot of these are features that some IDEs have been lacking for years, yet they've been hiding away in /bin.
While editing binary files in Vim, one needs to be careful to set the 'binary' option first or to open the file with the `-b` option.
For example,
printf "\x00\x01\x02\x03" > a.bin
vim a.bin
:%!xxd -g1
would display
0000000: 00 01 02 03 0a .....
(The `printf` and `vim` commands are entered in the shell. The `:%!xxd` command is entered in Vim.)
In the above example, a newline has been added by Vim where none existed in the binary file. In fact, if you then run `:%!xxd -r` followed by `:w`, this additional newline is written back to the file. Here is the right way to preserve the binary data in a binary file:
printf "\x00\x01\x02\x03" > a.bin
vim -b a.bin
:%!xxd -g1
would display
0000000: 00 01 02 03 ....
Only the four bytes in the file appear now. Alternatively, one may enter `:set bin` in Vim before editing a binary file.
In my previous job I wrote an extension for Visual Studio to do exactly that: call a program, pass the selection to its stdin, and replace the selection with the output.
Then I wrote a bunch of Python scripts to do things like recalculating command IDs for wxWidgets, generating the registration, header, and body parts from event definitions (which I cut/pasted later, but it was much faster than doing it by hand), aligning some declarations to fit the code style, etc.
The extension would also scan a folder inside My Documents and register any executable, Python script, Bash script, or Tcl script in a menu in the IDE. Many people in the studio knew Python and some knew Bash (I think I was the only one who used Tcl, though; I mainly used it to quickly search the filenames in the project and open a file in VS or Vim. I made it search as I typed, and since we had a more or less 1-to-1 mapping between objects and files, I could search for object definitions faster than with VS's search functionality - especially when VS was busy building or doing stuff).
MPW - Macintosh Programmer's Workshop, circa 1988 (when I first used it) - had the notion that selections within windows could be treated as just another file. It had a lot of POSIX-y commands but lacked multitasking, and all pipes were done via temp files, alas.
On Linux, `ls -l /proc/$$/fd` will show you which files correspond to which file descriptors for your current shell process. You can then experiment with redirects. To see how a pipe works, we can do something similar:
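$ ls -l /proc/self/fd | cat
(/proc/self points at the process reading it - here the ls; since it runs inside a pipeline, its fd 1 shows up as a pipe rather than as the terminal.)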
Anyway, thinking of `>` and `<` as just FD setters made things a lot clearer for me personally. They just set the file descriptor on the left to the file on the right, and `&n` can be used to "dereference" an FD to its corresponding file. The only real difference between `>` and `<` is that the former opens the file for writing while the latter opens it for reading. Well, that and the default FDs are different.
Incidentally, this makes it clear why the following doesn't work as expected:
$ curl 'https://kernel.org/' >/dev/null 2>&1 | tee curl.errs
Since FDs get set in the order they are specified, 1 gets set to /dev/null and then 2 gets set to whatever 1 is, i.e. /dev/null. So it's clear that we probably intended the following:
$ curl 'https://kernel.org/' 2>&1 >/dev/null | tee curl.errs
Which lets us discard stdout and pipe stderr on to `tee` (or some other command).
Now I hope Linus doesn't hate on me for abusing kernel.org :P
Edit: I've done some more testing, and discovered that the above works on zsh, but not in bash
2nd Edit: Aha! http://www.cs.elte.hu/zsh-manual/zsh_7.html . So this is because zsh with the stock config (MULTIOS option enabled) will open as many outputs as you give it, so it can copy FD 1's contents both to FD 2 and to the piped command.
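A quick way to see MULTIOS in action from a zsh prompt:
setopt MULTIOS
echo hi >out1 >out2
echo hi >out1 | cat
The first echo writes to both files; the second writes to the file and to the pipe.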
Wow this list is tremendous! It's fun to still discover new commands after all these years.
I actually used `taskset -c` just recently, when I was running lots of sidekiq processes and wanted to make sure they all used different cores. It helped me get full utilization out of the box, whereas before I would frequently see some sidekiqs competing for a core while other cores sat idle.
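For what it's worth, the invocation is as simple as it sounds; with a placeholder worker command:
taskset -c 0 worker &
taskset -c 1 worker &
An already-running process can be re-pinned with `taskset -pc 2 <pid>`.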
Be careful with taskset and chrt: assigning realtime priority to processes and pinning them to CPUs might lead to unexpected behavior, priority inversion, or your system locking up (say, if you accidentally pgrep some kernel tasks/threads in there as well - I've done that before).
Thank you for the warning! I can see how that could happen with chrt. I can't work out how it could happen with just taskset. Are you saying to be careful when combining them? That would make sense to me.
Also I can't figure out what pgrap is. Do you mean pgrep? I think you must mean something else, because I don't see what pgrep has to do with priority inversion. But it shares a manpage with pkill...
I don't do real-time work, but I've always been curious to know more about it. And priority inversion seems like an interesting and challenging problem.
> Also I can't figure out what pgrap is. Do you mean pgrep?
Sorry meant pgrep.
> I don't see what pgrep has to do with priority inversion.
Grepping processes doesn't have anything to do with priority inversion or locking up your system. But grepping processes and then setting them all to realtime priority (SCHED_FIFO 99 or so) could wreak havoc.
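The footgun looks something like this (don't run it; "worker" is a made-up process name):
# pgrep can match kernel threads too, and chrt -f -p 99
# gives every match top SCHED_FIFO priority
for pid in $(pgrep worker); do chrt -f -p 99 "$pid"; done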
> I don't do real-time work, but I've always been curious to know more about it. And priority inversion seems like an interesting and challenging problem.
It can be fun. It is kind of a separate world of its own. Interestingly, most of the popular OSes are tuned by default for throughput, not low latency. So configuring and getting everything just right is an interesting challenge (sometimes involving applying kernel patches, although lately that might not be necessary, as some have been mainlined).
A lot of really cool stuff here! Shameless plug: I have a small side-project site that goes through a lot of common activities on the command line: https://cmdchallenge.com. Some great ideas here that I think I will incorporate.
Going off a comment on the article that used mainly awk to find your most-used commands, I expanded it a little bit, as most commands I use take a first argument that changes how they work. (Aliasing this command might not be a bad idea.)
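The original one-liner isn't quoted here, but the usual awk-over-history approach, extended to include the first argument, looks something like this (assuming bash's `history` output, where the command name is field 2):
history | awk '{print $2, $3}' | sort | uniq -c | sort -rn | head -20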
In early versions, DOS pipes weren't real pipes: DOS would run the first command and save its output to a temporary file, then, once that finished, run the second command with the file as input. I'm not sure when it gained real pipes.
The eventual solution using SetNamedPipeHandleState isn't really supported: https://msdn.microsoft.com/en-us/library/windows/desktop/aa3... "Note that nonblocking mode is supported for compatibility with Microsoft LAN Manager version 2.0 and should not be used to achieve asynchronous input and output (I/O) with named pipes."
So? The article mentions pipes as a distinctive feature of *nix systems. Neither it nor I claimed who had it first. The truth is piping from devices, programs and files is pretty much the same in Windows :)
In Windows, specifically using |, the pipe is anonymous[0]. Named pipes can be created, but named pipes (FIFOs) on Windows are substantially different from those on Unix. Named pipes are not relevant to the pipes my reference above introduces in Windows 95's DOS system.
FWIW, `wc -l` in macOS Sierra 10.12.3 reports 1059 entries. There are quite a few programs that are Apple-specific (macbinary, appletviewer), but many of the ones talked about in the article are here too.
Consider the headlined article, for starters. A simple
ln -s /etc/passwd /tmp/test_at.log
which any unprivileged user can quietly do, before running its examples for at, will ruin the system administrator's day just after 21:26.
This is the basis for many of the problems with files in /tmp: predictable filenames written to, without care, by privileged processes. It is a widespread disease, and it is the reason that the systemd people arrange to run many programs with PrivateTmp=true.
In this particular case, root's own home directory would have made a much better private playground for the test_at.log file.
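And when a scratch file under /tmp is genuinely needed, mktemp sidesteps the predictable name entirely by creating the file itself with a random suffix ("command" below is a placeholder):
log=$(mktemp /tmp/test_at.XXXXXX)
command > "$log"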
Can't believe I've been running Linux for 16 years and I didn't know about that one.