I mean significantly more verbose and more complicated. Feel free to try it yourself - replicate `bash -eo pipefail -c 'mysqldump example_db | gzip > dump.sql.gz'` in Python with streaming (databases can be several GB, so you can't read it all into memory).

I have not tried any of these libraries, though they look nice. The only times I've had to write scripts that make heavy use of subprocesses and pipes, I've wanted them to work with only the standard library so I can just rsync them and they just work™.
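
For the record, a stdlib-only version of that mysqldump pipeline might look something like the sketch below (untested; assumes mysqldump and gzip are on the PATH). The data streams through an OS pipe, so nothing is held in memory, and checking both exit codes stands in for pipefail:

  import subprocess

  with open("dump.sql.gz", "wb") as out:
      dump = subprocess.Popen(["mysqldump", "example_db"],
                              stdout=subprocess.PIPE)
      zipper = subprocess.Popen(["gzip"], stdin=dump.stdout, stdout=out)
      dump.stdout.close()  # so mysqldump sees SIGPIPE if gzip exits early
      rc_zip = zipper.wait()
      rc_dump = dump.wait()
      # pipefail: fail if either command failed, not just the last one
      if rc_zip != 0 or rc_dump != 0:
          raise SystemExit("pipeline failed")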




You have to dig through the documentation a bit on the website, but all the info is there[1]:

  from sh import mysqldump
  from sh import gzip

  gzip(mysqldump("example_db", _piped=True), _out="dump.sql.gz")
or if you don't want the magic import thing:

  import sh
  sh.gzip(sh.mysqldump("example_db", _piped=True), _out="dump.sql.gz")
It's definitely foreign if you're coming from shell scripting, since the piping syntax is a bit different and you have to remember `_piped=True` because commands don't run in parallel by default. But IIRC the default behavior is pipefail plus exit-on-failure, so it's more of a choose-your-poison thing: do you want a subtle perf issue (forgetting `_piped=True` serializes the pipeline) or a subtle bug from your script not handling error states correctly (the shell's defaults)? And I find it easier to read. Plus, if you want to customize anything about gzip, or avoid relying on the binary being on the PATH, you can switch to Python-native gzip pretty easily.
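
For example, a rough sketch of that last swap (untested): keep the dump as a plain subprocess but compress in-process with the stdlib gzip module, so no gzip binary is needed:

  import gzip
  import shutil
  import subprocess

  dump = subprocess.Popen(["mysqldump", "example_db"],
                          stdout=subprocess.PIPE)
  # compress in-process instead of shelling out to a gzip binary
  with gzip.open("dump.sql.gz", "wb") as out:
      shutil.copyfileobj(dump.stdout, out)  # streams in fixed-size chunks
  if dump.wait() != 0:
      raise SystemExit("mysqldump failed")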

That's my favorite feature: conciseness when I'm just translating a shell script, with progressive options to handle the parts that need more complexity or different requirements within the same script, without rewriting it from scratch.

[1] https://amoffat.github.io/sh/



