rav's comments | Hacker News

You can rewrite it using let-or-else to get rid of the unwrap, which some would find to be more idiomatic.

    let value = Some(param.as_ref()) else {
      // Handle, and continue, return an error etc
    };
    // Use `value` as normal.


Small nit: the Some() pattern should go on the left side of the assignment:

    let Some(value) = param.as_ref() else {
      // Handle, and continue, return an error etc
    };
    // Use `value` as normal.


That part of the article intrigued me: I hadn't heard of that syntax! Will try.


At the moment it seems like you can avoid AI search results either by including swear words or by using 'minus' to exclude terms (e.g. append -elephant to your search query).


I have done that on occasion, but it's easier to just scroll a bit.


I often run rm -rf as part of operations (for reasons), and my habit is to first run "sudo du -csh" on the paths to be deleted, check that the total size makes sense, and then up-arrow and replace "du -csh" with "rm -rf".
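A minimal sketch of that habit (throwaway paths, and without the sudo):

```shell
# Measure first, delete second: if the total du reports looks wrong,
# you've caught a bad path before rm -rf could do any damage.
targets="/tmp/demo-cache /tmp/demo-logs"
mkdir -p $targets
du -csh $targets            # check the 'total' line looks sane
rm -rf $targets             # then up-arrow and swap the command
```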


Use trash-cli and additionally git commit the target if you’re nervous before deletion.


My suggestion: Change the color of visited links! Adding a "visited" color for links will make it easier for visitors to see which posts they have already read.


On a small team I usually already know who wrote the code I'm reading, but it's nice to see if a block of code is all from the same point in time, or if some of the lines are the result of later bugfixing. It's also useful to find the associated pull request for a block of code, to see what issues were considered in code review, to know whether something that seems odd was discussed or glossed over when the code was merged in the first place.


I find the GitHub blame view indispensable for this kind of code archeology, as it also gives you an easy way to traverse the history of lines of code. In blame, you can go back to the previous revision that changed the line and see the blame for that, and on and on.

I really want to find or build a tool that can automatically traverse history this way, like git-evolve-log.
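Until such a tool exists, plain git can get partway there: "git log -L" follows the history of a line range (or a function) across commits. A self-contained sketch with a throwaway repo:

```shell
# Throwaway repo to demonstrate following a line range through history.
cd "$(mktemp -d)"
git init -q
printf 'one\ntwo\nthree\n' > notes.txt
git add notes.txt
git -c user.email=a@b -c user.name=demo commit -qm 'add notes'

# Show every commit that touched lines 1-2 of notes.txt, with the relevant hunks:
git log -L 1,2:notes.txt

# Functions can also be followed by name instead of a line range, e.g.:
#   git log -L :main:src/main.c
```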


I've been carrying around a copy of "git blameall" for years - looks like https://github.com/gnddev/git-blameall is the same one - that basically does this, but keeps it all interleaved in one output (which works pretty well for archeology, especially if you're looking at "work hardened" code.)

(Work hardening is a metalworking term: metal that is bent back and forth (or hammered) too much becomes brittle. An analogous effect shows up in code, where a piece of code that has been bugfixed a couple of times will probably need more fixes; there was a published result a decade or so back about using this to focus QA efforts...)


The "Cregit" tool might be of interest to you; it generates token-based (rather than line-based) git "blame" annotation views: https://github.com/cregit/cregit

Example output based on Linux kernel @ "Cregit-Linux: how code gets into the kernel": https://cregit.linuxsources.org/

I learned of Cregit recently and just submitted it to HN after seeing multiple recent HN comments discussing issues related to line-based "blame" annotation granularity:

"Cregit-Linux: how code gets into the kernel": https://news.ycombinator.com/item?id=43451654


There is https://github.com/emacsmirror/git-timemachine, which is really nice if you use Emacs.


Ooh, core.hooksPath is quite nifty. I usually use something like

         ln -sf ../../scripts/git-pre-commit-hook .git/hooks/pre-commit
which simply adds a pre-commit symlink to a script in the repo's scripts/ dir. But hooksPath seems better.
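For reference, a self-contained sketch of the hooksPath approach (the directory name is just a convention; pick whatever your repo uses):

```shell
# Throwaway repo: tell git to look for hooks in a tracked directory
# instead of the unversioned .git/hooks.
cd "$(mktemp -d)"
git init -q
mkdir -p scripts/git-hooks
git config core.hooksPath scripts/git-hooks
git config core.hooksPath   # prints: scripts/git-hooks
```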


I don't understand what a dedicated "completely open source offering" provides or what your "five nines feature flag system" provides. If you're running on a simple system architecture, then you can sync some text files around, and if you have a more scalable distributed architecture, then you're probably already handling some kind of slowly-changing, centrally-managed system state at runtime (e.g. authentication/authorization, or in-app news updates, ...) where you can easily add another slowly-changing, centrally-managed bit of data to be synchronised. How do you measure the nines on a feature flag system, if you're not just listing the nines on your system as a whole?


> If you're running on a simple system architecture,

His point was that even a feature flag system in a complex environment with substantial functional and system requirements is worth building vs buying. If your needs are even simpler, then this statement is even more true!

I'm having a hard time making sense out of the rest of your comment, but in larger businesses the kinds of things you're dealing with are:

- low latency / staleness: You flip a flag, and you'll want to see the results "immediately", across all of the services in all of your datacenters. Think on the order of one second vs, say, 60s.

- scalability: Every service in your entire business will want to check many feature flags on every single request. For a naive architecture this would trivially turn into ungodly QPS. Even if you took a simple caching approach (say cache and flush on the staleness window), you could be talking hundreds of thousands of QPS across all of your services. You'll probably want some combination of pull and push. You'll also need the service to be able to opt into the specific sets of flags that it cares about. Some services will need to be more promiscuous and won't know exactly which flags they need to know in advance.

- high availability: You want to use these flags everywhere, including your highest availability services. The best architecture for this is that there's not a hard dependency on a live service.

- supports complex rules: Many flags will have fairly complicated rules requiring local context from the currently executing service call. Something like: "If this customer's preferred language code is ja-JP, and they're using one of the following devices (Samsung Android blah, iPhone blargh), and they're running versions 1.1-1.4 of our app, then disable this feature". You don't want to duplicate this logic in every individual service, and you don't want to make an outgoing service call (remember, H/A), so you'll be shipping these rules down to the microservices, and you'll need a rules engine that they can execute locally.

- supports per-customer overrides: You'll often want to manually flip flags for specific customers regardless of the rules you have in place. These exclusion lists can get "large" when your customer base is very large, e.g. thousands of manual overrides for every single flag.

- access controls: You'll want to dictate who can modify these flags. For example, some eng teams will want to allow their PMs to flip certain flags, while others will want certain flags hands off.

- auditing: When something goes wrong, you'll want to know who changed which flags and why.

- tracking/reporting: You'll want to see which feature flags are being actively used so you can help teams track down "dead" feature flags.

This list isn't exhaustive (just what I could remember off the top of my head), but you can start to see why they're an endeavor in and of themselves and why products like LaunchDarkly exist.


> if you're not just listing the nines on your system as a whole

At scale the nines of your feature flagging system become the nines of your company.

We have a massive distributed systems architecture handling billions in daily payment volume, and flags are critical infra.

Teams use flags for different things. Feature rollout, beta test groups, migration/backfill states, or even critical control plane gates. The more central a team's services are as common platform infrastructure, the more important it is that they handle their flags appropriately, as the blast radius of outages can spiral outwards.

Teams have to be able to competently handle their own flags. You can't be sure what downstream teams are doing: if they're being safe, practicing good flag hygiene, failing closed/open, keeping sane defaults up to date, etc.

Mistakes with flags can cause undefined downstream behavior. Sometimes state corruption (e.g. with complicated multi-stage migrations) or even thundering herds that take down systems all at once. You hope that teams take measures to prevent this, but you also have to help protect them from themselves.

> slowly-changing, centrally-managed system state at runtime

With flags being so essential, we have to be able to service them with near-perfect uptime. We must be able to handle application / cluster restart and make sure that downstream services come back up with the correct flag states for every app that uses flags. In the case of rolling restarts with a feature flag outage, the entire infrastructure could go hard down if you can't do this robustly. You're never given the luxury of knowing when the need might arise, so you have to engineer for resiliency.

An app can't start serving traffic with the wrong flags, or things could go wrong. So it's a hard critical dependency to make sure you're always available.

Feature flags sit so closely to your overall infrastructure shape that it's really not a great idea to outsource it. When you have traffic routing and service discovery listening to flags, do you really want LaunchDarkly managing that?


Oh wow, today I learned about env -S - when I saw the shebang line in the article, I immediately thought "that doesn't work on Linux, shebang lines can only pass a single argument". Basically, running foo.py starting with

    #!/usr/bin/env -S uv run --script
causes the OS to run env with only two arguments, namely the rest of the shebang line as one argument and the script's filename as the second argument, i.e.:

    /usr/bin/env '-S uv run --script' foo.py
However, the -S flag of env causes env to split everything back into separate arguments! Very cool, very useful.
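You can watch the re-splitting happen directly (GNU env behavior; the quoted string mimics the single argument the kernel would pass):

```shell
# The kernel hands env everything after '#!/usr/bin/env' as ONE argument;
# GNU env's -S flag re-splits it into separate words before executing.
/usr/bin/env '-S echo hello from a split shebang'
# prints: hello from a split shebang
# Without -S, env would look for a program literally named
# 'echo hello from a split shebang' and fail.
```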


It's frustrating this is not the same behavior on macOS: https://unix.stackexchange.com/a/774145


It seems to me that macOS has env -S as well, but the shebang parsing is different. The result is that shebang lines using env -S are portable as long as they don't contain quotes or other special characters. The reason is that running env -S 'echo a b c' has the same behavior as running env -S 'echo' 'a' 'b' 'c', so simple command lines like the one with uv are still portable, regardless of whether the OS splits on spaces (macOS) or not (Linux).
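A quick way to convince yourself of the equivalence (GNU env shown; the quoted form mimics Linux's single-argument shebang handling, the unquoted form mimics macOS's space-splitting):

```shell
# Both invocations end up running: echo a b c
/usr/bin/env '-S echo a b c'   # Linux-style: env receives one argument and splits it
/usr/bin/env -S echo a b c     # macOS-style: already split; remaining args pass through
```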


This is true. For example, the following shebang/uv header works on both macOS and Linux:

  #!/usr/bin/env -S uv --quiet run --script
  # /// script
  # requires-python = ">=3.13"
  # dependencies = [
  #     "python-dateutil",
  # ]
  # ///
  #
  # [python script that needs dateutil]


Very informative. Thank you!


True, this should be fine for straightforward stuff, but it becomes extremely annoying as soon as you have, e.g., quoted strings with whitespace in them, which is where it breaks. Have to keep that difference in mind when writing scripts.

The link I posted in my original reply has a good explanation of this behavior. I was the one who asked the question there.



`brew install coreutils` and update your `PATH`.


I'm aware of this package for getting other utilities but:

1. I'm worried about this conflicting with or causing other programs to fail if I put it on PATH.

2. This probably doesn't fix the shebang parsing issue I mentioned, since it's an OS thing. Let me know if that's not the case.


You've got nothing to worry about.

I've been doing it for more than a decade and have yet to get in trouble. Not one issue. I do it consistently for my teams to decrease cognitive load (developing on Macs but targeting Unix). Others confirm: https://news.ycombinator.com/item?id=17943202

Basically, software will either use absolute paths (i.e. it wants your OS's version of a dependency like grep), or it will use whatever grep is in your $PATH and stick to safe invocations, regardless of whether it's BSD/GNU or version x or y.


Hmm, I haven’t run this experiment myself, but I have in the past faced problems overriding the default python/ruby commands in PATH that caused some stuff to fail, and I had to add some specific overrides for the `brew` command, for example.

> Basically software will either use absolute paths

I’ve personally written scripts that break this assumption (that’s a me problem, I guess) so I am quite sure there’s a lot of scripts at the very least that do this.

Nevertheless, you’ve given me something to consider.


The PATH is irrelevant; this is about how the kernel parses the shebang. It executes exactly /usr/bin/env with two arguments, not some other env binary you might have in your PATH.


You can also brew install the GNU tools package and have both side by side for compatibility (GNU tools are prefixed with 'g': gls, gcat, etc.).

I have a script that toggles the prefix on or off via bash aliases for when I need to run Linux bash scripts on a mac.


And note that on Android, env doesn't live in /bin/.


Reminds me of how GNU Guile handles the one argument limitation - with "multi-line" shebang[1].

  #!/usr/bin/guile \
  -e main -s
  !#
turns into

  /usr/bin/guile -e main -s filename
Wonder why they bothered.

Probably env -S is a recent addition. Or not available on all platforms they cared about.

[1]: https://www.gnu.org/software/guile/manual/html_node/The-Meta...


Somewhat unrelated, I guess, but we used to use a split line shebang for tcl like the following

    #!/bin/sh
    # A Tcl comment, whose contents don't matter \
    exec tclsh "$0" "$@"
- The first line runs the shell

- The second line is treated like a comment by the shell (and Tcl)

- The third line is executed by the shell to run Tcl with all the command line args. But then Tcl treats it as part of the second line (a comment).

Edit: Doing a bit of web searching (it's been a while since I last had the option to program in Tcl), this was also used to work around line length limitations in shebang. And also it let you exec Tcl from your path, rather than hard code it.


I like using this with tusk, which is a Go CLI a bit like make, but it uses YAML for the config. The shebang is

      #!/usr/bin/env -S go run github.com/rliebz/tusk@latest -f
Then use gosh, a Go shell, for the interpreter:

      interpreter: go run mvdan.cc/sh/v3/cmd/gosh@latest -s
This makes it a CLI you can run anywhere, on any architecture with Go installed.


If the wrapper itself cooperates, you can also embed more information in the following lines. nix-shell for example allows installing dependencies and any parameters with:

    #!/usr/bin/env nix-shell
    #!nix-shell --pure -i runghc ./default.nix
    ... Any Haskell code follows



env -S should never have been necessary. The strange whitespace-splitting rules of the shebang line are an old bug that has matured into an unfixable wart marring the face of Unix forever. Every time I have to use tricks like the above, I'm reminded that half an hour of work in the 1980s would have saved years of annoyance later. Shebang lines should have always split like /bin/sh.


If you need more than what shebang allows, you're probably better off writing a regular shell script and doing whatever you need in shell IMO.


Send your patches to Linux and BSD kernel mailing lists.


It cannot be fixed now. It would break things.


Yeah, it is very useful and allows environment variables, so you can do

   #!/usr/bin/env -S myvar=${somevar} ${someprefix}/bin/myprogram
However, as another commenter wrote, support is not universal (it looks present in RHEL 8 but not RHEL 7, for instance). Also, the max length of a shebang line is usually limited to about 127 characters.

So sometimes you have to resort to other tricks, such as polyglot scripts:

   #!/usr/bin/sh
   """exec" python --whatever "$0" "$@"
   Well this is still a Python docstring
   """
   print("hello")

Or classically in Tcl:

   #! /usr/bin/sh
   # Tcl can use \ to continue comments, but not sh \
   exec tclsh "$0" "$@" # still a comment in Tcl
   puts "hello"
Such things are not usually needed, until they are, and they make for fun head-scratching moments. I would personally recommend against them if they can be avoided, as they are relatively fragile.

I'll leave the self-compiling C language script "shebang" as an exercise to the reader ;)


Yeah unfortunately support for that is kind of spotty, so don't do this in any scripts you want to work everywhere.


It would be nice if uv had something like uvx but for scripts...uvs maybe? Then you could put it as a single arg to env and it would work everywhere.


Yeah, my first reaction was cool, what’s uv

Oh, yet another python dependency tool. I have used a handful of them, and they keep coming

I guess no work is important enough until it gets a super fast CLI written in the language du jour and installed by piping curl into sh


I believe parent comment was about `env -S` not being portable rather than `uv` being portable.

I'll say, I am as pessimistic as the next person about new ways to do X just to be hip. But as someone who works on many different Python projects day to day (from fully fledged services, to a set of lambdas with shared internal libraries, to CI scripts, to local tooling needing to run on developer laptops) - I've found uv to be particularly free of many sharp edges of other solutions (poetry, pipenv, pyenv, etc).

I think the fact that the uv tool itself is not written in Python actually solves a number of real problems around bootstrapping and dependency management for the tool that is meant to be a dependency manager.


> I think the fact that the uv tool itself is not written in Python

It's interesting that the language people choose to write systems with (Python) is basically identified as not the best language to write systems to support that language (Python).

To my knowledge, no other mainstream language has tooling predominantly written in another language.


Javascript has quite a lot of tooling written in other (better) languages.

I think Javascript and Python stand out because they are both extremely popular and also not very good languages, especially their tooling. You're obviously going to get a load of people using Javascript and Python saying "why is Webpack/Pip so damn slow? I don't have any choice but to use this language because of the web/AI/my coworkers are idiots, so I may as well improve the tooling".


I believe quite a bit of the JS tooling has been rewritten in other languages over the last decade or so


It's important to use any other language to avoid even the theoretical possibility of bootstrapping complications. All languages that produce self-contained compiled executables are equally suitable for the task.


gcc has been written in C++ for like a decade now, so it's not completely unusual.


It's a compiler for C, C++, and adjacent languages though.


Sure, but the point was a tool written in a different language. So the C part of gcc is another example of this.


It's, er, "funny" how people used to make fun of `curl | sh` because of how lame it was, and now you have it everywhere because Rust decided that this should be the install.


There's a recent rust available in Fedora. That's what I use.


You can also install rustup via your package manager and then use it as usual to manage your Rust installations. Though I guess in most cases, a single moderately recent Rust installation works just fine. But it's useful if you want/need to use Rust nightly for example.


> now you have it everywhere because Rust decided that this should be the install.

No, now you have it everywhere because the Linux community completely failed to come up with anything better.


What do you mean? Linux has multiple excellent packaging solutions, apt/dpkg, yum, PKGBUILD/pacman

If a developer can't be arsed to package their software, it's not the Linux community's fault


uv is the tool, finally. We've been waiting for two decades and it really does basically everything right, no ifs or buts. You can scratch off the 'yet another' part.


uv is not just a dependency tool. It handles packages and dependency management well, plus venvs, runtimes, and tools. It replaces all the other tools and works better in just about every way.


> Oh, yet another python dependency tool. I have used a handful of them, and they keep coming

Yeah that's my opinion of all the other Python dependency tools, but uv is the real deal. It's fast, well designed, it's actually a drop-in replacement for pip and it actually works.

> I guess no work is important enough until it gets a super fast CLI written in the language du jour and installed by piping curl into sh

Yeah it's pretty nice that it's written in Rust so it's fast and reliable, and piping curl into sh makes it easy to install. Huge upgrade compared to the rest of Python tooling which is slow, janky and hard to install. Seriously the official way to install a recent Python on Linux is to build it from source!

It's a shame curl | bash is the best option we have but it does at least work reliably. Maybe one day the Linux community will come up with something better.


What we have now, a load of different people developing a load of new (better!) tools, is surely what the PyPA had in mind when they developed their tooling standards. This is all going to plan. We've gotten new features and huge speedups far quicker this way.

I don't like changing out and learning new tools constantly, but if this is the cost of the recent rapid tooling improvements then it's a great price.

And best of all, it's entirely optional. You can even install it other ways. What exactly was your point here?


No love for the <plaintext> tag? "The <plaintext> HTML element renders everything following the start tag as raw text, ignoring any following HTML. There is no closing tag, since everything after it is considered raw text." - it's my favorite obscure deprecated HTML tag.


Fun fact: this is very close but slightly inaccurate. I used to think this is how it worked before scrutinizing a rule in the HTML tree-building specification.

The tag leads the parser to interpret everything following it as character data, but doesn’t impact rendering. In these cases, if there are active formatting elements that would normally be reconstructed, they will be reconstructed after the PLAINTEXT tag as well. It’s quite unexpected.

  <a href="https://news.ycombinator.com"><b><i><u><s><plaintext>hi
In this example “hi” will render with every one of the preceding formats applied.

https://software.hixie.ch/utilities/js/live-dom-viewer/?%3Ca...

After I discovered this the note in the spec was updated to make it clearer.

  https://html.spec.whatwg.org/multipage/parsing.html#:~:text=A start tag whose tag name is "plaintext"


I'm terrified of opening a paren and forgetting to close it! How terrifying to find a tagged paren that cannot be closed!

"please accept from me this unpretentious bouquet of early-blooming" <plaintext>s


It was an easy way to use an existing plain-text document where HTML was expected.

https://datatracker.ietf.org/doc/html/draft-ietf-html-spec-0...

The PLAINTEXT element was replaced by the LISTING element (which was itself deprecated in HTML 3.2): https://datatracker.ietf.org/doc/html/rfc1866#section-5.5.2....


It's not deprecated. It's obsolete and totally removed from the HTML standard since HTML4.


What in the world was the intended use for that?


Same as <pre>, no?


FIM is "fill-in-the-middle", i.e. completion in a text editor using context on both sides of the cursor.

