A simple statement by the maintainers of nginx explaining how to configure it so that a desync attack fails. That would have been helpful, especially since the researchers behind the desync attack claim that nginx is not invulnerable.
I've got no idea who F5 is. They seem legit, but that page didn't show up in my DDG search. But it's too late now. Water under the bridge.
3 of 7 work at EDB, and the core team doesn’t drive the project roadmap. And EDB hackers fail to get patches in all the time, just like everyone else :)
Regarding #1, NGINX has created a project to make ACME integration easier. It is quite new, so I doubt it will replace your use of Caddy, but it is worth consideration.
Callers need to exercise a fairly high level of OPSEC to maintain anonymity. If you aren’t using some VoIP service, there’s a good chance your call will be traced and the cops will be at your door anyway.
Disclaimer: I am one of the authors of the project.
I do wish that NGINX made Let's Encrypt as easy to use as Caddy does. We are all big fans of Let's Encrypt and are quite happy to see NGINX donating to the project.
In this project (MARA), LetsEncrypt support is integrated via [Cert Manager](https://cert-manager.io/) for Kubernetes. This is nice because it supports certs from a variety of issuers like AWS, Google, Vault, Cloudflare, etc in addition to Let's Encrypt.
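For anyone unfamiliar with Cert Manager, certificate requests are declared as Kubernetes resources. A hypothetical example (the name, namespace, domain, and issuer name below are all placeholders, not taken from MARA):

```yaml
# Hypothetical example: request a Let's Encrypt certificate via cert-manager.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls        # placeholder name
  namespace: default
spec:
  secretName: example-tls  # Secret where the issued cert/key pair is stored
  issuerRef:
    name: letsencrypt-prod # a ClusterIssuer you would define separately
    kind: ClusterIssuer
  dnsNames:
    - example.com          # placeholder domain
```

Swapping issuers (AWS, Vault, etc.) is just a matter of pointing `issuerRef` at a different Issuer/ClusterIssuer resource.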
SeaweedFS and this project have different purposes. This project is intended to show off how to configure NGINX to act as an S3 proxying gateway using [njs](https://nginx.org/en/docs/njs/). If you look at the GitHub repo for it, you will see it is just a collection of nginx config and JavaScript files. This all works with standard open source NGINX. All it does is proxy files like a L7 load balancer, but in this case, it adds AWS v2/v4 signature headers to the upstream requests.
As for caching, that is totally configurable to whatever you want; the example configuration is set to 1 hour, but that is arbitrary. In fact, one of the interesting things is all of the additional functionality that can be enabled because the proxying is being done by NGINX.
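The 1-hour setting is just ordinary `proxy_cache` tuning. A minimal sketch of what that looks like (paths, zone names, and sizes below are arbitrary choices, not the project's shipped config):

```nginx
# Cache zone: location, size, and eviction window are deployment choices.
proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3_cache:10m
                 max_size=1g inactive=1h use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_cache s3_cache;
        proxy_cache_valid 200 302 1h;        # the "1 hour" from the example
        proxy_cache_valid 404 1m;
        proxy_pass https://s3.amazonaws.com; # signed headers added upstream
    }
}
```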
Regarding read and write, that can be enabled with AWS v2 signatures, but it is more difficult to do with AWS v4 signatures. I have an idea about how to accomplish it with v4 signatures, but it will take some time to prototype.
SeaweedFS is very different from Nginx. It's just that the names are so similar.
There are 2 ways to cache: write-through and write-back. You are using write-through, which needs to write to the remote storage before returning. Write-back only writes to the local copy, which is much faster to return; the actual remote updates are executed asynchronously.
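A toy sketch of the distinction (class and variable names invented for illustration; `remote` stands in for slow backing storage such as S3):

```javascript
// Toy illustration of write-through vs write-back caching.

class WriteThroughCache {
  constructor(remote) { this.remote = remote; this.local = new Map(); }
  async set(key, value) {
    await this.remote.put(key, value); // must hit remote before returning
    this.local.set(key, value);
  }
}

class WriteBackCache {
  constructor(remote) {
    this.remote = remote;
    this.local = new Map();
    this.dirty = new Set(); // keys not yet pushed to remote
  }
  set(key, value) {
    this.local.set(key, value); // returns immediately
    this.dirty.add(key);        // remote update is deferred
  }
  async flush() {               // runs asynchronously, some time later
    for (const key of this.dirty) {
      await this.remote.put(key, this.local.get(key));
    }
    this.dirty.clear();
  }
}
```

The trade-off: write-back gives fast writes but risks losing updates that haven't been flushed yet, which is why write-through is the safer default for a proxy.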
This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?