I'd think that all of them would be solved by time. I'm just saying they are nothing to be concerned with until they are (and we are still far away from that point in time). They are just fancy demos to make a company look attractive to investors.
Why the obsession (it seems to be the prominent point in the readme) with configuration via API? How often do you need to add php support on the fly? I want to configure my app server via files so it just starts up in the state that I expect. What am I missing?
Probably the most common use case is SaaS providers that support custom domain names, whatever the software is. For example, a site uptime monitoring service might offer a feature to host a status page on a custom (sub)domain of the customer. The SaaS now needs to programmatically create virtual hosts on demand, issue HTTPS certificates, run routine updates, etc.
An API and a web server that applies small, segmented updates make this so much easier. Compare this to Apache, which has to wait to gracefully end existing connections before reloading, incurs config file parsing overhead, and probably does not scale that well with many virtual hosts anyway. There are hardware/filesystem-level limitations as well.
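A minimal sketch of that on-demand vhost workflow, assuming a hypothetical JSON control API (the endpoint path, payload shape, and ports here are made up for illustration, not any particular server's real API):

```python
import requests

# Hypothetical control endpoint exposed by the app server's config API.
CONTROL_API = "http://127.0.0.1:8443/config"

def add_customer_vhost(customer: str, domain: str, backend_port: int) -> None:
    """Register a new virtual host for a customer's custom (sub)domain."""
    vhost = {
        "match": {"host": domain},
        "action": {"proxy": f"http://127.0.0.1:{backend_port}"},
    }
    # PUT the route under a per-customer key so it can be updated or removed
    # later without touching any other customer's configuration.
    resp = requests.put(f"{CONTROL_API}/routes/{customer}", json=vhost, timeout=5)
    resp.raise_for_status()

add_customer_vhost("acme", "status.acme-corp.example", 9001)
```

The point is that each change is a small, targeted write, rather than regenerating and re-parsing one large config file per customer change.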
> How often do you need to add php support on the fly?
Restarting the binary means you'll lose requests while it's restarting, so adding php (or whatever) support on the fly is what you need when running a system where losing those requests is material. Which it won't be for most people, but for, e.g., Google (who don't use Nginx), losing those requests is a problem.
Although it doesn't directly go against what you're saying, many Unix daemons have supported HUP signals for decades, which can achieve the same outcome. No need to configure via an API; just change the configuration on disk and send HUP (roughly the flow sketched below).
I suppose arguably that becomes a bit trickier for containers, so perhaps that's why you'd want to configure via an API?
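To make the classic file-plus-HUP flow concrete, here's a small sketch (the daemon name, config path, and pid file location are made up; a real daemon documents its own paths and reload semantics):

```python
import os
import signal
from pathlib import Path

# Hypothetical paths for an imaginary daemon.
CONFIG_PATH = Path("/etc/mydaemon/mydaemon.conf")
PID_FILE = Path("/var/run/mydaemon.pid")

def reload_daemon(new_config: str) -> None:
    """Write the new configuration to disk, then ask the daemon to re-read it."""
    CONFIG_PATH.write_text(new_config)        # change the config on disk
    pid = int(PID_FILE.read_text().strip())   # find the running daemon
    os.kill(pid, signal.SIGHUP)               # SIGHUP conventionally means "reload"
```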
There is a slight difference to me, as having another service running (the API) is an additional attack vector to worry about from a security perspective.
I would also say it's easier to enforce good "IaC hygiene" when the configs are managed via configuration files. They can go through a code review process, be deployed via existing config management systems, etc.
Any half-decent containerized setup should support zero-downtime deploys. Usually that involves bringing up new containers and signaling the existing containers to begin draining connections.
For most workloads it should be entirely possible to deploy a new stateless config and not need to resort to using mutable state for critical infrastructure.
If you have long-lived, stateful connections (perhaps for live streams) then I can see why re-configuring in place would be desirable, but in my experience that's pretty rare.
Neither Apache nor nginx requires a restart to add php support, and neither will lose requests under normal operation. They will, however, parse the complete config on a reload operation. On huge configurations this is noticeable.
>Why the obsession (it seems to be the prominent point in the readme) with configuration via API?
Infrastructure as Code (in all its forms: Chef/Puppet/Ansible/TFE, etc.) is the standard for all enterprise cloud setups these days. It makes sense to support that as a first-class feature.
I think they intend for you to run something idempotent like Ansible that coerces configs into the intended state.
In this case, I think Nginx is trying to avoid the in-fighting that happens between platform teams and the various teams that they serve. Most platforms with static configuration don't make it easy to ACL the platform so that one user can't mess with another user. E.g. it would be hard to automatically prevent one user from trying to take another's domain name.
This config API could be ACL'ed so that you can update your application code but not change its domain name or IP, or whatever else the platform team wants to cut off. Hopefully this comes with ACLs like that, but if it doesn't, you could always add them in a standard way with a reverse proxy that has ACLs. That's a lot easier than trying to write an nginx config parser to make sure platform users aren't tinkering with a particular setting in their config.
It’s not hard to imagine a use case where there are backend configurations stored in a database somewhere and you want to apply them. I’m picturing it as data vs. static configuration.
Why? To scale the infra behind my offering to my customers; they need it too. I’d like my remaining customers to not suffer downtime. I’d like to use the existing infra I have without spinning up new instances. I’d like to offer a dashboard so my customers can configure their host.
If the state you need at startup isn't the same as what you need in production, this could be incredibly useful. It also means you can save a lot of time otherwise spent starting and stopping containers for many common configuration changes in production. There's a lot more utility in this than just PHP.
Perhaps adding support for PHP on the fly is an extreme case, but reconfiguring, e.g., load balancer backends as new systems come and go, without having to render a config file and HUP (and hope), is a typical case.
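A rough sketch of that pattern, again assuming a hypothetical control API with an upstreams endpoint (names, paths, and addresses are illustrative, not any real server's API):

```python
import requests

CONTROL_API = "http://127.0.0.1:8443/config"  # hypothetical control endpoint

def sync_backends(live_instances: list[str]) -> None:
    """Replace the load balancer's upstream list with the current set of live
    backend addresses, with no config file rendering and no HUP."""
    upstream = {"servers": [{"address": addr} for addr in live_instances]}
    resp = requests.put(f"{CONTROL_API}/upstreams/app", json=upstream, timeout=5)
    resp.raise_for_status()

# e.g. called from a service-discovery hook whenever instances scale up or down
sync_backends(["10.0.0.11:8080", "10.0.0.12:8080"])
```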