What's the point of this? If you have multiple applications on multiple systems accessing the same DB, it seems to make more sense to just use PostgreSQL, since it's specifically designed for concurrent operation like this, instead of trying to handle this in your own custom backend code.
If you have multiple applications on different servers communicating with the same database, then yes, you would need to run a database server such as PostgreSQL.
If you run a single application on a server that needs a database, you might want to consider SQLite, regardless of your concurrency/concurrent-write needs.
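To make that concrete, here's a minimal Python sketch of the usual single-application setup (the file name and schema are invented for illustration). WAL mode is the main knob: it lets readers proceed alongside a single writer, which covers most of the workloads people worry about:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical file name
conn.execute("PRAGMA journal_mode=WAL")   # readers no longer block while a writer commits
conn.execute("PRAGMA busy_timeout=5000")  # wait up to 5s on a locked db instead of erroring

conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO events (body) VALUES (?)", ("hello",))
conn.commit()
```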
Having an API in front of a database for full control of what is available is extremely common.
> trying to handle this in your own custom backend code
It's not writing some extra custom code; it's simply locating all of the code that interacts directly with your database on one host. You split up where your code lives so that what would have been function calls, if everybody talked to the database directly, become API calls instead. This kind of organizational decision is not at all unusual.
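As a rough sketch of that organization, using only the Python standard library (the endpoint, schema, and port are all made up): every line that touches SQLite lives in this one process, and other services make HTTP calls instead of opening the database file themselves.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# This one process owns the database file; everything else goes through HTTP.
db = sqlite3.connect("app.db")
db.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # What would be a function call in a monolith is an API call here.
        rows = db.execute("SELECT id, name FROM users").fetchall()
        body = json.dumps([{"id": r[0], "name": r[1]} for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), Handler).serve_forever()
```

In practice you'd reach for a real framework and a threaded server, but the shape is the same: one host owns the file, everyone else calls the API.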
And if you're using SQLite, it's probably because your application is simple, and there should be some pushback anyway on people trying to mAkE iT WEBScaLE!! (can I still make this joke or has everybody forgotten?)
A lot of premature optimizers get very worried about concurrency and scalability on systems which will never have concurrent queries or need to be scaled at all. I remember making fun of developers running "scalable" Hadoop nonsense on enormous clusters that cost more than my yearly salary to run, by reimplementing their code with cut and grep on my laptop at a 100x speedup.
I've worked places where a third of our cloud budget went to a bunch of database instances running at under 5% utilization, because folks insisted on all of these database benefits that were never actually going to be needed.
It's not a particularly unusual situation: it's very common for a database to effectively be entirely owned by a single application which manages its own constraints on top of that database. In that circumstance SQLite is pretty interchangeable with other databases.
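For illustration, a hedged sketch (Python, names hypothetical) of why that ownership makes SQLite fairly interchangeable: code written against a small connection factory mostly doesn't care what sits behind it, modulo the placeholder-style differences between DB-API drivers.

```python
import sqlite3

def get_connection():
    # Swapping databases means swapping this factory (and adjusting the
    # '?' vs '%s' placeholder style); the rest of the app doesn't care.
    conn = sqlite3.connect("app.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
    )
    return conn

def add_order(conn, customer, total):
    # The application, not the database, owns this constraint.
    if total <= 0:
        raise ValueError("order total must be positive")
    conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total))
    conn.commit()

add_order(get_connection(), "alice", 19.99)
```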
It's not unusual, but it's never performant. Adding an API layer and network hops to what should be a database shard or view is why enterprise software sucks so much ass.
Why does the API take 3s to respond? Well, it needs to call 6 other APIs, all of which manage their own data. The problem compounds over time. APIs are not the way to solve cross-organization data concerns.
You've fully misunderstood what I said. When you have 500 applications, the call graph for resolving any one API request gets deep: API 1 calls API 2, which calls API 3, and so on.
Versus creating a proper, organization-wide way to share and manage data.
The number of applications doesn't need to create depth in the API layer. They're not related. If I have a service that sends emails, whether I have one or a thousand applications calling it doesn't matter.