This depends a lot on the use case. You have to consider things beyond simply serving requests: rate limiting, caching, logging, event tracking, and so on.
About a year ago I designed a system for a client that was just Postgres with a Go HTTP frontend. Go handled the HTTP requests and responses, translated the API from HTTP into Postgres functions/views, and served the response straight from Postgres. Even authentication and access control were handled with Postgres's role system, and RLS was evaluated (and found viable for when it would be needed). Postgres was a good fit because the project was mostly data wrangling. Of course, I could only do it this way because it was a completely internal system with a fixed number of users, so the maintenance burden was minimal.
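To make the pattern concrete, here's roughly what the Go side looked like: a thin handler that forwards the request to a Postgres function and serves its JSON output verbatim. This is a sketch from memory, not the actual code; the endpoint, DSN, and the `api.get_orders` function are all hypothetical.

```go
package main

import (
	"database/sql"
	"log"
	"net/http"

	_ "github.com/lib/pq" // Postgres driver; pgx would work equally well
)

func main() {
	// Hypothetical connection string; in practice this came from config.
	db, err := sql.Open("postgres", "postgres://app@localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		// Translate the HTTP request into a call to a Postgres function
		// and serve its JSON result as-is -- no ORM, no app-side model.
		// api.get_orders is a made-up function returning a single json value.
		var body []byte
		err := db.QueryRow(
			`SELECT api.get_orders($1)`,
			r.URL.Query().Get("customer_id"),
		).Scan(&body)
		if err != nil {
			http.Error(w, "query failed", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write(body)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

All the real logic (joins, aggregation, access checks) lived in the function/view layer on the Postgres side, which is what made the Go layer so thin.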
Main reasons for not going this direction, IMO, would be:
1. Developer proficiency, both while building and while maintaining. Far more people know Python, Ruby, JavaScript, etc. than SQL, and far more people know how to think in imperative programs than in data models. Of course, one can write Postgres-hosted imperative programs as functions (in languages ranging from plpgsql to JavaScript; see the sketch after this list), but at that point using the same language in a Node or Django app is much easier.
2. Unsuitability. Some things are just not suited to run in a database: realtime multiplayer games, APIs that composite data from multiple sources, view rendering, and so on. The fact remains that general-purpose programming languages (or special-purpose ones, where the purpose is serving) and their environments offer more possibilities than an environment that grew up around dealing with data.
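To illustrate point 1: a Postgres-hosted imperative program is just a function installed via DDL and called like any query. Here's a rough sketch of installing and invoking a plpgsql function from Go; the `users` table, its `email` column, and the function name are all made up for the example.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

// Hypothetical plpgsql function: imperative-style logic hosted in the
// database itself, installed from the same Go codebase as a migration.
const createFn = `
CREATE OR REPLACE FUNCTION normalize_emails() RETURNS integer
LANGUAGE plpgsql AS $$
DECLARE
    updated integer;
BEGIN
    UPDATE users SET email = lower(trim(email))
    WHERE email <> lower(trim(email));
    GET DIAGNOSTICS updated = ROW_COUNT;
    RETURN updated;
END;
$$;`

func main() {
	db, err := sql.Open("postgres", "postgres://app@localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(createFn); err != nil {
		log.Fatal(err)
	}
	var n int
	if err := db.QueryRow(`SELECT normalize_emails()`).Scan(&n); err != nil {
		log.Fatal(err)
	}
	log.Printf("normalized %d rows", n)
}
```

It works, but this is exactly where point 1 bites: the plpgsql lives in a string, gets no IDE support, and is harder to test than the same logic written directly in the host language.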