I avoid dealing with a full-blown DBMS daemon accessed across a network boundary until I actually need one, and I have approximately zero pain with that, because I read the documentation and set up my database schemas to work on both solutions (and use WAL mode, so sqlite is rock solid). I also know how to open the file on the command line if I need raw database access for some reason (rarely necessary with django anyway; django's console is just too good not to use). I also design my apps not to deadlock on unnecessarily long transactions and don't turn every request into a database write, so I can scale out pretty far before I have to worry about write performance. And if I ever do, I can still switch to postgres. Until then, I get unified, consistent backups of all state by snapshotting the filesystem that holds the uploads and the sqlite files.
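Concretely, keeping both backends working is only a handful of lines in settings.py, roughly along these lines (a sketch, not lifted from a real project; the DJANGO_DB/POSTGRES_* variable names are my own convention, and I'd normally register the signal receiver in an AppConfig.ready()):

    import os
    from pathlib import Path

    from django.db.backends.signals import connection_created
    from django.dispatch import receiver

    BASE_DIR = Path(__file__).resolve().parent.parent

    # Pick the backend from the environment; DJANGO_DB and the POSTGRES_*
    # variables are my own convention, not something django defines.
    if os.environ.get("DJANGO_DB") == "postgres":
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.postgresql",
                "NAME": os.environ["POSTGRES_DB"],
                "USER": os.environ["POSTGRES_USER"],
                "PASSWORD": os.environ["POSTGRES_PASSWORD"],
                "HOST": os.environ.get("POSTGRES_HOST", "db"),
            }
        }
    else:
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.sqlite3",
                "NAME": BASE_DIR / "db.sqlite3",
            }
        }

    @receiver(connection_created)
    def enable_wal(sender, connection, **kwargs):
        # Put sqlite into WAL mode on every new connection; postgres
        # connections are left untouched. Inlined here to keep the
        # sketch short.
        if connection.vendor == "sqlite":
            with connection.cursor() as cursor:
                cursor.execute("PRAGMA journal_mode=WAL;")

With something like that in place the same models and migrations run against either backend, so moving to postgres later is a settings change, not a rewrite.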
So I dunno why people insist on spreading so much FUD.
It's not FUD. For all the trouble you claim Postgres brings, I've experienced none of it in the last 4 years. The only extra thing for a simple setup is a couple of lines in your docker compose file, which is completely amortised because you already run a multi-process architecture with Python anyway (proxy + web server + workers). The upfront cost is so small that, for me, it will rarely fail to pay off on expected total cost, even if you assume your application has only a 1% chance of scaling beyond what you can do with sqlite.
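For reference, the "couple of lines" I mean is roughly this (a sketch only; image tag, service names and credentials are placeholders, and the env variables just mirror whatever your app reads):

    services:
      web:
        build: .
        environment:
          DJANGO_DB: postgres
          POSTGRES_HOST: db
          POSTGRES_DB: app
          POSTGRES_USER: app
          POSTGRES_PASSWORD: change-me
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_DB: app
          POSTGRES_USER: app
          POSTGRES_PASSWORD: change-me
        volumes:
          - pgdata:/var/lib/postgresql/data

    volumes:
      pgdata:

That's the whole incremental setup: one extra service and a named volume, next to containers you're already running.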