If you're using SQLite, there's always a risk of losing that data file on disk, right? How is that managed? Is this a dumb question, or is it a problem PostgreSQL simply doesn't have?
The best answer I know of to this question is Litestream - https://litestream.io/ - you can use it to inexpensively replicate a backup of your database up to an S3 bucket (or various other storage providers) to give you a very solid robust recovery option should you lose the disk that your SQLite database lives on.
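For anyone wondering what that looks like in practice, here's a minimal sketch (the bucket name, file paths, and credentials are placeholders I made up, not anything Litestream ships with):

    # replicate continuously to S3 (needs credentials in the environment,
    # e.g. LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY)
    litestream replicate ./app.db s3://my-backup-bucket/app.db

    # after losing the disk, pull the latest replica back down
    litestream restore -o ./app.db s3://my-backup-bucket/app.db

In production you'd normally drive it from a litestream.yml config and run it as a service alongside the application, but those two commands are the whole idea.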
The big question here is whether the filesystem the data file lives on is backed by a single disk, a RAID volume, or even a clustered SDS volume such as Ceph. On a single-disk volume, if the disk dies your DB is gone (at least until you can replace the disk and restore a backup). On a sensibly designed RAID volume, if one disk dies the application's admin probably never even notices: the sysadmin sees it in their logging/alerting infrastructure and replaces the dead disk, and there is no loss of data or availability, just a temporary degradation in overall resiliency. The same is true for a clustered SDS volume such as Ceph.
Not only that, the easy mode most of us are in now is "run the application in a major cloud". AWS and the other big providers all promise 99.999% or better durability for their block storage offerings.
Of course you still need backups for other kinds of incidents.
In combination with a base backup, you can use streaming WAL backups with Postgres, shipping compressed WAL files to S3 or wherever. I like WAL-G for this.
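If it helps, the moving parts look roughly like this (bucket name and paths are made up; WAL-G picks up its target from environment variables):

    # postgresql.conf: hand every completed WAL segment to WAL-G
    archive_mode = on
    archive_command = 'wal-g wal-push %p'

    # environment visible to the postgres server process
    export WALG_S3_PREFIX=s3://my-pg-backups/main
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...

    # the base backup that the archived WAL gets replayed on top of
    wal-g backup-push "$PGDATA"

Recovery is the reverse: wal-g backup-fetch for the base backup, then a restore_command of 'wal-g wal-fetch "%f" "%p"' so Postgres can replay WAL up to the point of failure.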
There are lower-level approaches built into SQLite, such as the Backup API, but there is also at least one third-party project, Litestream, which does streaming WAL backups too.
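As far as I know the sqlite3 shell's .backup dot-command sits on top of that same Backup API, so the simplest version is a one-liner (file names are placeholders; it takes a consistent copy even while the application has the database open):

    # online copy of a live database to a separate file
    sqlite3 app.db ".backup 'app-backup.db'"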