
What scenarios/filesystems lead to these kinds of conflicts?



Well, the issue I ran into was that I wanted a SQLite DB to have one writer and many readers. However, what I was seeing was that read queries were failing because the DB was locked at the time of reading. The solution is to switch the journaling mode to WAL[1].

However, the particular platform I was developing for was an embedded system using the JFFS2 filesystem, and JFFS2 doesn't support shared mmap, which SQLite needs for WAL.

So basically, it was a big pain: I had to set up a tmpfs mount point where the WAL-enabled DB resided and then periodically sync it to flash.

[1] https://www.sqlite.org/wal.html
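For anyone who hasn't done it: switching to WAL is a one-line pragma, and it's persistent (it sticks to the database file, not the connection). A minimal Python sketch, with an illustrative temp-file path since WAL needs a real on-disk file (and a filesystem that supports SQLite's locking, unlike JFFS2 above):

```python
import os
import sqlite3
import tempfile

# WAL requires an actual database file; ":memory:" databases can't use it.
path = os.path.join(tempfile.mkdtemp(), "stats.db")

conn = sqlite3.connect(path)
# Switch the journal mode; the pragma returns the mode now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal" on filesystems that support SQLite's locking
conn.close()
```

Once set, readers no longer block on the single writer, which is exactly the one-writer/many-readers setup described above.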


Why didn't you just do more granular writes and let the different processes lock the file? SQLite is impressively fast, it can handle a lot before you need a full database.


I don't know how to do that.

Basically, there is one process periodically dumping stats into a SQLite DB, and there is an HTTP API that can retrieve said stats. How do I make it so the HTTP API doesn't occasionally fail when trying to read from the DB?


Retry until you get the lock, and don't hold it any longer than you need to write to the DB.

Really though, if you are trying to read and write a DB concurrently and over the network, that sounds like a much more clear-cut case for something like Postgres.
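The retry approach suggested here can be sketched in a few lines. This is a hypothetical helper (the name `read_with_retry` and its parameters are mine, not from the thread), assuming Python's stdlib `sqlite3`, which raises `OperationalError` with a "database is locked" message when it loses the race:

```python
import sqlite3
import time

def read_with_retry(path, query, attempts=5, delay=0.1):
    """Run a read query, retrying briefly if the DB is locked by the writer."""
    for i in range(attempts):
        try:
            # The connection context manager handles commit/rollback;
            # a short-lived connection also keeps lock hold time minimal.
            with sqlite3.connect(path) as conn:
                return conn.execute(query).fetchall()
        except sqlite3.OperationalError as e:
            # Only retry lock contention; re-raise real errors,
            # and give up after the last attempt.
            if "locked" not in str(e) or i == attempts - 1:
                raise
            time.sleep(delay * (i + 1))  # simple linear backoff
```

The backoff numbers are arbitrary; the point is just that readers bounce off the writer's lock briefly instead of failing outright.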


I don't really like the idea of a retry loop. Why doesn't sqlite have an API to just block until the DB is available?


You asked the question and now you don't like the 'idea' of the answer. Use real numbers instead of feels, and again if you need real concurrency, use a real database.


"Retry" is not a solution; it's a workaround. Imagine if that's how opening files worked: "Oh yeah, if the file fails to open just retry a few times until it works."

Even with 3 retries, there's a small chance you fail all 3 times in a row. WAL journaling is a solution.
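For what it's worth, SQLite does ship a built-in, blocking form of the retry loop discussed above: the busy timeout (`sqlite3_busy_timeout` in C, `PRAGMA busy_timeout` in SQL), which makes a connection wait up to N milliseconds for a lock before returning SQLITE_BUSY. A minimal sketch using Python's stdlib `sqlite3` (the `stats.db` filename is just a placeholder):

```python
import sqlite3

# Python's `timeout` argument maps onto SQLite's busy timeout:
# the connection blocks up to 5 s on a lock instead of failing immediately.
conn = sqlite3.connect("stats.db", timeout=5.0)

# The equivalent SQL form; the pragma value is in milliseconds.
conn.execute("PRAGMA busy_timeout = 5000")
```

It doesn't remove contention the way WAL does, but it does turn "fails when locked" into "waits when locked".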


Not exactly a filesystem, but it doesn't work on SMB shares, which is an enormous pain in the ass. And I'm not even talking about concurrent access - just one user. I get that SMB file locking is a mess, but making it a hard fail by default was a big mistake IMO, as most developers using the library don't make special cases to allow this.


The documentation is very clear that SQLite over a network isn't a good use case, even for reasonably sized data. See item 1 from the "Checklist for choosing the right database engine"[1]:

> 1. Is the data separated from the application by a network? → choose client/server

> Relational database engines act as bandwidth-reducing data filters. So it is best to keep the database engine and the data on the same physical device so that the high-bandwidth engine-to-disk link does not have to traverse the network, only the lower-bandwidth application-to-engine link.

> But SQLite is built into the application. So if the data is on a separate device from the application, it is required that the higher bandwidth engine-to-disk link be across the network. This works, but it is suboptimal. Hence, it is usually better to select a client/server database engine when the data is on a separate device from the application.

1. https://www.sqlite.org/whentouse.html



