
Since random accesses across the internet are really slow, for this kind of fairly small table (where SQLite stores the row data inline within the B-Tree of the table) it basically fetches the whole content of each row - so even if you query only the long_name and country_code columns, it will in fact fetch the data of all 29 columns in that table.

If you want it to fetch less data when querying a subset of columns, you could create an index on those columns - then SQLite will do a COVERING INDEX scan and thus read only the necessary data (with the B-Tree itself and the start / end page alignment being the only overhead).
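You can see the difference in the query plan locally. A minimal sketch (the table and column names `country`, `long_name`, `country_code` are illustrative, loosely matching the table described above):

```python
import sqlite3

# In-memory stand-in for the remote database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE country (long_name TEXT, country_code TEXT, extra TEXT)")
con.execute("INSERT INTO country VALUES ('Germany', 'DE', 'x'), ('France', 'FR', 'y')")

# Without an index, SQLite scans the table B-tree, reading every row's
# full payload even though only two columns are selected.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT long_name, country_code FROM country"
).fetchone()
print(plan[3])  # a plain table scan, e.g. "SCAN country"

# An index on exactly the queried columns lets SQLite answer the query
# from the index B-tree alone - a covering index scan.
con.execute("CREATE INDEX idx_name_code ON country (long_name, country_code)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT long_name, country_code FROM country"
).fetchone()
print(plan[3])  # e.g. "SCAN country USING COVERING INDEX idx_name_code"
```

Over HTTP that difference translates directly into fewer and smaller range requests, since only the (smaller) index pages get fetched.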



Nothing to add to the conversation, just wanted to say I absolutely adore this. Years ago I used to think it'd be nice if search engines simply exposed all their shards to clients and let clients do all the results merging and suchlike. Of course that's probably a terrible idea in practice for performance, but this library is definitely implementing the spirit in a very practical way!

It also reminded me of a vague inverse of this hack. In old versions of Qemu (possibly it is still implemented, but I have vague memories it got ripped out), you could point Qemu at a directory on disk and it'd produce an emulated floppy disk drive with a virtual FAT12 image containing the directory contents. AFAIK it didn't keep the actual data in memory; I guess all it needed was file sizes to know how to build a virtual memory mapping that contained the filesystem metadata + proxied reads from the underlying files for data sectors. I look forward to seeing your implementation of this concept in a virtualized SQLite file <-> GraphQL proxy ;)

edit: insane, it still exists and apparently supports write mode?! https://en.wikibooks.org/wiki/QEMU/Devices/Storage#Virtual_F...
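For the curious, the vvfat driver described above is invoked roughly like this per the linked wikibook (the directory path is just an example; check your QEMU version's docs for the exact syntax):

```
# Expose a host directory as a virtual FAT floppy image,
# built on the fly by QEMU's vvfat block driver.
qemu-system-x86_64 -fda fat:floppy:rw:/path/to/dir
```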




