
I spent a while optimizing SQLite inserts for planetiler; this is what I came up with:

https://github.com/onthegomap/planetiler/blob/db0ab02263baaa...

It batches inserts into bulk statements and can do writes in the 500k+/s range; reads run at 300-400k/s with those settings.
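The linked Java code isn't quoted in the thread, but the basic technique it describes (packing many rows into one multi-row INSERT statement, with durability-related pragmas relaxed) can be sketched in Python. The table, column names, pragma choices, and batch size here are illustrative assumptions, not planetiler's actual settings:

```python
import sqlite3

def bulk_insert(conn, rows, batch_size=400):
    """Insert rows as multi-row INSERT statements to amortize per-statement overhead.

    batch_size is an assumed value, kept small enough that the number of bound
    parameters (2 per row) stays well under SQLite's variable limit.
    """
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        placeholders = ",".join("(?,?)" for _ in batch)
        flat = [v for row in batch for v in row]  # flatten [(k, v), ...] for binding
        cur.execute(f"INSERT INTO kv (k, v) VALUES {placeholders}", flat)
    conn.commit()

conn = sqlite3.connect(":memory:")
# Hypothetical speed-over-durability pragmas of the kind the thread is
# discussing; the real planetiler settings live in the linked file.
conn.execute("PRAGMA journal_mode=OFF")
conn.execute("PRAGMA synchronous=OFF")
conn.execute("CREATE TABLE kv (k INTEGER, v TEXT)")
bulk_insert(conn, [(i, f"val{i}") for i in range(10_000)])
```

The win comes from issuing one statement per few hundred rows instead of one per row, so SQLite parses and steps the VM far less often.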



You’re eliminating the “D” from “ACID” with these settings, and risk data loss on power failure and/or entire database corruption.

I can make a car go really fast if I eliminate the weight of having safety equipment on it like a bumper, seatbelt and airbag.


Which, for this use case - bulk generating a read-only data file from a source dataset in a batch job - seems like a pretty good performance tradeoff, no? In the case of a failure of some kind - no big deal, just restart the process anew.

This is one of those use cases where SQLite isn’t replacing <database> - it’s replacing fopen.


Exactly, this use case is write-once then read-only after that. Random updates to the SQLite file after that would need more "normal" settings, but then they also wouldn't need as much throughput.
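For contrast with the speed-first settings above, here is a hedged sketch of the kind of "normal" configuration one might use if the file did need safe random updates. WAL journaling with NORMAL synchronous is a common durable-but-reasonably-fast choice, not something the thread specifically prescribes:

```python
import os
import sqlite3
import tempfile

# WAL mode needs a real file; it is not available on :memory: databases.
path = os.path.join(tempfile.mkdtemp(), "data.db")
conn = sqlite3.connect(path)

# Crash-safe write-ahead log: readers and the writer don't block each other.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# fsync at WAL checkpoints rather than on every transaction; in WAL mode this
# keeps the database consistent after a crash, at some risk of losing the
# most recent commits on power failure.
conn.execute("PRAGMA synchronous=NORMAL")
```

These settings keep the "D" in ACID at the database level while still avoiding a full fsync per transaction.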


It would be nice to have a compendium of SQLite use-cases and their respective configurations.



