Which, for this use case - bulk-generating a read-only data file from a source dataset in a batch job - seems like a pretty good performance tradeoff, no? If something fails partway through, it's no big deal: just throw away the output and restart the process.
This is one of those use cases where SQLite isn’t replacing <database> - it’s replacing fopen.
Exactly - this use case is write-once, then read-only after that. Random updates to the sqlite file later would need more “normal” settings, but they also wouldn’t need as much throughput.
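For reference, a minimal sketch of what that write-once setup can look like from Java, assuming the org.xerial sqlite-jdbc driver is on the classpath. The specific pragmas here (synchronous=OFF, journal_mode=OFF, locking_mode=EXCLUSIVE) are illustrative of the relaxed-durability idea, not taken from the linked project:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WriteOnceDb {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:sqlite:output.db");
         Statement st = conn.createStatement()) {
      // Durability doesn't matter here: if the batch job dies, delete the file and rerun.
      st.execute("PRAGMA synchronous = OFF");        // don't fsync on every commit
      st.execute("PRAGMA journal_mode = OFF");       // no rollback journal / WAL at all
      st.execute("PRAGMA locking_mode = EXCLUSIVE"); // single writer, no lock churn
      st.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)");
      // ... bulk load here; afterwards the file is only ever opened read-only
    }
  }
}
```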
https://github.com/onthegomap/planetiler/blob/db0ab02263baaa...
It batches inserts into bulk multi-row statements and, with those settings, gets writes in the 500k+ per second range and reads at 300-400k/s.
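For anyone curious what “batching inserts into bulk statements” means in practice, here is a rough sketch of the pattern (a single multi-row INSERT reused inside one large transaction). It is a hypothetical illustration, not the linked project's actual code; table name, batch size, and data are made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BulkInsert {
  static final int BATCH = 100; // rows per INSERT statement (illustrative; keeps bind params well under limits)

  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:sqlite:output.db")) {
      conn.setAutoCommit(false); // one big transaction, commit once at the end
      try (Statement st = conn.createStatement()) {
        st.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)");
      }

      // Build "INSERT INTO kv (k, v) VALUES (?, ?), (?, ?), ..." with BATCH value tuples.
      StringBuilder sql = new StringBuilder("INSERT INTO kv (k, v) VALUES ");
      for (int i = 0; i < BATCH; i++) {
        sql.append(i == 0 ? "(?, ?)" : ", (?, ?)");
      }

      try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
        for (int row = 0; row < 1_000_000; row += BATCH) {
          for (int i = 0; i < BATCH; i++) {
            ps.setString(2 * i + 1, "key-" + (row + i));
            ps.setBytes(2 * i + 2, new byte[] {42});
          }
          ps.executeUpdate(); // one statement inserts BATCH rows
        }
      }
      conn.commit();
    }
  }
}
```

The two wins over naive row-at-a-time inserts are fewer statement executions (one per 100 rows here) and a single commit, so SQLite isn't journaling and syncing per row.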