Unless I'm misunderstanding something, I do think using SQLite makes a significant difference in the complexity of app deployment. When using multi-region Postgres, it's true that you only want the Postgres leader to be accessed by app instances in the same region, so the app instances all have to know which region is running the leader. But multiple app instances in that region can connect to that Postgres leader, so it's easy to do a typical rolling deploy. With SQLite, only one app instance at a time can write to the database, so IIUC, there will have to be a reliable way of doing failover with every app deploy. I suppose the same thing has to happen in the Postgres scenario when updating Postgres itself, but that's way less frequent than deploying new versions of the app.
You can have two app instances writing to the database concurrently, as long as they are running on the same machine. That makes it possible to deploy a new version without downtime by starting the new binary, switching traffic to the new instance, draining connections to the old instance, and shutting it down. That's roughly how nginx, for example, upgrades itself without downtime. It's not the usual approach nowadays with containers and VMs, but it was very common before. It's definitely doable with SQLite, and not very complex. A sketch of that handoff is below.
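A minimal sketch of that handoff in Go on Linux, assuming SO_REUSEPORT is available: the old and new binaries both bind the same port, the kernel spreads new connections across them, and the old process drains in-flight requests before exiting. The port, the SIGTERM-driven flow, and the 30-second drain timeout are all illustrative choices, and this assumes the SQLite database is in WAL mode so both processes can hold connections to it (writes are still serialized by SQLite's own locking).

```go
package main

import (
	"context"
	"net"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"golang.org/x/sys/unix"
)

func main() {
	lc := net.ListenConfig{
		// SO_REUSEPORT lets a second process (the new binary) bind the
		// same address while this one is still serving.
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}

	ln, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		panic(err)
	}

	srv := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})}
	go srv.Serve(ln)

	// On SIGTERM (sent by the deploy script once the new binary is up
	// and accepting connections), stop taking new connections and
	// finish the pending requests, then exit.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)
	<-sig

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	srv.Shutdown(ctx) // drain in-flight requests
}
```

The deploy then reduces to: start the new binary, wait for it to bind and pass a health check, send SIGTERM to the old one. During the overlap the kernel balances new connections between the two, which is why the old instance ends up handling only its already-pending requests.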
No. The number of requests being processed is unchanged; they are just dispatched between the two instances, the new one and the old one. In fact, all new requests go to the new instance, and only the pending requests are finished by the old one.
Agreed, RAM usage may be slightly elevated during the switch, but that doesn't seem to be a big issue in practice. nginx, for example, updates itself this way and it works fine.