(Also building a product in the productivity space, with an option for users to self-host the backend)
That's interesting, for us there was actually no trade-off in that sense. Having operated another SaaS with a lot of moving parts (separate DB, queueing, etc), we came to the conclusion rather early on that it would save us a lot of time, $ and hassle if we could just run a single binary on our servers instead. That also happens to be the experience (installation/deployment/maintenance) we would want our users to have if they choose to download our backend and self-host.
Just download the binary, and run it. Another benefit is that it's also super helpful for local development: we can run the actual production server on our own laptops as well.
We're simply using a Go backend with SQLite and local disk storage, and it pretty much contains everything we need to scale, from websockets to queues. The only #ifdef cloud_or_self_hosted will probably be that we'll use some S3-like object store next to a local cache.
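To illustrate that seam, here's a rough sketch of what it could look like, with hypothetical names (BlobStore, DiskStore) standing in for whatever the real code does; the point is that the self-hosted default is just files under one directory:

    package storage

    import (
        "io"
        "os"
        "path/filepath"
    )

    // BlobStore is the one seam between cloud and self-hosted builds;
    // everything above it stays identical.
    type BlobStore interface {
        Put(key string, r io.Reader) error
        Get(key string) (io.ReadCloser, error)
    }

    // DiskStore is the self-hosted default: plain files under one root
    // dir, which also keeps "copy the app directory" backups trivial.
    type DiskStore struct{ Root string }

    func (d DiskStore) Put(key string, r io.Reader) error {
        path := filepath.Join(d.Root, key)
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            return err
        }
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, r)
        return err
    }

    func (d DiskStore) Get(key string) (io.ReadCloser, error) {
        return os.Open(filepath.Join(d.Root, key))
    }

The cloud build would presumably swap in an S3-backed implementation of the same interface, with the disk store doubling as the local cache.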
I think that's great if you prefer to operate a service like that. Operating for one customer vs many can often have different requirements, and that's where it can make sense to start splitting things out, but if the service can be built that way then that's the best of both worlds!
This is probably where the real trade-off is, and for that it's helpful to look at what the actual failure modes and requirements are. High availability is a range and a moving target of course, with every "extra 9" getting a lot more complicated. Simply swapping SQLite for another database is not going to guarantee all the nines of uptime that specific industries require, just as running everything on a single server might already provide all the uptime you need.
In our experience a simple architecture like this almost never goes down and is good enough for the vast majority of apps, even when serving a lot of users. Certainly for almost all apps in this space. Servers are super reliable, upgrades are trivial, and even for catastrophic failures recovery is extremely easy: just copy the latest snapshot of your app directory over to a new server and done. For scaling, we can simply shard over N servers, but realistically that will never be needed for the self-hosted version with the number of users within one organization.
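For what it's worth, getting a consistent snapshot out of a live SQLite database is also nearly a one-liner. A sketch, assuming the mattn/go-sqlite3 driver, SQLite >= 3.27, and made-up paths:

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "os"
        "time"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "app/data.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if err := os.MkdirAll("app/snapshots", 0o755); err != nil {
            log.Fatal(err)
        }
        dest := fmt.Sprintf("app/snapshots/data-%s.db",
            time.Now().Format("20060102-150405"))

        // VACUUM INTO writes a transactionally consistent copy of the
        // live database to dest, without stopping the server.
        if _, err := db.Exec("VACUUM INTO ?", dest); err != nil {
            log.Fatal(err)
        }
        log.Println("snapshot written to", dest)
    }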
In addition, our app is offline-first, so all clients can work fully offline and sync back any changes later. Offline-first moves some of the complexity from the ops layer to the app layer, of course, but it means that in practice any rare interruption will probably go unnoticed.
> just copy the latest snapshot of your app directory over to a new server and done.
In practice, wouldn't you need a load balancer in front of your site for that to be somewhat feasible? I can't imagine you're doing manual DNS updates in that scenario because of propagation time.
Sure, this is just about the syncing backend. Our "IDE" frontend is written in vanilla JavaScript, which the Go backend can even serve directly by embedding files in the Go binary, using Go's embed package. Everything's just vanilla JS so there are no additional packages or build steps required.
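For anyone who hasn't seen it, the embed setup really is tiny. A minimal sketch, assuming the frontend files live in a ./frontend directory next to the main package:

    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    // The all: prefix also picks up dotfiles; plain "frontend" skips them.
    //go:embed all:frontend
    var assets embed.FS

    func main() {
        // Strip the "frontend/" prefix so index.html is served at "/".
        sub, err := fs.Sub(assets, "frontend")
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/", http.FileServer(http.FS(sub)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

One binary, frontend included.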
Are you using something like Wails or Electron to wrap this into a desktop app? I’ll have to sign up for Thymer and check it out!
I’ve been prototyping my app with a SvelteKit user-facing frontend that could also eventually work inside Tauri or Electron, simply because that was a more familiar approach. I’ve enjoyed writing data acquisition and processing pipelines in Go so much more that a realistic way of building more of the application in Go sounds really appealing, as long as the stack doesn’t get too esoteric and hard to hire for.
Not GP, but someone else who's also building with Go+SvelteKit. I'm embedding and hosting SvelteKit with my Go binary, and that seems to be working well so far for me. Took some figuring out to get Go embedding and hosting to play nicely with SvelteKit's adapter-static. But now that that's in place, it seems to be reliable.
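In case it saves someone else the same figuring out, here's a sketch of one way to wire it up (my setup, not GP's). It assumes adapter-static is configured with fallback: "200.html" and outputs to ./build; the trick is rewriting unknown routes to the fallback page so SvelteKit's client-side router can take over:

    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
        "strings"
    )

    //go:embed all:build
    var build embed.FS

    // spaHandler serves static files, falling back to adapter-static's
    // fallback page for client-side routes with no matching file.
    func spaHandler(root fs.FS) http.Handler {
        files := http.FileServer(http.FS(root))
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            p := strings.TrimPrefix(r.URL.Path, "/")
            if _, err := fs.Stat(root, p); err != nil && p != "" {
                r.URL.Path = "/200.html" // hand off to the client-side router
            }
            files.ServeHTTP(w, r)
        })
    }

    func main() {
        root, err := fs.Sub(build, "build")
        if err != nil {
            log.Fatal(err)
        }
        log.Fatal(http.ListenAndServe(":8080", spaHandler(root)))
    }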
Yeah, I think keeping infra simple is the way to go too. You can micro-optimize a lot of things, but none of it really beats the simplicity of just running SQLite, or Postgres, or maybe Postgres + one specialized DB (like ClickHouse if you need OLAP).
S3 is pretty helpful though even on self-hosted instances (e.g. you can offload storage to a cheap R2/B2 that way, or put user uploads behind a CDN, or make it easier to move stuff from one cloud to another, etc.). Maybe consider an env variable instead of an #ifdef?
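To make the env-variable idea concrete, a rough sketch; everything here (BlobStore, the STORAGE_* variable names, the stub stores) is hypothetical:

    package storage

    import (
        "io"
        "os"
    )

    // BlobStore mirrors the storage interface sketched upthread.
    type BlobStore interface {
        Put(key string, r io.Reader) error
        Get(key string) (io.ReadCloser, error)
    }

    // Stubs standing in for the real implementations.
    type diskStore struct{ root string }
    type s3Store struct{ endpoint, bucket string }

    func (d diskStore) Put(key string, r io.Reader) error     { return nil } // write under d.root
    func (d diskStore) Get(key string) (io.ReadCloser, error) { return nil, nil }
    func (s s3Store) Put(key string, r io.Reader) error       { return nil } // PUT to s.endpoint
    func (s s3Store) Get(key string) (io.ReadCloser, error)   { return nil, nil }

    // newBlobStore picks the backend from the environment at startup,
    // so one binary covers both cloud and self-hosted deployments.
    func newBlobStore() BlobStore {
        if ep := os.Getenv("STORAGE_S3_ENDPOINT"); ep != "" {
            return s3Store{endpoint: ep, bucket: os.Getenv("STORAGE_S3_BUCKET")}
        }
        return diskStore{root: os.Getenv("STORAGE_DIR")}
    }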