This is, in fact, the biggest problem to solve with any kind of compute platform. And when you suddenly launch things really, really fast, it gets harder.
For what it's worth, I do this from about 50 different IPs and have had no issues. I think their heuristics are more about confirming "a human is driving this" and rejecting "this is something abusing tokens for API access".
Sandboxes with the right persistence and HTTP routing make excellent dev servers. I have about a million dev servers I just use from whatever computer / phone I happen to be using.
It's really useful to just turn a computer on, use a disk, and then plop its URL in the browser.
I currently do one computer per project. I don't even put them in git anymore. I have an MDM server running to manage my kids' phones, a "help me reply to all the people" computer that reads everything I'm supposed to read, a dumb game I play with my son, a family todo list no one uses but me, etc, etc.
Immediate computers have made side projects a lot more fun again. And the nice thing is, they cost nothing when I forget about them.
You will be astonished to know it's a whole lot of sqlite.
Everything I want to pay attention to gets a token, the server goes and looks for stuff in the api, and seeds local sqlites. If possible, it listens for webhooks to stay fresh.
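A minimal sketch of that loop, with everything hypothetical -- the `items` table, the column names, and the endpoint are made up for illustration. The pattern is just "fetch with a per-source token, upsert into a local SQLite file":

```python
# Hypothetical seeding loop: per-source token -> API fetch -> local SQLite.
# Schema and names are invented; only the shape of the pattern matters.
import json
import sqlite3
import urllib.request

def upsert_items(db_path: str, items: list[dict]) -> int:
    """Seed or refresh a local SQLite file from a list of API items."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, body TEXT)")
    con.executemany(
        "INSERT INTO items (id, body) VALUES (:id, :body) "
        "ON CONFLICT(id) DO UPDATE SET body = excluded.body",
        items,
    )
    con.commit()
    (count,) = con.execute("SELECT COUNT(*) FROM items").fetchone()
    con.close()
    return count

def seed(db_path: str, api_url: str, token: str) -> int:
    """Pull everything the token can see and mirror it locally."""
    req = urllib.request.Request(api_url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return upsert_items(db_path, json.load(resp))
```

Webhooks then just call the same upsert path to keep the file fresh between polls.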
Mostly the interface is Claude Code. I have a web view that gives me some idea of volume, and then I just chat at Claude Code to have it see what's going on. It does this by querying and cross-referencing sqlite dbs.
I will have Claude Code send/post a response for me, but I still write them like a meatsack.
It's effectively: long lived HTTP server, sqlite, and then Claude skills for scripts that help it consistently do things based on my awful typing.
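The "cross referencing sqlite dbs" part is mostly just `ATTACH`: one query can join across separate per-project database files. Here's a sketch; the schema (`messages`, `todos.items`, the `ref` column) is invented for illustration:

```python
# Cross-database join via ATTACH: two separate SQLite files, one query.
# Table and column names are made up for the example.
import sqlite3

def unanswered(email_db: str, todo_db: str) -> list[tuple]:
    """Messages that don't have a matching todo item yet."""
    con = sqlite3.connect(email_db)
    con.execute("ATTACH DATABASE ? AS todos", (todo_db,))
    rows = con.execute(
        """
        SELECT m.sender, m.subject
        FROM messages AS m
        LEFT JOIN todos.items AS t ON t.ref = m.id
        WHERE t.ref IS NULL
        """
    ).fetchall()
    con.close()
    return rows
```

Since everything is a plain file on disk, an agent can discover the dbs with a glob and attach whichever ones a question needs.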
FUSE generally has low overall performance because of the additional data copy between kernel space and user space on every operation, which is less than ideal for AI training.
Shell environments are by far the most difficult part of building a stateful sandbox with checkpoints and restores. It's bananas. This will be fixed soon.
FUSE is full of gotchas. I wouldn't replace NFS with JuiceFS for arbitrary workloads. Getting the full FUSE feature set implemented is not easy -- you can't use sqlite on JuiceFS, for example.
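The sqlite failure mode is usually locking: SQLite's default unix VFS takes `fcntl()` byte-range locks on every transaction, and a FUSE filesystem that doesn't implement the lock operations breaks it even though plain reads and writes work fine. A quick probe you can point at a mount before trusting SQLite on it (plain POSIX, nothing JuiceFS-specific):

```python
# Probe whether a directory's filesystem supports the fcntl() byte-range
# locks that SQLite's default unix VFS relies on. Run against a FUSE
# mount point; on filesystems without lock support this raises OSError.
import fcntl
import os

def supports_posix_locks(directory: str) -> bool:
    path = os.path.join(directory, ".lockprobe")
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 1)  # lock one byte
        fcntl.lockf(fd, fcntl.LOCK_UN, 1)                  # and release it
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)
```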
The meta store is a bottleneck too. For a shared mount, you've got a bunch of clients sharing a metadata store that lives in the cloud somewhere. They do a lot of aggressive metadata caching. It's still surprisingly slow at times.
I want to go ahead and nominate this for the understatement of the year. I expect that 2026 is going to be filled with people finding this out the hard way as they pivot towards FUSE for agents.
It depends on what level of FUSE you're working with.
If you're running a FUSE adapter provided by a third party (Mountpoint, GCS FUSE), odds are that you aren't going to get great performance, because every operation has to run across a network, super far away, to reach your data. To improve performance, these adapters need to get fiddly settings right (like enabling kernel-side writeback caching) to avoid the penalty of hitting the backend for operations like write.
If you're trying to write a FUSE adapter, it's up to you to implement as much of the POSIX spec as the programs you want to run require. The requirements per program are often surprising. Want to run "git clone"? Then you need to support unlinking a file from the file system while keeping its data around. Want to run "vim"? You need renames and hard links. All of this work needs to happen in-memory to get the performance applications expect from their file system, which often isn't how these things are built.
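The "git clone" requirement in concrete form -- a tiny demo of the POSIX behavior a FUSE backend has to reproduce. Unlink removes the name, but an already-open handle must keep reading the file's data until the last close:

```python
# Unlink semantics: the directory entry disappears, but the backing
# bytes stay alive for any handle opened before the unlink. A FUSE
# backend has to keep the data around until the last handle closes.
import os
import tempfile

def read_after_unlink() -> str:
    d = tempfile.mkdtemp()
    path = os.path.join(d, "pack")
    with open(path, "w") as w:
        w.write("object data")
    r = open(path)                     # handle opened before the unlink
    os.unlink(path)                    # name is gone now
    assert not os.path.exists(path)
    data = r.read()                    # data still readable via the handle
    r.close()
    return data
```

This works transparently on ext4; on a naive FUSE adapter that deletes the backing object on unlink, the read fails.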
Regarding agents in particular, I'm hopeful that someone (which is quite possibly us), builds a FUSE-as-a-service primitive that's simple enough to use that the vast majority of developers don't have to worry about these things.
> you need to support the ability to unlink a file from the file system and keep its data around. Want to run "vim", you need the ability to do renames and hard links
Those seem like pretty basic POSIX filesystem features to be fair. Awkward, sure... there's also awkwardness like symlinks, file locking, sticky bits and so on. But these are just things you have to implement. Are there gotchas that are inherent to FUSE itself rather than FUSE implementations?
These are basic POSIX features, but I think the high-level point that Kurt is trying to make is that building a FUSE file system signs you up for a nearly unlimited amount of compatibility work (if you want to support most applications) whereas their approach (just do a loopback ext4 fs into a large file) avoids a lot of those problems.
My expectations are that in 2026 we will see more and more developers attempt to build custom FUSE file systems and then run into the long tail of compatibility pain.
tl;dr it doesn't. I'm not sure what they're planning in this capacity (I haven't checked out sprites myself), but I would guess that it's going to be a function of "snapshots" as a mechanism to give multiple clients ephemeral write access to the same disk.
It's tiered, they have local nvme that gets written back to object storage.
npm install hasn't bothered me, but I know of people with massive npm issues who would like faster first installs. Fortunately, it's incrementally quicker after that.
The storage performs pretty well for running claude + my dev. It'll improve immensely in the next few months, though. We should be able to get near native NVMe speeds for the working storage set on reads/writes/flush/fua.
Ok so, "running" sprite status has had some cache consistency issues. You're not being charged for idle sprites, but they may show as "running" even when you're not using them. The UX has improved, and it reliably shows what you expect. Some of the existing sprites need an environment upgrade, but you'll see those improve over the next few days.