Professional procrastinator here. All the tips in the post are common knowledge and, sadly, not very good (especially in the long run).
The only thing that works for many people is to skip the motivation part and embrace the rather uncomfortable principle of "action before motivation."
The flow state will come. I believe it arises independently of motivation. Motivation just tricks us into believing that everything we do should bring joy.
It will — but not right now; we need to dive in first.
Yeah, it's sort of the no-pain-no-gain approach, but I like it. My best hack is time logging: if I specifically label breaks, procrastination, or production failures, I at least feel like I'm more in control. I like the act of blogging about it, though. It never fails to pick me up knowing I'm not alone in suffering from procrastination, and I'm always happy to read thoughts about it.
If we only have two files, A and its duplicate B stored as a diff against A, this works pretty well. Even if the user deletes A, the OS could just apply the diff to the file on disk, unlink A, and point B at that file.
But if we have A and two different diffs B1 and B2, then try to delete A, it gets a little murkier. Either you do the above process and recalculate B2's diff so it's relative to B1, or you keep A's original content floating around on disk, not linked to any file.
Similarly, if you try to modify A, you'd need to recalculate the diffs for all the duplicates. Alternatively, you could do version tracking and pin each duplicate's diff to a specific version of A. Then every file would have a chain of diffs stretching back to the original content of the file. Complex, but it could be useful.
It's certainly an interesting concept but might be more trouble than it's worth.
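For what it's worth, the version-chain idea can be sketched in a few lines of Python. This is a toy, line-based model (all names here are made up; a real filesystem would work at the block level): a file is its base content plus a chain of deltas, and reading it means replaying the chain.

```python
import difflib

def make_delta(old, new):
    # A delta is a list of ops: copy a range from the old version,
    # or emit literal lines taken from the new version.
    delta = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
        if tag == "equal":
            delta.append(("copy", i1, i2, None))
        else:
            delta.append(("data", i1, i2, new[j1:j2]))
    return delta

def apply_delta(old, delta):
    out = []
    for tag, i1, i2, seg in delta:
        out.extend(old[i1:i2] if tag == "copy" else seg)
    return out

class DeltaChain:
    """A file stored as its original content plus a chain of diffs."""
    def __init__(self, base):
        self.base = base      # original lines
        self.deltas = []      # one delta per committed version
    def commit(self, new):
        self.deltas.append(make_delta(self.materialize(), new))
    def materialize(self, upto=None):
        # Replay the chain (optionally only the first `upto` deltas).
        cur = self.base
        for d in (self.deltas if upto is None else self.deltas[:upto]):
            cur = apply_delta(cur, d)
        return cur
```

The downside the parent points out shows up immediately: every read replays the whole chain, so you'd want periodic "squashing" of old deltas in practice.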
ZFS does this by de-duplicating at the block level, not the file level. It means you can do what you want without needing to keep track of a chain of differences between files. Note that de-duplication on ZFS has had issues in the past, so there is definitely a trade-off. A newer version of de-duplication sounds interesting, but I don't have any experience with it: https://www.truenas.com/docs/references/zfsdeduplication/
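A toy illustration of block-level dedup (not ZFS's actual implementation, just the shape of the idea): split data into fixed-size blocks, hash each block, store each unique block once, and keep files as lists of block references.

```python
import hashlib

class BlockStore:
    """Toy block-level dedup: files are lists of hashes into a shared block table."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # hash -> block bytes (each unique block stored once)
        self.files = {}    # name -> list of block hashes

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)   # dedup: store only if new
            refs.append(h)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])
```

Deleting or modifying one file never touches another file's data; shared blocks just need reference counting before they can be freed, which is why there's no chain of diffs to maintain.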
VAST storage does something like this. Most storage arrays identify identical blocks by hash and store each only once; VAST instead uses a content-aware hash, so hashes of similar blocks are also similar. It stores a reference block for each unique hash, and when new data comes in and is hashed, the most similar reference block is used as the base for byte-level deltas. In practice this works extremely well.
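A very rough sketch of the idea (VAST's actual hash is proprietary; here a crude sampled-byte signature stands in for "similar blocks get similar hashes", and blocks are assumed to be equal-sized):

```python
class SimilarityStore:
    """Toy similarity dedup: match new blocks to the nearest reference
    block by a cheap signature, then store only a byte-level delta."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.refs = []      # full reference blocks
        self.entries = []   # ("ref", idx, None) or ("delta", ref_idx, delta)

    def _sig(self, block):
        return block[::8]   # crude stand-in for a content-aware hash

    def _similarity(self, a, b):
        sa, sb = self._sig(a), self._sig(b)
        return sum(x == y for x, y in zip(sa, sb)) / len(sa)

    def put(self, block):
        best, best_sim = None, 0.0
        for i, ref in enumerate(self.refs):
            s = self._similarity(block, ref)
            if s > best_sim:
                best, best_sim = i, s
        if best is not None and best_sim >= self.threshold:
            # Store only the bytes that differ from the reference block.
            delta = [(i, b) for i, (b, r) in enumerate(zip(block, self.refs[best])) if b != r]
            self.entries.append(("delta", best, delta))
        else:
            self.refs.append(block)
            self.entries.append(("ref", len(self.refs) - 1, None))
        return len(self.entries) - 1

    def get(self, idx):
        kind, ref_idx, delta = self.entries[idx]
        if kind == "ref":
            return self.refs[ref_idx]
        out = bytearray(self.refs[ref_idx])
        for i, b in delta:
            out[i] = b
        return bytes(out)
```

The linear scan over references is obviously not what a real array does (that's what the similarity hash index is for), but it shows why near-duplicate blocks end up costing only a few delta bytes.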
ZFS: "The main benefit of deduplication is that, where appropriate, it can greatly reduce the size of a pool and the disk count and cost. For example, if a server stores files with identical blocks, it could store thousands or even millions of copies for almost no extra disk space." (emphasis added)
APFS shares blocks, so only blocks that have changed stop being shared. Since a block is the smallest atomic unit in a FS (except maybe an inode), that's the best level of granularity to expect.
I also created and maintain a Russian "newspeak" dictionary: https://github.com/alamzin/az/