Not OP, but I have been using borg backup [1] against Hetzner Storage Box [2]
Borg backup is a good tool in my opinion and has everything that I need (deduplication, compression, mountable snapshots).
Hetzner Storage Box is nothing fancy but good enough for backups, and it is considerably cheaper than the alternatives (I pay about 10 EUR/month for 5 TB of storage).
Before that I was using s3cmd [3] to back up to an S3 bucket.
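For reference, a borg + Storage Box setup like the one above can be sketched roughly as follows (the username, hostname, and paths are placeholders; check the Hetzner docs for the exact SSH port and address of your box):

```shell
# Placeholder repo URL for a Hetzner Storage Box (adjust user/host/port)
export BORG_REPO='ssh://u123456@u123456.your-storagebox.de:23/./backups/laptop'

# One-time: create an encrypted, deduplicating repository
borg init --encryption=repokey-blake2

# Nightly: compressed, deduplicated archive named by host and date
borg create --compression zstd --stats \
    ::'{hostname}-{now:%Y-%m-%d}' ~/Documents

# Mount any snapshot as a read-only filesystem to browse/restore files
borg mount ::laptop-2024-01-01 /mnt/backup
```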
Just this weekend, my backup tool went rogue and exhausted my quota on rsync.net (some bad Borg configuration on my part). I emailed them, and they promptly added 100 GB of storage for a day so that I could recover the situation. Plus, their product has been rock solid for the few years I've been using them.
If you don't, a good heuristic would be to check how much you pay per GB: if it's less than a cent, you probably did. The ones that come with support are typically a shade above a cent per GB.
Just a note of caution: sync != backup. When I was younger and dumber, I had my own rsync cron script to do a nightly sync of my documents to a remote server. One day I noticed files were gone from my local drive; I think there was block corruption on the disk itself and the files were dropped from the filesystem, or something like that. The nightly rsync propagated the deletions to the remote "backup."
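To make the failure mode concrete: a plain mirror happily propagates deletions, whereas dated, hard-linked snapshots keep old copies around. A rough sketch (host and paths are placeholders):

```shell
# A plain mirror: local data loss becomes remote data loss on the next run
rsync -a --delete ~/Documents/ backup:/srv/docs/

# Safer: keep a dated snapshot per run; unchanged files are hard-linked
# against the previous snapshot, so deletions don't destroy history
TODAY=$(date +%F)
rsync -a --delete \
    --link-dest=/srv/snapshots/latest \
    ~/Documents/ backup:/srv/snapshots/"$TODAY"/
ssh backup "ln -sfn /srv/snapshots/$TODAY /srv/snapshots/latest"
```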
Depending on the use case, rclone can expose an S3 endpoint via `rclone serve s3` and route it to another protocol, e.g. SFTP.
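A minimal sketch of that, assuming an SFTP remote named `mysftp` has already been set up via `rclone config` (the keys and port here are placeholders):

```shell
# Serve the SFTP remote through a local S3-compatible endpoint;
# top-level directories of the remote appear as buckets
rclone serve s3 mysftp: \
    --addr 127.0.0.1:8080 \
    --auth-key MYACCESSKEY,MYSECRETKEY

# Any S3 client can then point at it, e.g. listing buckets:
aws s3 ls --endpoint-url http://127.0.0.1:8080
```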
I mention it not to shill rsync.net, but to shill rclone, because when I discovered it I was even more impressed with it.
Obviously, having to run a command and do some amount of plumbing is different from a service just providing that API at the outset, so the applicability will differ for different users, but still, rclone is very cool!
Happy to email you, if that's better, but is this because of unsustainable competition in the space or the tremendous volatility in consumption that object storage customers bring to the table?
I ask because in this current market, I would imagine investing in storage infrastructure is painful, but then I wonder, you are still in the storage infrastructure space anyways, so it likely has to do with the user behavior or user expectations or both.
rsync.net and rclone are great; my brain understood restic more easily than borg for local backups over USB (YMMV), and plain old `rsync --archive` is most excellent wrt preserving file mod times and the like.
There is 100% a difference between "dead data" (e.g. movie.mp4) and "live data" (e.g. a git directory with `chmod` attributes). S3 and similar often don't preserve attributes and metadata without a special secondary pass, even though the `md5` might be the same.
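For the "live data" case, rsync's archive mode plus a couple of extra flags covers most of that metadata. A sketch (the destination path is a placeholder):

```shell
# -a (--archive) expands to -rlptgoD: recurse, preserve symlinks,
# permissions, mod times, group, owner, and device/special files.
# -X preserves extended attributes, -A preserves ACLs.
rsync -aXA --info=progress2 ~/projects/ /mnt/usb/projects/
```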
You can do restic + rsync.net too, if you didn't know. `rclone serve restic --stdio` on their end exposes the restic REST server over stdio; then, with some fiddling, you tell restic to use rclone as the repository address and configure its rclone program to actually invoke `ssh rsync.net`.
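A sketch of that wiring, based on restic's rclone-backend override (`user@rsync.net` and the repo path are placeholders; check the restic docs for the exact invocation on your host):

```shell
# restic normally runs rclone locally; overriding rclone.program with an
# ssh command makes `rclone serve restic --stdio` run on the remote instead
restic -o rclone.program='ssh user@rsync.net rclone' \
    -r rclone:backups/restic-repo init

restic -o rclone.program='ssh user@rsync.net rclone' \
    -r rclone:backups/restic-repo backup ~/Documents
```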
Again, as with my other comments, I am not affiliated with rsync.net; rclone is just cool. You can use the same trick with any host that has ssh and rclone installed.
I have used Arq for way over a decade. It does incremental encrypted backups and supports a lot of storage providers. Also supports S3 object lock (to protect against ransomware). It’s awesome!
(Arq developer here) By default Arq tries to be unobtrusive. Edit your backup plan and slide the “CPU usage” slider all the way to the right to make it go faster.
Restic to almost any cloud storage provider. It works perfectly: it is client-side encrypted, with easily configurable retention policies. I have been using it happily for many years (and have also restored files from the backups).
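A minimal sketch of such a setup, with a retention policy (the endpoint, bucket, and password file are placeholders):

```shell
export RESTIC_REPOSITORY='s3:https://s3.example.com/my-backups'
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"

restic init                  # one-time: creates the encrypted repository
restic backup ~/Documents    # incremental, client-side encrypted

# Retention policy: keep 7 daily, 4 weekly, 12 monthly snapshots,
# and prune unreferenced data from the repository
restic forget --prune \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 12
```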
They are very similar in ROM footprint (esp: 128K vs mqjs: 100K) and minimum RAM (esp: 8K vs mqjs: 10K), but spec coverage needs to be examined in detail to see the actual difference.
That is what charity shops do in the UK. You don't make money, since your stuff is donated, but the charity does when they resell it. Works surprisingly well.
I have been trained in machine learning and have started to notice a few early-stage CTOs asking for advice on how best to start an ML project.
It seems many are tasked with implementing an AI/ML roadmap for their start-up. But getting into ML gives them a bit of impostor syndrome, with all the jargon and technical-sounding experts out there.
They are all very capable individuals, and I feel that this barrier to entry is unjustified given all the tools and models readily available. So, with some friends, we are trialling a 1.5-hour workshop to cover a few basics and demystify what ML is and what it can do. See https://lu.ma/06ec3e24
We designed this workshop to be as impactful as possible. If you are an experienced software engineer getting into ML, consider joining us.
If the initial feedback is good, we are thinking of working toward a 6-week "ML cohort-based masterclass" to help CTOs define a 6-18 month ML roadmap. This would include 1-on-1 sessions with ML experts and access to curated exercises to do with the cohort.