

> duplicacy is also free and open source for just the CLI version

Oh, and the default archive object size is comically small for modern-day usage and will cause significant issues if you're not using object storage (say, if your target is a plain remote directory). After just a few backups of a system with several hundred GB, you can end up with a "cripples standard Linux filesystem tools" number of files in a single directory.

And of course, there's no way to switch or migrate object sizes. On spinning rust, it significantly impacts performance.
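For a sense of scale, here's a rough back-of-the-envelope sketch. The per-volume sizes and the target path are illustrative assumptions, not Duplicati's actual defaults:

```sh
# Hypothetical numbers: a few hundred GB split into small fixed-size archive
# volumes, all dropped into one flat directory on the backup target.
echo $(( 500 * 1024 / 50 ))          # 500 GB at 50 MB per volume -> 10240 files
echo $(( 500 * 1024 * 1024 / 5120 )) # 500 GB at 5 MB per volume  -> 102400 files

# Counting what's actually there without tripping over shell globbing or
# ls sorting; /mnt/backup-target is a placeholder path.
find /mnt/backup-target -maxdepth 1 -type f | wc -l
```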

The Duplicati team has exactly zero appreciation for the fact that backup software should be like a filesystem: the most stable, reliable, predictable piece of software your computer runs.

Also, SSD users should be aware that Duplicati assembles each archive object on the local filesystem.

In general there seemed to be a lot of local database issues where the database could become corrupted, you'd have no idea, and, worse, a lot of situations seemed to be unrecoverable - even doing a rebuild from the 'remote' archive would error out or otherwise not work.

"If you interrupt the initial backup, your archive is corrupted, but silently, so you'll do months of backups, maybe even rely upon having those backups" was what made me throw up my hands in disgust. I don't know if it's still a thing; I don't care. Any backup software that allows such a glaring bug to persist for months, if not years, has completely lost my trust.

Throughout its history, it's had a lot of weird, fatal problems that the dev team has shown little interest in tracking down, while there is endless interest in chasing yet another storage provider or other shiny things.

Duplicati has been in desperate need of an extended feature freeze and someone to comb through the forums and GitHub issues looking for critical archive-destroying or corrupting bugs.
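A sidenote on the SSD point: if the temporary volume assembly has to happen somewhere, one generic mitigation is to stage it on a RAM-backed tmpfs instead of the SSD. This is a minimal sketch assuming the backup tool honors the standard TMPDIR environment variable; whether Duplicati does, or needs its own temp-folder setting instead, is an assumption you'd have to verify for your version.

```sh
# Create a RAM-backed scratch area sized to hold a few in-flight archive volumes.
sudo mkdir -p /mnt/backup-tmp
sudo mount -t tmpfs -o size=2g tmpfs /mnt/backup-tmp

# Assumption: the backup process respects TMPDIR for its temporary files;
# "run-backup" is a placeholder for whatever command actually starts the job.
TMPDIR=/mnt/backup-tmp run-backup
```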

I strongly advise people to not rely on Duplicati.

I've looked around quite a bit too, but did you actually use restic and duplicacy? They ate my RAM quite heavily - on data sets that weren't that huge, they caused the machine to freeze up by exhausting the RAM - and I stopped using them a year or so ago.

I've come to the conclusion that Borg and zfs are the backup solutions to use (it's better to run multiple reliable, independent implementations). The latter is quite fast because, being a filesystem itself, it knows what changed between incremental backups, unlike other utilities that have to scan the entire data set to figure out what changed since the last run.

You can run a 1GB-memory instance and attach HDD-based block storage (far cheaper; Vultr and AWS both offer it) for a cheap zfs remote target. Ubuntu gets zfs running easily by simply installing the zfsutils-linux package. If you need a lot of space, there's a provider that gives you a zfs target at $0.015/GB but with a 4TB minimum commitment; it's also a good Borg target at the same price but with a 100GB minimum yearly commitment.

A Hetzner storage box and BorgBase seem good for that too.
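For anyone wanting to try the zfs-as-remote-target idea, here's a minimal sketch of what it looks like in practice, assuming the machine being backed up also runs zfs. The pool names, dataset names, hostnames, and device path are made up for illustration; the commands themselves (apt, zpool, zfs snapshot/send/receive) are standard.

```sh
# On the remote box (e.g. a small 1GB-memory VPS with cheap HDD block storage attached):
sudo apt install zfsutils-linux          # Ubuntu ships zfs in this package
sudo zpool create backup /dev/vdb        # /dev/vdb = the attached block device

# On the machine being backed up (which must itself use zfs):
sudo zfs snapshot tank/home@2024-06-01

# First run: send the full snapshot to the remote pool.
sudo zfs send tank/home@2024-06-01 | ssh root@backup-host zfs receive backup/home

# Later runs: send only the blocks that changed between the two snapshots -
# this is why zfs incrementals are fast; the filesystem already knows the delta.
sudo zfs snapshot tank/home@2024-06-02
sudo zfs send -i tank/home@2024-06-01 tank/home@2024-06-02 \
  | ssh root@backup-host zfs receive backup/home
```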
