Want to skip this post and go straight to the repository? Bye.
No filesystem for cavemen
ZFS features such as pooled storage, copy-on-write, snapshots, built-in data integrity checks, compression and encryption truly elevate your backup solution of choice, especially after years of using EXT4/NTFS/APFS like a caveman.
For the purists stepping out of that cave, software like TrueNAS poses too much overhead, so they hack together solutions tailored specifically to their needs.
zfsbud
(I realize it's corny, but you have to name that GitHub repository somehow...)
Having recently gone through that phase myself, let me save you some time by introducing a bash script I wrote that caters to all my ZFS needs. Its aim is to simplify common ZFS tasks by figuring out the boring stuff while staying verbose about the process and potential problems.
The features include:
- Creation of a snapshot of one or multiple datasets
- Rotation (selective deletion) of older snapshots of one or multiple datasets while keeping 8 daily, 5 weekly, 13 monthly and 6 yearly snapshots
- Replication of one or multiple datasets to a local or remote location through ZFS send
- Smart/automatic handling of initial & consecutive send operations
- Works with encrypted datasets
- Dry run mode for output testing without actual changes being made
- Optional logging
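To give a feel for what snapshot creation boils down to, here is a minimal sketch. The prefix `zfsbud_` and the timestamp format are assumptions for illustration only; the script's actual default is controlled by its `--snapshot-prefix|-p` option.

```shell
#!/usr/bin/env bash
# Sketch of creating a prefixed, timestamped snapshot, dry-run style.
# "zfsbud_" and the date format are assumed here, not taken from the script.

dataset="local_pool/dataset1"          # dataset to snapshot
prefix="zfsbud_"                       # assumed snapshot prefix
stamp="$(date +%Y%m%d%H%M%S)"          # assumed timestamp format
snapshot="${dataset}@${prefix}${stamp}"

# In a dry run we only print the command instead of executing it.
echo "zfs snapshot ${snapshot}"
```

The prefix matters later: rotation only ever touches snapshots that carry it, so manually created snapshots are left alone.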
A practical example of usage
If you know your way around ZFS, go ahead and look at the code and usage examples in this repo: github.com/gbytedev/zfsbud. Otherwise, stick around and check out this simple example:
Example setup
Local machine
- local_pool
- dataset1
- dataset2
Remote machine
- remote_pool
Initial sending of two datasets to the remote
zfsbud.sh --send remote_pool --initial --rsh "ssh user@server -p22" local_pool/dataset1 local_pool/dataset2
Now the remote pool has received both datasets along with all their snapshots.
- If the destination is local, remove --rsh.
- If you wish to create a fresh snapshot before sending, add --create|-c.
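Under the hood, an initial replication roughly corresponds to piping a full replication stream into `zfs receive` on the destination. The sketch below composes such a command for one dataset; the snapshot name is made up and the exact flags zfsbud uses internally may differ.

```shell
#!/usr/bin/env bash
# Illustration of what an initial send roughly translates to:
# a full replication stream (-R) of the newest snapshot, piped over
# the remote shell into "zfs receive". Names mirror the example setup;
# the snapshot name is hypothetical.

src="local_pool/dataset1"
latest_snap="zfsbud_20240101000000"   # hypothetical newest snapshot
rsh="ssh user@server -p22"            # remote shell, as passed via --rsh
dest_pool="remote_pool"

# ${src##*/} strips the pool name, leaving just "dataset1".
cmd="zfs send -R ${src}@${latest_snap} | ${rsh} zfs receive ${dest_pool}/${src##*/}"
echo "$cmd"   # printed instead of executed, dry-run style
```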
Consecutive sending of a dataset to the remote
zfsbud.sh --send remote_pool --rsh "ssh user@server -p22" local_pool/dataset1
This is basically the previous command without the --initial flag. The script figures out the most recent snapshot the source and destination have in common and sends all changes from there up to the newest source snapshot.
- If you wish to create a fresh snapshot before sending, add --create|-c.
- If you wish to rotate (selectively remove) old snapshots created with zfsbud, add --remove-old|-r.
- If the destination is local, remove --rsh.
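The "most recent common snapshot" logic can be sketched in a few lines of bash. The snapshot lists below are made up, ordered oldest to newest as `zfs list -t snapshot -o name -s creation` would print them; the real script parses that output rather than hardcoded arrays.

```shell
#!/usr/bin/env bash
# Sketch: find the newest snapshot present on both sides, then build
# an incremental send (-I) from it up to the newest source snapshot.
# Snapshot names are hypothetical.

source_snaps=(zfsbud_day1 zfsbud_day2 zfsbud_day3 zfsbud_day4)
dest_snaps=(zfsbud_day1 zfsbud_day2)

common=""
for s in "${source_snaps[@]}"; do
  for d in "${dest_snaps[@]}"; do
    [[ "$s" == "$d" ]] && common="$s"   # later matches overwrite earlier ones
  done
done

newest="${source_snaps[-1]}"
# Incremental send from the last common snapshot up to the newest one:
echo "zfs send -I @${common} local_pool/dataset1@${newest}"
```

Because only the delta between `@${common}` and the newest snapshot travels over the wire, consecutive sends stay cheap even for large datasets.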
Create a snapshot of two datasets, rotate old snapshots and backup to remote
zfsbud.sh -c -r -s remote_pool -e "ssh user@server -p22" local_pool/dataset1 local_pool/dataset2
This rotates snapshots by creating a new snapshot for each dataset and selectively removing old snapshots; the changes are then sent to the remote. This would be a good command to put into a cron task running once a day.
- In addition to the newest snapshot, the most recent common snapshot plus 8 daily, 5 weekly, 13 monthly and 6 yearly snapshots are kept.
- Only snapshots made by this script are deleted; the snapshot prefix defining what is a zfsbud snapshot can be overridden with --snapshot-prefix|-p <cute_name_>.
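The prefix-aware part of rotation can be sketched as follows. For brevity this keeps only the newest 8 candidates, standing in for the full daily/weekly/monthly/yearly scheme; snapshot names are made up, and the real script inspects actual creation dates.

```shell
#!/usr/bin/env bash
# Simplified rotation sketch: only snapshots carrying the zfsbud prefix
# are deletion candidates; the newest 8 of those are kept.
# Names are hypothetical; real retention buckets are more elaborate.

prefix="zfsbud_"
snaps=(manual_backup zfsbud_01 zfsbud_02 zfsbud_03 zfsbud_04 zfsbud_05 \
       zfsbud_06 zfsbud_07 zfsbud_08 zfsbud_09 zfsbud_10)

candidates=()
for s in "${snaps[@]}"; do
  [[ "$s" == "$prefix"* ]] && candidates+=("$s")   # skip foreign snapshots
done

keep=8
delete_count=$(( ${#candidates[@]} > keep ? ${#candidates[@]} - keep : 0 ))
to_delete=("${candidates[@]:0:delete_count}")      # oldest first

for s in "${to_delete[@]}"; do
  echo "zfs destroy local_pool/dataset1@${s}"      # dry-run style output
done
```

Note that `manual_backup` never enters the candidate list: anything without the prefix is invisible to rotation, which is what makes it safe to mix zfsbud snapshots with manual ones.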
Dry run & logging
--dry-run|-d prints the calculated/assumed output without making any actual changes. This is highly recommended before running the script against a new setup.
--log|-l tells the script to log to the home directory, while --log-path|-L <path/to/file> allows specifying a custom file path.
You can find the script in this repo: github.com/gbytedev/zfsbud. Let me know if you find it useful and how you would improve it.
Comments
After reviewing it and using it in production, I must say this seems to be the most useful ZFS send and backup rotation shell script out there. Any plans on implementing customized rotation periods on top of what is implemented by default? Thanks for contributing this!
Really glad you found it so useful! Yes, customizable rotation intervals are already on the todo list. If you have any more ideas, make sure to convert them into GitHub tickets.
Really neat! Works as advertised. I couldn't find a way to resume partial transfers though?
The resume functionality is already included in the master branch and will be included in the next release.
Looks promising... thanks for that!
Would it be too much to ask to get it converted for use within Unraid (Slackware)?
I am a novice and failed to get it running.
This script works great. Does it clean up the destination snapshots per the conf file as well? I didn't see many notes on that. Otherwise I am digging this. Thank you for your hard work!
@T3CCH I'm happy you enjoy it!
To clean up the destination snapshots, you'd have to run `zfsbud -r` on the destination. You'd have to be careful, however, not to delete the last common snapshot. Maybe it's worth looking into - feel free to create a feature request on GitHub.