Why would anyone use a single passphrase for several datasets?
I generally do not endorse reusing passwords, but when setting up a ZFS filesystem for home use, it is not unheard of to use a single (secure) passphrase across several datasets. After all, the reason for splitting a pool into multiple datasets is granular control over their properties, not the resulting fun of having to type in multiple passphrases.
Obviously, the more usual way of going about it would be to use an encryption root for all descending datasets, but that may not play along with your topology. You could also use a keystore, or simply protect one dataset with a passphrase and unlock all the others with keys stored within it. These methods have their downsides, however, and for home use I feel more confident knowing that everything I need to unlock my data lives inside my head.
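For reference, here is a minimal sketch of the encryption-root alternative; the pool and dataset names are placeholders, not anything from my setup:
# Sketch of the encryption-root approach (tank/secure is a placeholder name)
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secure
zfs create tank/secure/home    # children inherit the parent's encryption root
zfs create tank/secure/media
zfs load-key tank/secure       # a single prompt unlocks everything below the root
zfs mount -a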
Script it
So I wrote a simple bash script that asks for a passphrase, unlocks all (or only the specified) datasets, and optionally mounts them. You can find it in this repo: github.com/gbytedev/zfs-multi-mount.
The script asks for a passphrase and goes dataset by dataset, attempting to load each key. As soon as it encounters a dataset whose key cannot be loaded with that passphrase, it prompts for a new one.
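To illustrate the idea (this is a simplified sketch, not the actual script; it assumes the locked datasets use keylocation=prompt and skips the error handling the real script has):
#!/usr/bin/env bash
# Sketch: try one passphrase against every locked dataset,
# asking for a new passphrase whenever the current one fails.
read -rs -p "Passphrase: " pass; echo
for ds in $(zfs list -H -o name,keystatus,keylocation | awk '$2 == "unavailable" && $3 == "prompt" {print $1}'); do
  until echo "$pass" | zfs load-key "$ds" 2>/dev/null; do
    read -rs -p "Passphrase for $ds: " pass; echo
  done
done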
Examples of usage
Load keys of all datasets and mount them
zfs-multi-mount.sh
Load keys for specific datasets and mount them
zfs-multi-mount.sh pool/dataset1 pool/dataset2 pool/dataset3
Load keys without mounting the datasets
zfs-multi-mount.sh --no-mount
Unlocking ZFS datasets during boot time using a systemd service
Data that is needed during system start needs to be decrypted during the boot process. An example of this is an encrypted home directory.
The script called by the following service will have to live in some unencrypted location. In this example, it is /opt/scripts.
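For instance, assuming you have cloned the repo and want to use the path referenced below (the destination only has to match the ExecStart line of the service):
sudo mkdir -p /opt/scripts
sudo install -m 0755 zfs-multi-mount.sh /opt/scripts/zfs-multi-mount.sh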
Create the systemd service file
/etc/systemd/system/zfs-load-key.service
[Unit]
Description=Import keys for all datasets
DefaultDependencies=no
Before=zfs-mount.service
Before=systemd-user-sessions.service
After=zfs-import.target
OnFailure=emergency.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/scripts/zfs-multi-mount.sh --systemd --no-mount

[Install]
WantedBy=zfs-mount.service
The [Unit] section tells the service to be run after ZFS pools are imported and before the datasets are mounted. If the script errors out (because of too many authentication attempts, or because the datasets cannot be found), the user will be dropped to an emergency shell.
In [Service], the unit is instructed to run the script, which in turn is told to only load the keys instead of also mounting the datasets (--no-mount|-n, since mounting is handled by the ZFS mount service) and that it is being run within a systemd context (--systemd|-s).
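If you want to sanity-check the unit before enabling it, systemd can lint it for you (the path is the one created above):
systemd-analyze verify /etc/systemd/system/zfs-load-key.service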
Enabling the new service
Now all that is left is to enable the service:
systemctl enable zfs-load-key.service
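After the next reboot you can confirm that the service loaded the keys; replace tank with your own pool name:
zfs get -r keystatus tank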
You can find the script in this repo: github.com/gbytedev/zfs-multi-mount. Let me know if you find it useful and how you would improve it.
Feel free to look at my other ZFS bash script that takes care of snapshotting, replication and rotation: zfsbud.