Bug #13795

open

Encrypted ZFS is not compatible with OpenZFS

Added by Jing Zhang about 2 years ago. Updated over 1 year ago.

Status: New
Priority: Normal
Assignee: -
Category: zfs - Zettabyte File System
Start date:
Due date:
% Done: 0%
Estimated time:
Difficulty: Medium
Tags:
Gerrit CR:
External Bug:

Description

An encrypted ZFS dataset created by illumos-gate cannot be imported by OpenZFS on Linux.

Steps:

In Hipster:
1. zpool create apool <vdev>
2. zfs create -o encryption=aes-128-ccm -o keyformat=passphrase apool/aes
3. Write some files.

In Linux:
4. zpool import -l apool, and enter the correct passphrase.
5. I got an EIO error, and the dataset 'apool/aes' has no files.

This is because OpenZFS includes 'os_projectused_dnode' when calculating the objset HMACs, while illumos does not, so the 'local_mac' values computed by 'zio_crypt_do_objset_hmacs' do not match.
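
For context, the failure surfaces when the local MAC recomputed at read time no longer matches the one stored in the objset block. Below is a condensed sketch of that verification step (modeled on the logic around zio_crypt_do_objset_hmacs() in OpenZFS's zio_crypt.c; the wrapper function here is illustrative, not the literal code):

/*
 * Condensed sketch: recompute the objset MACs on read and compare
 * them with the values stored in objset_phys_t. If one platform
 * hashed os_projectused_dnode into local_mac and the other did not,
 * the second bcmp() fails and the read errors out (surfacing to
 * userland as EIO).
 */
static int
verify_objset_macs(zio_crypt_key_t *key, objset_phys_t *osp, uint_t datalen,
    boolean_t byteswap)
{
        uint8_t portable_mac[ZIO_OBJSET_MAC_LEN];
        uint8_t local_mac[ZIO_OBJSET_MAC_LEN];
        int ret;

        ret = zio_crypt_do_objset_hmacs(key, osp, datalen, byteswap,
            portable_mac, local_mac);
        if (ret != 0)
                return (ret);

        if (bcmp(portable_mac, osp->os_portable_mac,
            ZIO_OBJSET_MAC_LEN) != 0 ||
            bcmp(local_mac, osp->os_local_mac, ZIO_OBJSET_MAC_LEN) != 0)
                return (SET_ERROR(ECKSUM));

        return (0);
}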

Reference: Project dnode should be protected by local MAC
https://github.com/openzfs/zfs/commit/7b30ee6bafe91ebd3b34433ef3b943fd07a98cea#diff-b55cbc66d9f61ed47a13d41dfab8c2b0e078813a2d0aa5da9008356324833018

diff --git a/usr/src/uts/common/fs/zfs/zio_crypt.c b/usr/src/uts/common/fs/zfs/zio_crypt.c
index 47c104e642..9aa42b33ff 100644
--- a/usr/src/uts/common/fs/zfs/zio_crypt.c
+++ b/usr/src/uts/common/fs/zfs/zio_crypt.c
@@ -1298,15 +1298,27 @@ zio_crypt_do_objset_hmacs(zio_crypt_key_t *key, void *data, uint_t datalen,
        }

        /* add in fields from the user accounting dnodes */
-       ret = zio_crypt_do_dnode_hmac_updates(ctx, key->zk_version,
-           should_bswap, &osp->os_userused_dnode);
-       if (ret)
-               goto error;
+       if (osp->os_userused_dnode.dn_type != DMU_OT_NONE) {
+               ret = zio_crypt_do_dnode_hmac_updates(ctx, key->zk_version,
+                   should_bswap, &osp->os_userused_dnode);
+               if (ret)
+                       goto error;
+       }

-       ret = zio_crypt_do_dnode_hmac_updates(ctx, key->zk_version,
-           should_bswap, &osp->os_groupused_dnode);
-       if (ret)
-               goto error;
+       if (osp->os_groupused_dnode.dn_type != DMU_OT_NONE) {
+               ret = zio_crypt_do_dnode_hmac_updates(ctx, key->zk_version,
+                   should_bswap, &osp->os_groupused_dnode);
+               if (ret)
+                       goto error;
+       }
+
+       if (osp->os_projectused_dnode.dn_type != DMU_OT_NONE &&
+           datalen >= OBJSET_PHYS_SIZE_V3) {
+               ret = zio_crypt_do_dnode_hmac_updates(ctx, key->zk_version,
+                   should_bswap, &osp->os_projectused_dnode);
+               if (ret)
+                       goto error;
+       }

        /* store the final digest in a temporary buffer and copy what we need */
        cd.cd_length = SHA512_DIGEST_LENGTH;
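
Note the 'datalen >= OBJSET_PHYS_SIZE_V3' guard on the new project dnode hunk: os_projectused_dnode only exists in the version-3 objset_phys_t layout, so the size check keeps the code from hashing past the end of an older, smaller objset block. For reference, the three layout sizes as defined in OpenZFS's sys/dmu_objset.h:

/*
 * objset_phys_t on-disk sizes (from OpenZFS sys/dmu_objset.h).
 * V1: meta dnode only; V2: adds the user/group accounting dnodes;
 * V3: adds os_projectused_dnode, which the hunk above guards on.
 */
#define OBJSET_PHYS_SIZE_V1     1024
#define OBJSET_PHYS_SIZE_V2     2048
#define OBJSET_PHYS_SIZE_V3     4096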

#1

Updated by Jason King almost 2 years ago

While the diff applies and builds cleanly, unfortunately any encrypted datasets w/ ZFS projects created prior to applying the patch will then fail to import (failing w/ EIO).

It seems likely we'll need a way to indicate on disk whether the project dnode should be included or not (depending on when the dataset was created).

#2

Updated by Jason King over 1 year ago

What I'm working on validating/testing is an approach where we 'upgrade' the dataset to explicitly indicate which hash is in use (the old/errata hash, or the correct hash that includes all three dnodes). On datasets that have not been upgraded, we accept either hash on read, and write the old/errata hash (for OpenZFS, I'll propose writing out the correct hash -- basically preserving the current status quo).

For upgrading, a bit in objset_phys_t->os_flags will indicate whether the upgrade has been done. When 'upgraded', another bit in os_flags (there are 61 bits currently available, plus several bytes of reserved/unused space in the current objset_phys_t, so I don't think scarcity is a concern) will indicate which hash is being used. On upgraded datasets, only the indicated hash is accepted. The upgrade process will not change which hash is used; it just records which one was in use prior to the upgrade.
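
As a rough illustration only (the flag names and bit positions below are placeholders, not committed values), the upgrade check might look like:

/*
 * Hypothetical os_flags bits sketching the proposal; the names and
 * bit positions are placeholders, not committed values.
 */
#define OBJSET_FLAG_MAC_UPGRADED         (1ULL << 62) /* upgrade check done */
#define OBJSET_FLAG_MAC_INCLUDES_PROJECT (1ULL << 61) /* correct hash in use */

/*
 * One-time check when a dataset is first opened after the change:
 * record which hash the dataset was already using, without changing it.
 */
static void
objset_mac_upgrade(objset_phys_t *osp, boolean_t uses_correct_hash)
{
        if (osp->os_flags & OBJSET_FLAG_MAC_UPGRADED)
                return; /* already upgraded */

        if (uses_correct_hash)
                osp->os_flags |= OBJSET_FLAG_MAC_INCLUDES_PROJECT;
        osp->os_flags |= OBJSET_FLAG_MAC_UPGRADED;
}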

A dataset property will be created to allow switching the hash used between the two. This should allow operators to move to the interoperable version at their convenience. The initial value of the property will reflect the hash currently in use. This will also allow OpenZFS to be interoperable with illumos distros that won't (yet) have this fix.

Additionally, we may wish to create an SPA errata for any zpool where we encounter non-upgraded datasets w/ the legacy hash. That way we can include a message informing the operator that there are datasets on the zpool whose hash they may wish to change (at a suitable time).

#3

Updated by Jason King over 1 year ago

A slight change in the proposed solution:

We'll use two bits in os_flags. The first bit indicates whether the dataset has been 'upgraded' -- when set, it means that a second bit in os_flags controls which hash to use for read/write. The 'upgrade' process just detects which hash is in use pre-upgrade and sets the second bit to reflect it (the intention is that the same fix will go into OpenZFS) -- it should be a very quick check when a dataset is first opened after this change. The bit controlling which hash is used is then managed by a dataset property.

Systems without this proposed change simply ignore these objset_phys_t flags and keep using whatever hash they have historically used. The net result is that after this change everything still works as it did before, except that an operator now has the option to switch the hash at their convenience (or not at all). Datasets using the same hash value will then be usable across platforms.
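
Roughly, the read-side acceptance logic would then be (a hypothetical sketch; compute_local_mac() and the flag names are placeholders for whatever the final change uses):

/*
 * Hypothetical read-side MAC acceptance under the proposal. The flag
 * names and the compute_local_mac() helper are placeholders; the
 * helper is assumed to compute local_mac either with or without
 * os_projectused_dnode, per its boolean argument.
 */
static int
accept_local_mac(zio_crypt_key_t *key, objset_phys_t *osp, uint_t datalen,
    boolean_t byteswap)
{
        uint8_t mac[ZIO_OBJSET_MAC_LEN];

        if (osp->os_flags & OBJSET_FLAG_MAC_UPGRADED) {
                /* Upgraded: only the hash indicated by the flag is valid. */
                boolean_t incl = !!(osp->os_flags &
                    OBJSET_FLAG_MAC_INCLUDES_PROJECT);
                compute_local_mac(key, osp, datalen, byteswap, incl, mac);
                if (bcmp(mac, osp->os_local_mac, ZIO_OBJSET_MAC_LEN) != 0)
                        return (SET_ERROR(ECKSUM));
                return (0);
        }

        /* Not upgraded: accept either hash on read. */
        compute_local_mac(key, osp, datalen, byteswap, B_TRUE, mac);
        if (bcmp(mac, osp->os_local_mac, ZIO_OBJSET_MAC_LEN) == 0)
                return (0);
        compute_local_mac(key, osp, datalen, byteswap, B_FALSE, mac);
        if (bcmp(mac, osp->os_local_mac, ZIO_OBJSET_MAC_LEN) == 0)
                return (0);
        return (SET_ERROR(ECKSUM));
}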
