Feature #1748


desire support for reguid in zfs

Added by Garrett D'Amore about 12 years ago. Updated about 12 years ago.

zfs - Zettabyte File System


There are times when it would be nice to change the GUID of a pool in ZFS. This happens when we want to use an underlying technology to "dd" a disk (e.g. for installation from a golden master image), or when we want to import a clone of a pool on a system that already has the original imported.

We propose therefore a new command, "zpool reguid <pool>", which will change the GUID of a pool.

For reasons relating to consistency (and the problem of dealing with failures otherwise), we will only support this on pools which are healthy and online. So you can reguid rpool, but you won't be able to reguid a pool that is exported. This may seem a bit backwards, but it allows us to achieve the functionality with the least impact on the overall code base, without introducing any difficult correctness problems if some vdevs are not available at the time we reguid the pool.
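The healthy-and-online precondition can be sketched as a simple gate in front of the GUID update. This is an illustrative simulation only, with hypothetical names (`pool_t`, `pool_reguid`), not the actual ZFS implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pool states for illustration only. */
typedef enum { POOL_ONLINE, POOL_DEGRADED, POOL_EXPORTED } pool_state_t;

typedef struct {
    pool_state_t state;
    uint64_t     guid;
} pool_t;

/*
 * Sketch of the proposed gate: refuse to change the GUID unless the
 * pool is imported and healthy, so the change can be applied as one
 * consistent update across all vdev labels.
 */
int
pool_reguid(pool_t *p, uint64_t new_guid)
{
    if (p->state != POOL_ONLINE)
        return (-1);    /* refuse: exported or unhealthy pool */
    p->guid = new_guid; /* in ZFS this reaches every label in one txg */
    return (0);
}
```

The point of the gate is that every vdev label is reachable when the change is made, so no label can be left holding a stale GUID.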

Actions #1

Updated by Mark Musante about 12 years ago

The arc caches data based on the spa guid, and "aux" devices (spare & l2arc) use the guid to determine pool membership. There is also some interaction with FMA. This makes re-guid-ing a live pool somewhat problematic, so my preference would be for an import option instead. e.g. "zpool import -g tank".

Actions #2

Updated by Garrett D'Amore about 12 years ago

We are well aware of the use of the GUID in the ARC. To fix this, George and I came up with a new ephemeral GUID that lives only in kernel memory and is created for the purposes of the ARC at import time; the ARC has been changed accordingly. I have diffs for this already, and they are tested and working well. The first version of the webrev is up at - there are some updates to this that I will post shortly (thanks to updates from George), but fundamentally the reguid logic, including the updates to the ARC code, is there. No problems at all. :-)
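The ephemeral-GUID idea can be sketched like this (a simulation with hypothetical names; the real change is in the webrev). The pool keeps two identifiers: the persistent GUID written to the vdev labels, and an in-core "load" GUID minted fresh at import time. The ARC keys its buffers by the load GUID, so rewriting the persistent GUID never invalidates the cache:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: field and function names are hypothetical. */
typedef struct {
    uint64_t spa_guid;      /* persistent, stored in the vdev labels */
    uint64_t spa_load_guid; /* ephemeral, exists in kernel memory only */
} spa_sim_t;

static uint64_t load_guid_src = 1;

void
spa_sim_import(spa_sim_t *spa)
{
    spa->spa_load_guid = load_guid_src++;  /* fresh at every import */
}

uint64_t
arc_cache_key(const spa_sim_t *spa)
{
    return (spa->spa_load_guid);           /* ARC never sees spa_guid */
}

void
spa_sim_reguid(spa_sim_t *spa, uint64_t new_guid)
{
    spa->spa_guid = new_guid;              /* ARC key is unaffected */
}
```

Because the load GUID is never written to disk, changing the on-disk GUID of a live pool has no effect on what the ARC uses to identify its buffers.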

For the record, we started by thinking of doing the reguid at import time, but the problem with that approach was how to handle the situation where one or more vdevs are unavailable. We can import a pool that is missing vdevs just fine (e.g. half of a mirror), but then we find ourselves in a chicken-and-egg situation dealing with those offline vdevs at the time we want to reguid. It's also simpler to do the reguid after the pool is imported, as we can then make use of the "atomicity" of ZFS updates to change the GUID, whereas to do it at import time we would have to add a lot of logic to do this properly before the pool was imported.

So, while it's counter-intuitive to reguid the online pool, it turns out to be far less problematic. The ARC concern you raised turns out to be fairly trivial to address, as you can see from my diffs.
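The "atomicity" point above can be sketched as a staged commit (hypothetical names; a simulation of riding ZFS's normal transactional machinery): the new GUID is staged, and only the sync step makes it visible, so readers see either the old value or the new one, never an intermediate state:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: a stand-in for committing a change at txg sync. */
typedef struct {
    uint64_t guid;          /* value visible to the rest of the system */
    uint64_t pending_guid;  /* staged by the reguid request */
    int      have_pending;
} txg_pool_t;

void
reguid_stage(txg_pool_t *p, uint64_t new_guid)
{
    p->pending_guid = new_guid;
    p->have_pending = 1;
}

void
txg_sync(txg_pool_t *p)     /* analogue of a txg sync committing state */
{
    if (p->have_pending) {
        p->guid = p->pending_guid;
        p->have_pending = 0;
    }
}
```

Doing the change at import time would mean reimplementing this all-or-nothing behavior before the transactional machinery is available, which is exactly the extra logic the comment above is referring to.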

Actions #3

Updated by Garrett D'Amore about 12 years ago

Oh, wrt aux devices and guid use: George and I analyzed those code paths, and they make use of functions that inquire about the guid under the appropriate locks -- so when we change the guid, those code paths behave properly and see the new guid. In particular, they don't appear to "cache" the guid locally, unlike the ARC. So we're pretty sure we got this all right. :-)
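The distinction that mattered in this audit can be sketched as follows (hypothetical names, userspace simulation): aux-device-style code re-reads the GUID through an accessor under a lock every time it needs it, whereas a consumer that keeps a private copy goes stale when the GUID changes:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Sketch only: a locked accessor vs. a privately cached copy. */
static pthread_mutex_t guid_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t pool_guid = 1111;

uint64_t
spa_sim_guid(void)              /* aux-device style: always current */
{
    pthread_mutex_lock(&guid_lock);
    uint64_t g = pool_guid;
    pthread_mutex_unlock(&guid_lock);
    return (g);
}

void
spa_sim_set_guid(uint64_t g)    /* the reguid path */
{
    pthread_mutex_lock(&guid_lock);
    pool_guid = g;
    pthread_mutex_unlock(&guid_lock);
}
```

A consumer that saved the return value of the accessor before the change would still hold the old GUID -- which is why the ARC, which did keep its own copy, needed the ephemeral-GUID change while the aux-device paths did not.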

Actions #4

Updated by Garrett D'Amore about 12 years ago

Finally, wrt FMA... while we don't know what Oracle's code does, the current FMA code appears to be unaffected by these changes as well. A conversation with Eric about this and our own grok'ing of the code agreed on this point. In particular, FMA makes use of spa_guid() (just as the aux device code does), so it will see the new guid. If a fault ensues while a guid change is in flight there is a race, but that's to be expected. (We expect pool reguids to be very rare, and only done through administrative action.)

One other tidbit: for correlation of guids outside the kernel, we have added a sysevent so that observers can notice the guid change. Hopefully nobody is depending too much on the guid, though.
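The notification idea can be sketched as a simple observer callback (hypothetical names; the real mechanism is an illumos sysevent): when the GUID changes, an event carrying the old and new values is posted so userland consumers can re-correlate:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: an in-process analogue of posting a sysevent. */
typedef void (*guid_observer_t)(uint64_t old_guid, uint64_t new_guid);

static guid_observer_t observer;
static uint64_t seen_old, seen_new;

static void
note_change(uint64_t o, uint64_t n)    /* a registered observer */
{
    seen_old = o;
    seen_new = n;
}

void
pool_reguid_notify(uint64_t *guid, uint64_t new_guid)
{
    uint64_t old = *guid;
    *guid = new_guid;
    if (observer != NULL)
        observer(old, new_guid);       /* sysevent analogue */
}
```

Carrying both the old and the new value in the event is what lets an observer that indexed state by the old GUID find and update its records.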

Actions #5

Updated by Garrett D'Amore about 12 years ago

  • Status changed from New to Resolved
  • % Done changed from 90 to 100
  • Tags deleted (needs-triage)

Fixed in:

changeset: 13514:417c34452f03
tag: tip
user: Garrett D'Amore <>
date: Fri Nov 11 14:07:54 2011 -0800
1748 desire support for reguid in zfs
Reviewed by: George Wilson <>
Reviewed by: Igor Kozhukhov <>
Reviewed by: Alexander Eremin <>
Reviewed by: Alexander Stetsenko <>
Approved by: Richard Lowe <>

Actions #6

Updated by Mark Musante about 12 years ago

Excellent, thanks for the detailed explanation Garrett. Glad it works in online mode. I think it can work for the import case as well: in spa_load_impl() wait until the pool is loaded and only reguid if all devices are healthy.

Actions #7

Updated by Garrett D'Amore about 12 years ago

True, but in order to load the pool you need to have non-conflicting GUIDs, which leads to a chicken-and-egg problem for some of the use cases we wanted this for.

