Bug #14978 (Closed): ZFS autoexpand property should work for root pools

Added by Joshua M. Clulow 3 months ago. Updated about 1 month ago.

Status: Closed
Priority: Normal
Assignee: -
Category: zfs - Zettabyte File System
Start date:
Due date:
% Done: 100%
Estimated time:
Difficulty: Medium
Tags:
Gerrit CR:
External Bug:

Description

When the autoexpand property is set on a pool, and the underlying block device has grown in size at import time, we kick off an async task, SPA_ASYNC_AUTOEXPAND, in spa_import(). This task expands the pool to fill the larger size of the block device automatically.

For root pools, we end up going through spa_import_rootpool() instead of spa_import(), where there is no analogous operation. It seems we should schedule SPA_ASYNC_AUTOEXPAND during root pool import as well.
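For reference, the existing spa_import() path already requests the async task when autoexpand is set, roughly as in the sketch below; the proposal is for spa_import_rootpool() to do the equivalent once the root pool has been opened. This is a sketch of the pattern, not necessarily the exact committed change:

	/*
	 * Sketch: spa_import() schedules automatic expansion when the
	 * autoexpand property is on.  spa_import_rootpool() could
	 * request the same async task for the root pool.
	 */
	if (spa->spa_autoexpand) {
		spa_async_request(spa, SPA_ASYNC_AUTOEXPAND);
		spa_config_update(spa, SPA_CONFIG_UPDATE_POOL);
	}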

#1

Updated by Electric Monk 3 months ago

  • Gerrit CR set to 2355
#2

Updated by Joshua M. Clulow 3 months ago

Note that autoexpansion came in with:

commit 573ca77e53dd31dcaebef023e7eb41969e6896c1
Author: George Wilson <George.Wilson@Sun.COM>
Date:   Mon Jun 8 10:35:50 2009 -0700

    PSARC 2008/353 zpool autoexpand property
    6475340 when lun expands, zfs should expand too
    6563887 in-place replacement allows for smaller devices
    6606879 should be able to grow pool without a reboot or export/import
    6844090 zfs should be able to mirror to a smaller disk

References:

None of these references appear to mention intentionally excluding automatic expansion for root pools.

#3

Updated by Joshua M. Clulow about 1 month ago

Test 1: pre-installed cloud-style image, no autoexpansion

Used the illumos image-builder tool to create a pre-installed, cloud-style disk image. The image is ~1.5GB, and the VM disk it was written to is larger. Left autoexpand disabled on that pool.

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: Performing full ZFS device scan!
NOTICE: Original /devices path (/pseudo/lofi@1:b) not available; ZFS is trying an alternate path (/pci@0,0/pci1af4,2@5/blkdev@0,0:b)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: unknown

unknown console login: root
Password:
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@unknown:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default
root@unknown:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.12G   645M   507M        -     9.75G     3%    56%  1.00x    ONLINE  -

As expected, there was no automatic expansion.

Test 2: pre-installed cloud-style image, with autoexpansion

Same image construction process, this time with autoexpand enabled in the image.

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: Performing full ZFS device scan!
NOTICE: Original /devices path (/pseudo/lofi@1:b) not available; ZFS is trying an alternate path (/pci@0,0/pci1af4,2@5/blkdev@0,0:b)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: unknown

unknown console login: root
Password:
Nov  7 19:54:47 unknown login: ROOT LOGIN /dev/console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@unknown:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@unknown:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  10.9G   645M  10.2G        -         -     0%     5%  1.00x    ONLINE  -

As expected, the pool was automatically expanded by the time we were able to see it.

Test 3: ISO installer to create pool in-situ

Created a VM with a 2G disk. Booted from a read-only ISO with a basic OS installer. Installed the pool, leaving autoexpand disabled.

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: unknown

unknown console login: root
Password:
Nov  7 20:39:50 unknown login: ROOT LOGIN /dev/console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@unknown:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        0.49 GiB   no  no
-       c2t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@unknown:~# install-helios testing c2t0d0
locating ISO...
unknown_fstyp (no matches)
NODENAME: testing
POOL LAYOUT: c2t0d0
+ zpool create -f -O compression=on -R /altroot -B rpool c2t0d0
+ zfs create -o canmount=off -o mountpoint=legacy rpool/ROOT
+ zfs create -o canmount=noauto -o mountpoint=legacy rpool/ROOT/helios
+ mount -F zfs rpool/ROOT/helios /a
...
should be ok to reboot now
root@unknown:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default
root@unknown:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   642M  1022M        -      256M     0%    38%  1.00x    ONLINE  /altroot

Removed the CD image from the VM, leaving the mutable disk as-is. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: testing

testing console login: root
Password:
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   667M   997M        -         -     1%    40%  1.00x    ONLINE  -
root@testing:~# poweroff

Expanded the mutable disk to have an extra 10GB. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Hostname: testing

testing console login: root
Password:
Last login: Mon Nov  7 20:42:23 on console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device       11.72 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   653M  1011M        -     9.75G     1%    39%  1.00x    ONLINE  -
root@testing:~# zpool online -e rpool c1t0d0
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  11.4G   648M  10.7G        -         -     0%     5%  1.00x    ONLINE  -

As expected, no automatic expansion. Was able to expand manually.

Test 4: ISO installer to create pool in-situ, using autoexpand

Created a VM with a 2G disk. Booted from a read-only ISO with a basic OS installer. Installed the pool, enabling autoexpand.

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: unknown

unknown console login: root
Password:
Nov  7 20:45:06 unknown login: ROOT LOGIN /dev/console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@unknown:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        0.49 GiB   no  no
-       c2t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@unknown:~# install-helios testing c2t0d0
locating ISO...
unknown_fstyp (no matches)
NODENAME: testing
POOL LAYOUT: c2t0d0
+ zpool create -f -O compression=on -R /altroot -B rpool c2t0d0
+ zfs create -o canmount=off -o mountpoint=legacy rpool/ROOT
+ zfs create -o canmount=noauto -o mountpoint=legacy rpool/ROOT/helios
+ mount -F zfs rpool/ROOT/helios /a
...
should be ok to reboot now
root@unknown:~# zpool set autoexpand=on rpool
root@unknown:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@unknown:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   642M  1022M        -      256M     0%    38%  1.00x    ONLINE  /altroot

Removed the CD image from the VM, leaving the mutable disk as-is. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: testing

testing console login: root
Password:
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   645M  1019M        -         -     0%    38%  1.00x    ONLINE  -
root@testing:~# poweroff

Expanded the mutable disk to have an extra 10GB. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Hostname: testing

testing console login: root
Password:
Last login: Mon Nov  7 20:47:20 on console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device       11.72 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  11.4G   647M  10.7G        -         -     0%     5%  1.00x    ONLINE  -

As expected, the pool expanded automatically by the time we were able to see it.

Test 5: ISO installer to create mirrored pool in-situ, using autoexpand

Created a VM with two 2G disks. Booted from a read-only ISO with a basic OS installer. Installed the pool, enabling autoexpand.

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: unknown

unknown console login: root
Password:
Nov  7 20:52:00 unknown login: ROOT LOGIN /dev/console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@unknown:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        0.49 GiB   no  no
-       c2t0d0                  Virtio   Block Device        1.95 GiB   no  no
-       c3t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@unknown:~# install-helios testing c2t0d0 c3t0d0
locating ISO...
unknown_fstyp (no matches)
unknown_fstyp (no matches)
NODENAME: testing
POOL LAYOUT: mirror c2t0d0 c3t0d0
+ zpool create -f -O compression=on -R /altroot -B rpool mirror c2t0d0 c3t0d0
+ zfs create -o canmount=off -o mountpoint=legacy rpool/ROOT
+ zfs create -o canmount=noauto -o mountpoint=legacy rpool/ROOT/helios
+ mount -F zfs rpool/ROOT/helios /a
...
should be ok to reboot now
root@unknown:~# zpool set autoexpand=on rpool
root@unknown:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@unknown:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   643M  1021M        -      256M     0%    38%  1.00x    ONLINE  /altroot
root@unknown:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0

errors: No known data errors

Removed the CD image from the VM, leaving the mutable disks as-is. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: testing

testing console login: root
Password:
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        1.95 GiB   no  no
-       c2t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.62G   645M  1019M        -         -     0%    38%  1.00x    ONLINE  -

Expanded the mutable disks to each have an extra 10GB. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Hostname: testing

testing console login: root
Password:
Last login: Mon Nov  7 21:07:08 on console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device       11.72 GiB   no  no
-       c2t0d0                  Virtio   Block Device       11.72 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  11.4G   654M  10.7G        -         -     0%     5%  1.00x    ONLINE  -

As expected, both mirrored disks were able to expand, increasing the pool capacity.

Test 6: ISO installer to create raidz1 pool in-situ, using autoexpand

Created a VM with three 2G disks. Booted from a read-only ISO with a basic OS installer. Installed the pool, enabling autoexpand.

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: unknown

unknown console login: root
Password:
Nov  7 21:11:53 unknown login: ROOT LOGIN /dev/console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@unknown:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        0.49 GiB   no  no
-       c2t0d0                  Virtio   Block Device        1.95 GiB   no  no
-       c3t0d0                  Virtio   Block Device        1.95 GiB   no  no
-       c4t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@unknown:~# install-helios testing c2t0d0 c3t0d0 c4t0d0
locating ISO...
unknown_fstyp (no matches)
unknown_fstyp (no matches)
unknown_fstyp (no matches)
NODENAME: testing
POOL LAYOUT: raidz1 c2t0d0 c3t0d0 c4t0d0
+ zpool create -f -O compression=on -R /altroot -B rpool raidz1 c2t0d0 c3t0d0 c4t0d0
+ zfs create -o canmount=off -o mountpoint=legacy rpool/ROOT
+ zfs create -o canmount=noauto -o mountpoint=legacy rpool/ROOT/helios
+ mount -F zfs rpool/ROOT/helios /a
...
should be ok to reboot now
root@unknown:~# zpool set autoexpand=on rpool
root@unknown:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@unknown:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool     5G   983M  4.04G        -      512M     0%    19%  1.00x    ONLINE  /altroot
root@unknown:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0

errors: No known data errors

Removed the CD image from the VM, leaving the mutable disks as-is. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Configuring devices.
Hostname: testing

testing console login: root
Password:
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device        1.95 GiB   no  no
-       c2t0d0                  Virtio   Block Device        1.95 GiB   no  no
-       c3t0d0                  Virtio   Block Device        1.95 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool     5G   987M  4.04G        -      512M     0%    19%  1.00x    ONLINE  -

Expanded the mutable disks to each have an extra 10GB. Rebooted:

Oxide Helios Version rti/14978-0-gd2b3517f7f 64-bit (onu)
NOTICE: hma_svm_init: CPU does not support SVM
Hostname: testing

testing console login: root
Password:
Last login: Mon Nov  7 21:15:31 on console
The illumos Project     rti/14978-0-gd2b3517f7f November 2022
root@testing:~# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
-       c1t0d0                  Virtio   Block Device       11.72 GiB   no  no
-       c2t0d0                  Virtio   Block Device       11.72 GiB   no  no
-       c3t0d0                  Virtio   Block Device       11.72 GiB   no  no
root@testing:~# zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local
root@testing:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool    34G   991M  33.0G        -      512M     0%     2%  1.00x    ONLINE  -

As expected, all three raidz1 disks were able to expand, increasing the pool capacity.

#4

Updated by Joshua M. Clulow about 1 month ago

Some implementation notes:

  • For a regular cached or root pool with no /devices changes, the zfs file system module is loaded, but the zfs driver, which provides the ioctl control nodes and so on, has not been attached. That means we have not yet populated the somewhat dubious zfs_dip global, which we were previously using to deliver the device LUN expansion (DLE) sysevent. We therefore no longer use the DDI interface for that, since it requires a dev_info_t; this is OK because we are also in the gate.
  • The DLE sysevent handler in user mode, which is where the expansion work is actually done, now performs the check for a stale /dev path. Previously that check was not done until somebody ran zpool status as a privileged user. This should not have any impact beyond what was already happening in the system and, if anything, should make things a bit less confusing. (A rough sketch of the user-mode side follows below.)
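For illustration only, here is a minimal, hedged sketch of how the user-mode side of this can be done through libzfs. The helper name expand_vdev and its surroundings are hypothetical (the real handler lives in the syseventd ZFS module, matches the sysevent payload to a vdev, and checks that autoexpand is enabled); the libzfs call itself is the same operation as "zpool online -e":

#include <libzfs.h>

/*
 * Hypothetical helper: ask ZFS to online a vdev with the "expand"
 * flag, i.e. the same thing "zpool online -e <pool> <dev>" does.
 */
static int
expand_vdev(libzfs_handle_t *hdl, const char *pool, const char *dev)
{
	zpool_handle_t *zhp;
	vdev_state_t newstate;
	int ret;

	if ((zhp = zpool_open(hdl, pool)) == NULL)
		return (-1);

	ret = zpool_vdev_online(zhp, dev, ZFS_ONLINE_EXPAND, &newstate);

	zpool_close(zhp);
	return (ret);
}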
#5

Updated by Electric Monk about 1 month ago

  • Status changed from New to Closed
  • % Done changed from 0 to 100

commit b4fb003914e70b41d96dec8011864f6af1faf3ef
Author: Joshua M. Clulow <josh@sysmgr.org>
Date:   2022-11-08T20:54:43.000Z

    14978 ZFS autoexpand property should work for root pools
    Reviewed by: Andy Fiddaman <illumos@fiddaman.net>
    Reviewed by: Toomas Soome <tsoome@me.com>
    Approved by: Dan McDonald <danmcd@mnx.io>
