Bug #12894
root zpool with SLOG should be bootable
Status: Closed (100% done)
Description
After installing OmniOSce on a mirrored zpool built from two spindles, I attempted to add a SLOG device to the pool. I was met with this error upon doing so:
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
Tracing that error through libzfs led me to zfs_ioc_vdev_add(), which apparently enforces certain rules about vdevs added to a root pool. This is likely for the benefit of older bootloaders, which were incapable of locating the members of pools with all but the simplest of topologies. Looking at ZoL, they ripped out that limitation a long time ago: #1631 Remove the slog restriction on bootfs pools. Confident that loader was capable of handling a pool with a SLOG in it (as it can handle cases like a raidz2 root pool), I applied that patch to illumos and rebooted onto the resulting bits. I was able to add the log device to the pool as expected.
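For reference, the guard behind that error has roughly the following shape; this is a paraphrase from memory of the check in zfs_ioc_vdev_add(), not a verbatim quote of the illumos source, and the ZoL #1631 change (like the fix here) amounts to deleting it:

    /*
     * Sketch (from memory) of the root-pool guard in zfs_ioc_vdev_add().
     * If the pool has a bootfs set and the request is not purely
     * l2cache/spare devices, the vdev add is rejected outright; libzfs
     * then reports the "root pool can not have multiple vdevs or
     * separate logs" message seen above.
     */
    if (spa_bootfs(spa) != 0 && nl2cache == 0 && nspares == 0) {
            nvlist_free(config);
            spa_close(spa, FTAG);
            return (SET_ERROR(EDOM));
    }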
With that tested, I attempted to use `beadm activate` to switch back to my stock OmniOS BE. I was met with a new error:
set_bootfs: failed to set bootfs property for pool rpool: operation not supported on this type of pool
be_activate: failed to set bootfs pool property for rpool/ROOT/omnios-r151034h
Unable to activate omnios-r151034h. ZFS returned an error.
After more debugging, I traced my way to vdev_is_bootable(), whose negative return was causing the zpool set bootfs=<be> operation to fail (confirmed with a subsequent test performing that bootfs property assignment manually). Indeed, vdev_is_bootable() has similar checks to see if the pool is made up of a single top-level vdev. Again, ZoL had loosened that restriction a long time ago: in #317 Allow setting bootfs on any pool, they short-circuited the check to always pass (on non-illumos). This was followed by several updates to make it less reckless in the face of features later added to ZFS, ultimately landing on an implementation fairly similar to illumos, but without the root pool restriction:
https://github.com/openzfs/zfs/blob/ae7b167a98b48260554fd88cef022547283e125f/module/zfs/vdev.c#L4613-L4630
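Paraphrasing from memory (not a verbatim copy of either tree), the two implementations of vdev_is_bootable() differ essentially in one clause; the first commented clause below is what illumos still carries and what this change would remove:

    boolean_t
    vdev_is_bootable(vdev_t *vd)
    {
            if (!vd->vdev_ops->vdev_op_leaf) {
                    const char *vdev_type = vd->vdev_ops->vdev_op_type;

                    /*
                     * The illumos-only clause: a root vdev with more than
                     * one child (e.g. a mirror plus a separate log) is
                     * declared unbootable.  This is the restriction in
                     * question.
                     */
                    if (strcmp(vdev_type, VDEV_TYPE_ROOT) == 0 &&
                        vd->vdev_children > 1)
                            return (B_FALSE);

                    /* Both implementations still reject these vdev types. */
                    if (strcmp(vdev_type, VDEV_TYPE_MISSING) == 0 ||
                        strcmp(vdev_type, VDEV_TYPE_INDIRECT) == 0)
                            return (B_FALSE);
            }

            for (int c = 0; c < vd->vdev_children; c++) {
                    if (!vdev_is_bootable(vd->vdev_child[c]))
                            return (B_FALSE);
            }

            return (B_TRUE);
    }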
Like the restriction on vdev-add, this seems excessive with modern bootloaders, which can handle such pools.
Related issues
Updated by Patrick Mooney over 3 years ago
- Related to Bug #12901: loader: can not read zfs pool with slog removed added
Updated by Patrick Mooney over 3 years ago
Without the fix to Bug #12901: loader: can not read zfs pool with slog removed, care should be taken not to remove a SLOG device from such a pool, lest it become hopelessly unbootable.
Updated by Patrick Mooney over 3 years ago
With the #12901 fix in place, and the system bootable again, I checked adding/removing a slog from my mirrored rpool. Notably, it did not reuse the VDEV_HOLE instance which caused the #12901 problems. After removing the device for the second time, two hole entries were present in the pool topology (checked with zdb). With all of these devices (holes and slog) in the topology along with the mirror, I was still able to manipulate the bootfs property on the pool without issue.
Updated by Electric Monk about 3 years ago
- Status changed from In Progress to Closed
- % Done changed from 0 to 100
git commit 5623f66bff7c40c0d22bd5f3cbbb803965de64b9
commit 5623f66bff7c40c0d22bd5f3cbbb803965de64b9
Author: Patrick Mooney <pmooney@pfmooney.com>
Date:   2020-07-11T18:20:19.000Z

    12894 root zpool with SLOG should be bootable
    Reviewed by: Toomas Soome <tsoome@me.com>
    Reviewed by: Dan McDonald <danmcd@joyent.com>
    Reviewed by: Andy Fiddaman <andy@omniosce.org>
    Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
    Approved by: Dan McDonald <danmcd@joyent.com>
Updated by Patrick Mooney about 3 years ago
- Related to Bug #12981: vdev_hold prone to NULL dereference added
Updated by Jason King about 3 years ago
- Related to Bug #13020: bootfs_006_pos zfs test needs to be updated added