
Bug #10206

zpool labelclear -f <device> not working as expected

Added by Lou Picciano 9 months ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Start date:
2019-01-09
Due date:
% Done:

0%

Estimated time:
Difficulty:
Medium
Tags:
needs-triage

Description

I'm having some rpool issues and am trying to replace drives from the root pool as safely and conservatively as possible; the pool has started exhibiting spurious write errors. rpool is a simple two-disk mirror. disk0 is the likely culprit, though the system runs fine through all this and reboots without issue. (It's also interesting that zpool status never sees these errors, and zpool scrub progresses without error, though very slowly.)

First test: I wanted to be sure each of the pool's drives was independently bootable before replacing disk0. As disk1 was not bootable (see the installboot output below), I removed disk1 and replaced it. The replacement is now resilvering (again, this will take days at current estimates). I did not detach disk1 from the zpool first; in hindsight, one realizes this transgression...
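For what it's worth, the safer swap sequence would have been something like the following. This is only a sketch: the device names are hypothetical, and a DRYRUN variable makes the commands print instead of run.

```shell
# Safer drive-swap sequence for a two-disk mirror (sketch only; device
# names are hypothetical, DRYRUN=echo prints commands instead of running them).
DRYRUN=echo
$DRYRUN zpool detach rpool c2t1d0s0            # remove the old disk from the mirror first
# ...physically swap the drive, relabel/partition it, then:
$DRYRUN zpool attach rpool c2t0d0s0 c2t1d0s0   # attach the new disk; resilver starts
```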

Perhaps useful additional info: this drive had not been optimally partitioned for use in a zpool; its partitions were set up for our datapool. For example, the installboot command installed stage1 to slice 1:

be_do_installboot_walk: child 0 of 2 device c2t0d0s0
Command: "/usr/sbin/installboot -F -m -f //boot/pmbr //boot/gptzfsboot /dev/rdsk/c2t0d0s0"
Output:
bootblock written for /dev/rdsk/c2t0d0s0, 329 sectors starting at 1024 (abs 1280)
stage1 written to slice 0 sector 0 (abs 256)
stage1 written to master boot sector
be_do_installboot_walk: child 1 of 2 device c2t1d0s0
Command: "/usr/sbin/installboot -F -m -f //boot/pmbr //boot/gptzfsboot /dev/rdsk/c2t1d0s0"
Output:
bootblock written for /dev/rdsk/c2t1d0s0, 329 sectors starting at 1024 (abs 129544)
stage1 written to slice 1 sector 0 (abs 64260)
stage1 written to master boot sector

Perhaps as a result(?), this half of the mirror was not independently bootable. The objective here was to re-label and repartition it, then resilver it back into the rpool mirror.

(I have since resorted to dd-erasing the drive; is that the only way forward?)
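For anyone following along, the by-hand erase amounts to zeroing the label areas: ZFS keeps four 256 KiB labels per vdev, two at the front and two at the back. A sketch of clearing the front pair (device name hypothetical, DRYRUN just echoes):

```shell
# Zero the front ZFS labels L0 and L1 (2 x 256 KiB = 1024 x 512-byte sectors).
# Device name is hypothetical; DRYRUN=echo prints the command instead of running it.
DRYRUN=echo
$DRYRUN dd if=/dev/zero of=/dev/rdsk/c2t1d0s0 bs=512 count=1024
# Labels L2 and L3 live in the last 512 KiB of the device and would need
# a seek= to (device_sectors - 1024) to clear as well.
```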

So I'm not sure whether I'm reporting a bug here, or whether zpool labelclear is simply (subtly?) responding to an underlying fundamental error state on the drive.
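One way to tell which of the two it is would be to dump the labels directly before and after the labelclear attempt; zdb -l prints the four on-disk labels for a device. Again a dry-run sketch with a hypothetical device name:

```shell
# Inspect the four ZFS labels on the device, attempt the clear, then
# inspect again to see whether anything changed. Device name is
# hypothetical; DRYRUN=echo prints the commands instead of running them.
DRYRUN=echo
$DRYRUN zdb -l /dev/rdsk/c2t1d0s0
$DRYRUN zpool labelclear -f /dev/rdsk/c2t1d0s0
$DRYRUN zdb -l /dev/rdsk/c2t1d0s0
```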
