Bug #13733

open

zpool import not possible after slot change - disk "unavailable"

Added by Stephan Althaus over 2 years ago. Updated over 2 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Start date:
Due date:
% Done: 0%
Estimated time:
Difficulty: Medium
Tags:
Gerrit CR:
External Bug:

Description

Hello!

I have a 1-disk pool which I use only from time to time.

Normally I would expect that I can import the pool after installing the disk again (on a 'cold' machine, booting with --reconfigure); the slot should not be relevant.

Just to check that the disk and the ZFS label are OK, I mounted the pool on a Linux machine; everything is fine there.

On OI I can't import the pool; the "-f" flag does not help.

Details below.

Btw, just to be clear: I had this same error before I tried to import the pool with Linux, so the Linux import is not the origin of the problem here.

Any hints on how to dig into this further are welcome!

Thanks!

Stephan


# zpool import
Password:
   pool: bkp1t
     id: 4466948378057274312
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        bkp1t       FAULTED  corrupted data
          c8t2d0s0  UNAVAIL  corrupted data

# zdb -l /dev/rdsk/c8t2d0s0
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'bkp1t'
    state: 0
    txg: 26349
    pool_guid: 4466948378057274312
    errata: 0
    hostid: 758768731
    hostname: 'Fuji'
    top_guid: 15764649591111927753
    guid: 15764649591111927753
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 15764649591111927753
        path: '/dev/sda1'
        devid: 'id1,sd@n50000396b5803dd2/a'
        phys_path: '/pci@0,0/pci8086,c01@1/pci1734,11e4@0/sd@4,1:a'
        whole_disk: 0
        metaslab_array: 256
        metaslab_shift: 33
        ashift: 13
        asize: 987837759488
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
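Since all four labels read back cleanly (`labels = 0 1 2 3`) yet the device shows UNAVAIL, a common next step is to let `zpool import` re-scan the device directory with `-d` instead of trusting cached device paths, and to dump the label of every candidate slice. A minimal sketch; the commands are echoed here for reference rather than executed, since they require the actual disk attached and root privileges, and the slice names are examples:

```shell
# Re-scan /dev/rdsk instead of relying on cached device paths:
echo "zpool import -d /dev/rdsk"

# Dump the ZFS label of each candidate slice (names below are examples):
for dev in c8t2d0s0 c8t2d0; do
    echo "zdb -l /dev/rdsk/$dev"
done
```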
Actions #1

Updated by Stephan Althaus over 2 years ago

The error occurs only on the built-in AHCI SATA interface.

I have now tested two slots on an HBA, an LSI-based RAID controller in JBOD mode.
There I could import the pool; the disk is available.

I did not actually import the pool, in order to preserve the current state for further testing.

After connecting the disk back to a 'normal' AHCI port, the import is not possible, as stated in the original post.

Weird, no?

# /usr/lib/pci/pcieadm show-devs
BDF    TYPE          DRIVER   DEVICE
0/0/0  PCI           --       Xeon E3-1200 v3 Processor DRAM Controller
0/1/0  PCIe Gen 3x8  pcieb3   Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller
1/0/0  PCIe Gen 3x8  mr_sas1  MegaRAID SAS 2208
...
$ sudo zpool import
Password:
   pool: bkp1t
     id: 4466948378057274312
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:
# zdb -l /dev/rdsk/c10t2d1s0
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'bkp1t'
    state: 0
    txg: 26349
    pool_guid: 4466948378057274312
    errata: 0
    hostid: 758768731
    hostname: 'Fuji'
    top_guid: 15764649591111927753
    guid: 15764649591111927753
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 15764649591111927753
        path: '/dev/sda1'
        devid: 'id1,sd@n50000396b5803dd2/a'
        phys_path: '/pci@0,0/pci8086,c01@1/pci1734,11e4@0/sd@4,1:a'
        whole_disk: 0
        metaslab_array: 256
        metaslab_shift: 33
        ashift: 13
        asize: 987837759488
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
# init 5 && exit
updating /platform/i86pc/amd64/boot_archive (CPIO)
logout
Connection to fuji closed.
steven@dell6510:~$ ssh -YXC fuji
The illumos Project     illumos-6dcbfae4aa      April 2021
You have new mail.
# zpool import
Password:
   pool: bkp1t
     id: 4466948378057274312
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:
# zdb -l /dev/rdsk/c10t15d1s0
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'bkp1t'
    state: 0
    txg: 26349
    pool_guid: 4466948378057274312
    errata: 0
    hostid: 758768731
    hostname: 'Fuji'
    top_guid: 15764649591111927753
    guid: 15764649591111927753
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 15764649591111927753
        path: '/dev/sda1'
        devid: 'id1,sd@n50000396b5803dd2/a'
        phys_path: '/pci@0,0/pci8086,c01@1/pci1734,11e4@0/sd@4,1:a'
        whole_disk: 0
        metaslab_array: 256
        metaslab_shift: 33
        ashift: 13
        asize: 987837759488
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
        bkp1t         ONLINE
          c10t15d1s0  ONLINE
Actions #2

Updated by Stephan Althaus over 2 years ago

After importing the pool in the HBA slot, exporting it,
and re-testing the import in the AHCI slot, the issue is gone.

Another (different) issue: if the disks of two pools are swapped, the import shows errors.
"zpool import" should try harder to identify the 'right' disks.
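When disks have moved between slots (or between pools), importing by the pool's numeric identifier, as the `zpool import` action text above suggests, avoids any ambiguity from stale device paths in the labels. A minimal sketch; the commands are echoed for reference rather than executed, since they require the disk to be attached, and the alternate pool name is a hypothetical example:

```shell
# Pool GUID taken from the "zpool import" listing above.
POOL_ID=4466948378057274312

# Import by numeric identifier instead of by name; -f overrides the
# "last accessed by another system" (hostid mismatch) check.
echo "zpool import -f $POOL_ID"

# Optionally give the pool a new name on import (name is an example):
echo "zpool import -f $POOL_ID bkp1t_new"
```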
