Bug #15249


kern.notice vdev_disk_open: update devid from .. to ...

Added by David Stes 6 months ago. Updated 5 months ago.

Status: New
Priority: Low
Assignee: -
Category: -
Start date:
Due date:
% Done: 0%
Estimated time:
Difficulty: Medium
Tags:
Gerrit CR:
External Bug:

Description

To reproduce: install OI 2020.04 on a single disk

pkg update to OI 2022.11

On reboot there are now messages in /var/adm/messages (and on the console during boot):

zfs: kern.notice NOTICE: vdev_disk_open /dev/dsk/c4t0d0s1 : update devid from 'id1, blahblah' to 'id1, somethingelse'

Are these messages harmful, and what is the cause, please?

I checked zpool upgrade -v

and tried to run

zpool upgrade rpool1

which reports: pool 'rpool1' already has all supported features enabled

I have no real problem, so it seems the message can be ignored, but is there a cause or reason for these messages being logged?


Related issues

Related to illumos gate - Bug #14745: ZFS should handle unknown/invalid vdev devids gracefully (Closed, Hans Rosenfeld)
Related to illumos gate - Feature #14686: nvme should use namespace GUID for devid if available (Closed, Hans Rosenfeld)
Related to illumos gate - Bug #10622: ZFS should still check paths for devices that have no devid (Closed, Joshua M. Clulow, 2019-03-31)
Related to illumos gate - Bug #9683: Allow bypassing devid in vdev_disk_open() (Closed, Brad Lewis, 2018-07-27)
Related to illumos gate - Bug #12341: devid mismatch messages are too noisy (In Progress, Yuri Pankov)
Related to illumos gate - Bug #14816: Boot from NVMe scans all devices after 14688 (Closed, Andy Fiddaman)
#1

Updated by Robert Mustacchi 6 months ago

There likely is a known cause here. Can you share the output of the diskinfo command please?

#2

Updated by David Stes 6 months ago

diskinfo is not installed by default.

I did a pkg install diskinfo in the OI 2022 boot environment.
I cannot install diskinfo in the OI 2020 boot environment because the OI repository no longer has the old package, and diskinfo was not installed by the OI 2020 text installer.

# pkg list diskinfo
NAME (PUBLISHER)                                  VERSION                    IFO
diagnostic/diskinfo                               0.5.11-2022.0.0.21406      i--

Anyway here is the diskinfo output:

# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
SATA    c4t0d0                  WDC      WD5000AZLX-ZZZZZZZ  465.76 GiB   no  no 

The exact notice messages I am seeing also indicate it may have to do with

/etc/devices/devid_cache

Because the messages are:

Dec 11 10:00:16 openindiana genunix: [ID 390243 kern.info] Creating /etc/devices/devid_cache
Dec 11 10:01:27 openindiana zfs: [ID 844310 kern.notice] NOTICE: vdev_disk_open /dev/dsk/c4t0d0s1: devid mismatch: id1,sd@n50014ee212aaaaaa/b != id1,sd@SATA_____WDC_WD5000AZLX-7_____WD-WCCXXXXXXXXX/b
Dec 11 10:01:27 openindiana zfs: [ID 101897 kern.notice] NOTICE: vdev_disk_open /dev/dsk/c4t0d0s1: update devid from 'id1,sd@n50014ee212aaaaaa/b' to 'id1,sd@SATA_____WDC_WD5000AZLX-7_____WD-WCCXXXXXXXXX/b'
Dec 11 11:35:27 openindiana genunix: [ID 176336 kern.notice] devid register: devid for /pci@0,0/pci1028,98d@17/disk@0,0 does not match. stored: id1,sd@SATA_____WDC_WD5000AZLX-7_____WD-WCCXXXXXXXXX, new: id1,sd@n50014ee212aaaaaa.
Dec 11 11:35:27 openindiana zfs: [ID 101897 kern.notice] NOTICE: vdev_disk_open /dev/dsk/c4t0d0s1: update devid from 'id1,sd@SATA_____WDC_WD5000AZLX-7_____WD-WCCXXXXXXXXX/b' to 'id1,sd@n50014ee212aaaaaa/b'
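
Those notices can be pulled out of /var/adm/messages mechanically. A minimal sketch (the helper name is mine, and the sample file stands in for /var/adm/messages, since only a few log lines are reproduced here):

```shell
# Print "old -> new" devid pairs from vdev_disk_open update notices.
extract_devids() {
  sed -n "s/.*update devid from '\([^']*\)' to '\([^']*\)'.*/\1 -> \2/p" "$1"
}

# Sample input: one of the notice lines quoted above.
cat > /tmp/messages.sample <<'EOF'
Dec 11 10:01:27 openindiana zfs: [ID 101897 kern.notice] NOTICE: vdev_disk_open /dev/dsk/c4t0d0s1: update devid from 'id1,sd@n50014ee212aaaaaa/b' to 'id1,sd@SATA_____WDC_WD5000AZLX-7_____WD-WCCXXXXXXXXX/b'
EOF

extract_devids /tmp/messages.sample
```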
#3

Updated by David Stes 6 months ago

  • Related to Bug #14745: ZFS should handle unknown/invalid vdev devids gracefully added
#4

Updated by David Stes 6 months ago

  • Related to Feature #14686: nvme should use namespace GUID for devid if available added
#5

Updated by David Stes 6 months ago

  • Related to Bug #10622: ZFS should still check paths for devices that have no devid added
#6

Updated by David Stes 6 months ago

  • Related to Bug #9683: Allow bypassing devid in vdev_disk_open() added
#7

Updated by David Stes 6 months ago

Based on reading some other bug information, I tried setting the following parameter in /etc/system.d

In /etc/system.d/zfs

set zfs:vdev_disk_bypass_devid = 1

However, if I then boot the 2020.04 BE and then the 2022.11 BE, it still seems to "zpool import" the pool fine, but it always seems to update and change the devid.

With the above /etc/system.d modification it now prints the following line:

NOTICE: vdev_disk_open /dev/dsk/c4t0d0s1: update devid from "<none>" to "id1,sd@n50014ee212aaaaaa/b" 

I do not have any real problem with my simple setup with just one SATA disk; I just wonder what these messages mean.

Basically I think "zpool import" successfully imports my rpool but warns with NOTICE priority that the two BEs use different naming schemes for the devid.

In effect it also seems to update the zpool label.

The issue is easily reproducible by booting into the first BE, then into the second BE, and then back into the first BE: on those boots the system warns that the two BEs use different names for the devid.

#8

Updated by Jason King 6 months ago

It's likely due to this change:

commit 8118bef4ce6388ad51cc4ab94dbedc03621ee1e3
Author: Garrett D'Amore <>
Date: Thu Oct 14 12:40:17 2021 -0700

14765 SATL could decode page 83 for device IDs

Basically prior to this change, the system wasn't able to get the WWN (world wide name) of a SATA disk. The system prefers to use WWNs when available to create the devid, but when not, it tries to create one based on other information available from the disk (the 'SATA____WDC....' id).

The devids are also stored persistently, so it's just noting that what's on disk and what the kernel thinks the devid should be are different (because the newer kernels will use the WWN). It doesn't really harm anything as zfs and such sorts itself out (that wasn't always the case for boot disks, but that should all be fixed now).
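
The two naming schemes Jason describes are easy to tell apart by prefix; a rough sketch (the helper name and the prefix heuristic are mine for illustration, not part of illumos):

```shell
# Classify a devid string: WWN-based devids begin with "id1,sd@n" (an NAA
# world wide name), while INQUIRY-derived ones embed the vendor/model
# string, e.g. "id1,sd@SATA_____WDC...".
devid_scheme() {
  case "$1" in
    id1,sd@n*)    echo wwn ;;
    id1,sd@SATA*) echo inquiry ;;
    *)            echo unknown ;;
  esac
}

devid_scheme 'id1,sd@n50014ee212aaaaaa/b'                              # wwn
devid_scheme 'id1,sd@SATA_____WDC_WD5000AZLX-7_____WD-WCCXXXXXXXXX/b'  # inquiry
```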

#9

Updated by David Stes 6 months ago

Sounds logical, and then I guess this bug report is not really a bug and can be closed.

I have not experienced any real problem; the zpool open or update is working fine.

I suspected the messages were informational only.

#10

Updated by David Stes 6 months ago

  • Related to Bug #12341: devid mismatch messages are too noisy added
#11

Updated by David Stes 6 months ago

  • Related to Bug #14816: Boot from NVMe scans all devices after 14688 added
#12

Updated by David Stes 6 months ago

I added an additional SATA disk to my system (Dell Precision). The first disk was a 500GB Western Digital Blue, the new disk is a Seagate Barracuda 500GB disk which is said to be a 512e disk (physical sector size 4096 bytes, emulates logical sector size of 512 bytes).

My new diskinfo is:

# diskinfo
TYPE    DISK                    VID      PID              SIZE          RMV SSD
SATA    c5t0d0                  WDC      WD5000AZLX-1234567  465.76 GiB   no  no
SATA    c5t2d0                  ATA      ST500DM009-ABCDEFG  465.76 GiB   no  no 

The code for the devid using wwn (world wide name) seems to work fine for both the WDC SATA disk and the Seagate Barracuda SATA disk.

I did not have any difficulty getting this to work. prtvtoc prints a 512-byte sector size for both disks.

The command

zdb -C rpool

prints the wwn (world wide name) as devid and the full rpool mirror config.

Adding the Seagate Barracuda disk to the rpool went without problems using the command zpool attach.

My new rpool config is now

config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0

The SATA disks are attached internally.

# prtdiag | head
System Configuration: Dell Inc. Precision 3640 Tower
BIOS Configuration: Dell Inc. 1.18.0 09/02/2022

UEFI boot on the mirror works, although I have not yet tested what happens if I revert to an old kernel that uses the devid based on the disk INQUIRY string instead of the WWN.

#13

Updated by David Stes 5 months ago

If I reboot into the old 2020.04 OpenIndiana Illumos kernel, as could be expected, the vdev_disk_open messages appear for both disks (the Western Digital Blue and the Seagate Barracuda). However, the old kernel automatically updates the devid on the rpool mirror, and the mirror is imported automatically without any problems, it seems.

With the command

zdb -C rpool

it can be seen that with the old Illumos kernel the devid is based on the INQUIRY string of the SATA disks.

When rebooting back into the OpenIndiana Illumos 2022.11 kernel, the devids are changed back to the WWN (world wide name) and the import also seems to work fine.

So there is no real problem as far as I can see, except for those messages being logged during boot on the console and in the messages file.

#14

Updated by Igor Kozhukhov 5 months ago

You can remove /etc/zfs/zpool.cache and reboot; that will fix the warning on the console with the next reboot.
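
As a sketch of that suggestion (the helper and its optional path argument are mine for illustration; on a real system you would run it as root against /etc/zfs/zpool.cache and then reboot, after which the cache is rebuilt with the current devids when the pool is re-imported):

```shell
# Remove the zpool cache so the pool is rediscovered, and the cache
# rewritten, at the next import. Defaults to the real path; accepts an
# alternate path so the sketch can be exercised safely.
clear_zpool_cache() {
  cache="${1:-/etc/zfs/zpool.cache}"
  # Stale devids persist in this file; removing it forces rediscovery.
  [ -f "$cache" ] && rm "$cache"
  echo "cleared: $cache (reboot so the pool is re-imported)"
}
```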

