Feature #461
open missing /etc/zfs/zpool.cache
Description
I guess it's a regression from removing the Python dependencies from ZFS.
Steps to reproduce:
# rm /etc/zfs/zpool.cache
# init 6
# zdb
cannot open '/etc/zfs/zpool.cache': No such file or directory
# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool    67G  55.9G  11.1G  83%  1.00x  ONLINE  -
Updated by Garrett D'Amore over 12 years ago
This isn't a very good bug report, since your reproduction steps don't demonstrate a real problem to me.
You've deleted the cache file.
I'm not sure what you expected to happen?
I don't see any difference in behavior between b134 and illumos, but maybe I'm missing something here?
Updated by Piotr Jasiukajtis over 12 years ago
You're right, it's not related to illumos.
I deleted zpool.cache as part of a pool recovery; manually importing pools then recreated the cache file.
The issue here is that if you have only one pool (rpool), you have to create or import another pool in order to get zdb working again.
Maybe this should be an RFE for zpool or zdb to recreate the cache file if needed?
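For what it's worth, a sketch of two workarounds with current zpool/zdb (these options may not have been available in the build the reporter was running): resetting the cachefile property makes zpool rewrite the cache file, and zdb's -e flag reads pool configuration from the device labels instead of the cache.

```shell
# Recreate /etc/zfs/zpool.cache from an already-imported pool:
# setting the cachefile property causes zpool to rewrite the file.
zpool set cachefile=/etc/zfs/zpool.cache rpool

# Alternatively, have zdb bypass the cache file entirely:
# -e imports the pool configuration from the on-disk labels.
zdb -e rpool
```

Either avoids having to create or import a second pool just to get zdb working.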
Updated by Matt Lewandowsky over 12 years ago
- Tracker changed from Bug to Feature
Re-flagging as Feature due to request from estibi on IRC.
Updated by Garrett D'Amore over 12 years ago
- Priority changed from Normal to Low
zdb is actually a debugging & diagnostic tool. I agree there should be a way to recreate the zpool cache.
However, deleting your zpool.cache is a bad idea. There are many other state files (e.g. /etc/vfstab, /etc/passwd, /etc/name_to_major, /etc/driver_aliases) which, if deleted or corrupted, will cause bad things to happen. Special handling for this one data file may or may not be useful.
Upshot: don't delete that file!
Reducing this to "low" priority, since normal users should never need this feature.