Bug #419: zpool/zfs inconsistencies inside a non-global zone
Status: Closed
% Done: 100%

Description
Once a zone is installed and first booted, the zone shows the pool and file system backing its zonepath. After the zone is rebooted, they disappear. This is on OpenIndiana b147.
root@trim:~# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zones   464G  16.3G   448G     3%  1.00x  ONLINE  -
root@trim:~# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zones                16.3G   440G    67K  /zones
zones/trim            518M  4.49G    33K  /zones/trim
zones/trim/ROOT       518M  4.49G    31K  legacy
zones/trim/ROOT/zbe   518M  4.49G   518M  legacy
root@trim:~# zpool status
  pool: zones
 state: ONLINE
  scan: scrub repaired 0 in 0h8m with 0 errors on Tue Nov 2 13:11:32 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0

errors: No known data errors
root@trim:~#
root@trim:~# init 6
root@trim:~# You have running jobs
suser@trim:~$
[Connection to zone 'trim' pts/5 closed]
...
[Connected to zone 'trim' pts/5]
Last login: Mon Nov 15 11:02:47 on pts/5
OpenIndiana SunOS 5.11 oi_147 September 2010
suser@trim:~$ su -
OpenIndiana SunOS 5.11 oi_147 September 2010
root@trim:~# uptime
11:03am up 1 user, load average: 0.11, 0.07, 0.05
root@trim:~# zpool list
no pools available
root@trim:~# zpool status
no pools available
root@trim:~# zfs list
no datasets available
root@trim:~#
Not showing the zonepath pool/filesystem inside the zone might actually be a good thing, since the zone user can't create file systems inside the zonepath file system anyway; that way new file systems would not be tied to the zone's boot environment. If a ZFS file system is needed inside a zone, it may be best to create one in the global zone and then delegate that dataset to the zone (separate from zone/ROOT), as in the sketch below.
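A minimal sketch of that delegation approach, run from the global zone (the global# prompt is illustrative); it reuses the pool and zone names from the output above, and the dataset name zones/trim-data is hypothetical:

global# zfs create zones/trim-data
global# zonecfg -z trim "add dataset; set name=zones/trim-data; end"
global# zoneadm -z trim reboot

After the reboot, zfs list inside the zone should show zones/trim-data as a delegated dataset that the zone administrator can manage independently of the zone's boot environment.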
Updated by Garrett D'Amore over 11 years ago
- Project changed from site to illumos gate
Updated by Andrzej Szeszo over 11 years ago
Anil, are you using "ip-type: exclusive"? If so - do you mind testing what happens with "ip-type: shared"?
I am experiencing the same behaviour - no pools available / no datasets available after a reboot, but only when ip-type is set to exclusive.
A workaround is to add the ROOT dataset to the zone's list of delegated datasets:
zonecfg -z trim "add dataset; set name=zones/trim/ROOT; end"
Andrzej
Updated by Colin Ellis over 11 years ago
Additional info:
This issue also breaks rebooting of zones in the following manner:
the line
/usr/sbin/zfs list -H -t filesystem -o name $ZONEPATH_DS/ROOT
in /usr/lib/brand/shared/query
is invoked by zoneadmd (get_implicit_datasets() in vplat.c).
It fails with "dataset does not exist" and causes the zone to boot without its mounts.
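For reference, the same query can be run by hand in the global zone; assuming $ZONEPATH_DS expands to zones/trim for the zone above (a reasonable guess given its /zones/trim zonepath), a healthy lookup simply prints the dataset name, whereas the lookup made by zoneadmd fails with "dataset does not exist" when the bug triggers:

global# /usr/sbin/zfs list -H -t filesystem -o name zones/trim/ROOT
zones/trim/ROOT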
Updated by Anil Jangity over 11 years ago
Yes, these are ip-type exclusive. I don't have immediate access to a server where I can do the ip-type shared test; if that changes, I will update here.
Updated by Garrett D'Amore over 11 years ago
- Category set to cmd - userland programs
- Assignee set to Garrett D'Amore
- % Done changed from 0 to 50
I think the problem is memory corruption in zoneadmd. It appears that the pool_name temporary buffer is allocated with MAXNAMELEN (256) bytes, but is accessed using the larger MAXPATHLEN (1024), resulting in the corruption. Here's a possible fix:
--- a/usr/src/cmd/zoneadmd/vplat.c	Thu Jan 13 21:05:28 2011 -0800
+++ b/usr/src/cmd/zoneadmd/vplat.c	Mon Jan 17 17:20:29 2011 -0800
@@ -2515,10 +2515,10 @@
 		if (err != DLADM_STATUS_OK) {
 			zerror(zlogp, B_FALSE, "WARNING: unable to set "
 			    "pool %s to datalink %s", pool_name, dlname);
-			bzero(pool_name, MAXPATHLEN);
-		}
-	} else {
-		bzero(pool_name, MAXPATHLEN);
+			bzero(pool_name, sizeof (pool_name));
+		}
+	} else {
+		bzero(pool_name, sizeof (pool_name));
 	}
 	return (0);
 }
@@ -3046,7 +3046,7 @@
 		return (-1);
 	}
 
-	bzero(pool_name, MAXPATHLEN);
+	bzero(pool_name, sizeof (pool_name));
 	for (i = 0, dllink = dllinks; i < dlnum; i++, dllink++) {
 		err = dladm_set_linkprop(dld_handle, *dllink, "pool",
 		    NULL, 0, DLADM_OPT_ACTIVE);
@@ -4520,7 +4520,8 @@
 	}
 
 	/* Update saved pool name in case it has changed */
-	(void) zonecfg_get_poolname(handle, zone_name, pool_name, MAXPATHLEN);
+	(void) zonecfg_get_poolname(handle, zone_name, pool_name,
+	    sizeof (pool_name));
 
 	zonecfg_fini_handle(handle);
 	return (Z_OK);
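To make the failure mode concrete, here is a simplified, self-contained C sketch of the overflow pattern (not the actual vplat.c code): a buffer declared with MAXNAMELEN bytes but cleared with MAXPATHLEN bytes writes well past the end of the array and tramples adjacent memory, which is why the fix uses sizeof (pool_name).

#include <strings.h>
#include <sys/param.h>	/* MAXNAMELEN (256), MAXPATHLEN (1024) */

static void
clear_pool_name(void)
{
	char pool_name[MAXNAMELEN];

	/* buggy: zeroes MAXPATHLEN (1024) bytes in a 256-byte buffer */
	/* bzero(pool_name, MAXPATHLEN); */

	/* fixed: bounded by the buffer's declared size */
	bzero(pool_name, sizeof (pool_name));
}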
Updated by Colin Ellis over 11 years ago
Tested the patch with:
zoneadm -z <zone> reboot and the 'reboot' command inside the zone; both show datasets as expected.
A zoneadm halt/boot cycle was also confirmed as working.
Updated by Garrett D'Amore over 11 years ago
- Status changed from New to Resolved
- % Done changed from 50 to 100
- Estimated time set to 2.00 h
Pushed this fix today:
changeset: 13266:e573198ae730
tag: tip
user: Garrett D'Amore <garrett@nexenta.com>
date: Mon Jan 17 21:18:02 2011 -0800
description:
419 zpool/zfs incosistencies inside a non-global zone
Reviewed by: panamayacht@gmail.com
Approved by: gwr@nexenta.com