Bug #5160
zfsctl_snapshot_inactive() can leak a vnode hold
Status: Closed
Description
If zfsctl_snapshot_inactive() is called on a vnode with v_count > 1, it leaks a hold on the target vnode (vn) and directory vnode (dvp).
Steps to reproduce:
create a fs with 100 snapshots, e.g. test/snaps
have a thread do:
while true; do ls -l /test/snaps/.zfs/snapshot >/dev/null; done
have another thread do:
while true; do zfs promote test/clone; zfs promote test/snaps; done
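The steps above assume the datasets already exist. A setup sketch, assuming a pool named test is present (dataset and snapshot names are illustrative; the clone is needed for the promote loop):

```shell
# Create the filesystem and 100 snapshots the repro loops over.
zfs create test/snaps
for i in $(seq 1 100); do
    zfs snapshot "test/snaps@snap$i"
done
# A clone of one snapshot, so the two promotes can bounce the
# origin relationship back and forth between the datasets.
zfs clone test/snaps@snap1 test/clone
```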
use dtrace (with experimental D; available here: https://github.com/ahrens/illumos/tree/dpp) to delay & observe:
dtrace -w -xd \
  -n 'zfsctl_snapshot_inactive:entry{ if (args[0]->v_count > 1) trace(args[0]->v_count); self->vp=args[0];}' \
  -n 'gfs_vop_inactive:entry/callers["zfsctl_snapshot_inactive"]/{self->good=1; @[stack()]=count()}' \
  -n 'zfsctl_snapshot_inactive:return{if (self->good) self->good=0; else printf("bad return");}' \
  -n 'gfs_dir_lookup:return/callers["zfsctl_snapshot_inactive"] && self->vp->v_count > 1/{trace(self->vp->v_count)}' \
  -n 'vn_rele:entry/args[0] == (void*)0xffffff01dd42ce80ULL/{@[stack()]=count(); chill(100000);}'
The address is found by picking one of the outputs of this at random:
dtrace -n 'zfsctl_snapshot_inactive:entry{print(args[0]);}'
When you see "bad return", we have hit the bug. Then doing "zfs umount test/snaps" will fail with EBUSY.
Related issues
Updated by Alek Pinchuk about 7 years ago
- Related to Bug #5768: zfsctl_snapshot_inactive() can leak a vnode hold added