Bug #1796 (closed)

"ZFS HOLD" should not be used when doing "ZFS SEND" from a read-only pool

Added by Jim Klimov over 11 years ago. Updated almost 11 years ago.

Status: Resolved
Priority: Normal
Category: zfs - Zettabyte File System
Start date: 2011-11-19
Due date:
% Done: 0%
Estimated time:
Difficulty: Medium
Tags: needs-triage
Gerrit CR:
External Bug:

Description

I'm in the process of repairing a corrupted, unmirrored rpool. My machine crashes when trying to import the rpool in any read-write mode; however, I got it to import with the "-o readonly=on" option while booted from an oi_148a live USB.

My current idea is to evacuate all reachable data by "zfs send"ing the whole rpool dataset hierarchy to my redundant data pool. Afterwards, I intend to recreate rpool from scratch, with my current boot environment in place and copies=2.

I ran into an unexpected problem: modern "zfs send" uses the "zfs hold" feature to protect the snapshots from destruction. However, on a read-only pool these holds don't succeed, and the send never happens:

root@openindiana:~# zfs send -R rpool/ROOT/oi_148a@20111028-01 | \
zfs recv -vnFd pool/rpool-backup
cannot hold 'rpool/ROOT/oi_148a@20110317-03': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110319-01': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110322-01': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110401': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110430': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20111028-01': pool is read-only

IMHO this is a bug (Richard Elling thinks so too), and some checks are due in the zfs code: either skip "zfs hold" on a read-only pool, or don't bail out fatally when the holds fail (perhaps behind a command-line option?).

For a repair operation like this, it is a bad bug: a read-only import is nearly the only access I have to this pool, and without a working "zfs send" I have to use non-ZFS methods to back up and restore the rpool (and carefully track the zfs/zpool properties set on all of its datasets before and after re-creation). With my smallish rpool this is not a great problem, but when recovering systems with many datasets and a lot of data, it could be near-fatal.

Actions #1

Updated by Alexander Eremin over 11 years ago

Not sure that a command-line option is required, but this is a possible fix:

diff -r afa9f03c945e usr/src/lib/libzfs/common/libzfs_sendrecv.c
--- a/usr/src/lib/libzfs/common/libzfs_sendrecv.c    Thu Nov 17 14:34:33 2011 +0300
+++ b/usr/src/lib/libzfs/common/libzfs_sendrecv.c    Tue Dec 06 17:51:41 2011 +0300
@@ -962,6 +962,9 @@
     int error = 0;
     char *thissnap;

+    if (zfs_prop_get_int(zhp, ZFS_PROP_READONLY))
+        return (0);
+
     assert(zhp->zfs_type == ZFS_TYPE_SNAPSHOT);

     /*

Actions #2

Updated by Matthew Ahrens almost 11 years ago

The above diff is incorrect: it checks if the filesystem's readonly property is 'on'. You need to check if the pool is readonly.
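
For illustration, a check along the lines Matthew describes would consult the pool rather than the dataset property. A minimal sketch, assuming libzfs's zpool_get_prop_int() with ZPOOL_PROP_READONLY and the zhp->zpool_hdl handle being available at that point in libzfs_sendrecv.c (not necessarily the fix as committed):

    /*
     * Sketch only: skip taking snapshot holds when the pool itself was
     * imported read-only -- holds are on-disk state and cannot be
     * written to such a pool.
     */
    if (zpool_get_prop_int(zhp->zpool_hdl, ZPOOL_PROP_READONLY, NULL))
            return (0);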

Actions #3

Updated by Christopher Siden almost 11 years ago

  • Assignee set to Christopher Siden
Actions #4

Updated by Christopher Siden almost 11 years ago

  • Status changed from New to In Progress
Actions #5

Updated by Eric Schrock almost 11 years ago

  • Status changed from In Progress to Resolved

changeset: 13749:df4cd82e2b60
tag: tip
user: Christopher Siden <>
date: Thu Jul 12 05:32:45 2012 -0700

description:
1796 "ZFS HOLD" should not be used when doing "ZFS SEND" from a read-only pool
2871 support for __ZFS_POOL_RESTRICT used by ZFS test suite
2903 zfs destroy -d does not work
2957 zfs destroy -R/r sometimes fails when removing defer-destroyed snapshot
Reviewed by: Matthew Ahrens <>
Reviewed by: George Wilson <>
Approved by: Eric Schrock <>
