Bug #8940

Sending an intra-pool resumable send stream may result in EXDEV

Added by Ezomori Nozomu over 1 year ago. Updated over 1 year ago.

Status: Closed
Priority: Low
Category: zfs - Zettabyte File System
Start date: 2017-12-28
Due date:
% Done: 100%
Estimated time:
Difficulty: Bite-size
Tags:

Description

"zfs send -t <token>" for an incremental send should be able to resume successfully when sending to the same pool: a subtle issue in zfs_iter_children() doesn't currently allow this.

Because resuming from a token requires a "guid" -> "dataset" mapping (guid_to_name()), the whole pool hierarchy has to be walked to find the right snapshots to send.
When resuming an incremental send within a single pool, both the source and destination snapshots carry the same guid (zfs recv preserves snapshot guids). This is where zfs_iter_children() gets confused and picks up the wrong snapshot: we end up trying to send an incremental "destination@snap1 -> source@snap2" stream instead of "source@snap1 -> source@snap2", which fails with an "Invalid cross-device link" (EXDEV) error.
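To see why the lookup is ambiguous, here is a minimal Python model of a guid -> name search over a pool hierarchy. This is not the libzfs implementation; the function names and the `avoid` parameter are purely illustrative, and the actual upstream fix may disambiguate differently. It only shows that when two snapshots share a guid, a naive walk can return whichever one it happens to visit first, while a search that deprioritizes the dataset being received into resolves the intended source:

```python
# Toy model of the guid -> name ambiguity (NOT the real libzfs code).
# 'zfs recv' preserves snapshot guids, so the received snap1 has the
# same guid as the original. Iteration order of a real hierarchy walk
# is not under the caller's control; here the receiving dataset happens
# to come first, matching the failing case in this bug.
pool = {
    "testpool/mirror": {"snap1": 0x84FC3CEB9854824C},
    "testpool": {"snap1": 0x84FC3CEB9854824C, "snap2": 0x80D26DC166942AB5},
}

def guid_to_name_naive(guid):
    """Return the first snapshot whose guid matches, in walk order."""
    for dataset, snaps in pool.items():
        for snap, g in snaps.items():
            if g == guid:
                return f"{dataset}@{snap}"
    return None

def guid_to_name_fixed(guid, avoid=None):
    """Prefer a match outside 'avoid' (the dataset being received into)."""
    fallback = None
    for dataset, snaps in pool.items():
        for snap, g in snaps.items():
            if g == guid:
                name = f"{dataset}@{snap}"
                if dataset != avoid:
                    return name
                fallback = name  # only use the receiving side as last resort
    return fallback

# The naive walk resolves the fromguid to the *destination's* snapshot,
# producing the bogus "testpool/mirror@snap1 -> testpool@snap2" pair.
print(guid_to_name_naive(0x84FC3CEB9854824C))
print(guid_to_name_fixed(0x84FC3CEB9854824C, avoid="testpool/mirror"))
```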

root@openindiana:~# uname -a
SunOS openindiana 5.11 master-0-gb3c0a3b184 i86pc i386 i86pc
root@openindiana:~# 
root@openindiana:~# 
root@openindiana:~# POOLNAME='testpool'
root@openindiana:~# TMPDIR='/tmp'
root@openindiana:~# zpool destroy -f $POOLNAME
root@openindiana:~# rm -f $TMPDIR/zpool_$POOLNAME.dat
root@openindiana:~# mkfile 128m $TMPDIR/zpool_$POOLNAME.dat
root@openindiana:~# zpool create $POOLNAME $TMPDIR/zpool_$POOLNAME.dat
root@openindiana:~# #
root@openindiana:~# dd if=/dev/urandom of=/$POOLNAME/data.bin bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 2.964483 secs (3537130 bytes/sec)
root@openindiana:~# zfs snapshot $POOLNAME@snap1
root@openindiana:~# zfs send $POOLNAME@snap1 | zfs recv -s $POOLNAME/mirror
root@openindiana:~# #
root@openindiana:~# dd if=/dev/urandom of=/$POOLNAME/data.bin bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 2.655169 secs (3949188 bytes/sec)
root@openindiana:~# zfs snapshot $POOLNAME@snap2
root@openindiana:~# zfs send -i $POOLNAME@snap1 $POOLNAME@snap2 | dd bs=1M count=1 | zfs recv -s $POOLNAME/mirror
0+1 records in
0+1 records out
312 bytes transferred in 0.010768 secs (28975 bytes/sec)
cannot receive incremental stream: checksum mismatch or incomplete stream.
Partially received snapshot is saved.
A resuming stream can be generated on the sending system by running:
    zfs send -t 1-c12d90fb3-e0-789c636064000310a501c49c50360710a715e5e7a69766a63040814f53c88cd7367f5a14806c762475f94959a9c925103e0860c8a7a515a79630c001489e0d493ea9b224b59801551e597f493ec4155bb5a6a41dccbdd4a08124cf0996cf4bcc4d05d2a9c52505f9f9390ec57989054610b30083501f20
root@openindiana:~# #
root@openindiana:~# TOKEN=$(zfs get -H -o value receive_resume_token $POOLNAME/mirror)
root@openindiana:~# zfs send -nP -t $TOKEN > /dev/null
resume token contents:
nvlist version: 0
        fromguid = 0x84fc3ceb9854824c
        object = 0x1
        offset = 0x0
        bytes = 0x0
        toguid = 0x80d26dc166942ab5
        toname = testpool@snap2
incremental     testpool/mirror@snap1   testpool@snap2
root@openindiana:~# zfs send -t $TOKEN > /dev/null
warning: cannot send 'testpool@snap2': Cross-device link
root@openindiana:~# 
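As an aside, the resume token printed above can be inspected without any ZFS tooling. Assuming the token layout is "<version>-<checksum>-<packed-size>-<payload>" with the payload being hex-encoded, zlib-compressed packed-nvlist bytes (the payload starts with 0x789c, the standard zlib header), a quick sketch of peeking inside it, using the token from the transcript, might look like this. This is an informal inspection, not a supported interface:

```python
import zlib

# Resume token copied verbatim from the transcript above.
token = ("1-c12d90fb3-e0-"
         "789c636064000310a501c49c50360710a715e5e7a69766a63040814f53c88cd7"
         "367f5a14806c762475f94959a9c925103e0860c8a7a515a79630c001489e0d49"
         "3ea9b224b59801551e597f493ec4155bb5a6a41dccbdd4a08124cf0996cf4bcc"
         "4d05d2a9c52505f9f9390ec57989054610b30083501f20")

version, cksum, size_hex, payload = token.split("-")
packed = zlib.decompress(bytes.fromhex(payload))

# The third field should be the size of the unpacked nvlist, and the
# 'toname' string dumped by "zfs send -nP -t" should be visible in it.
print(len(packed) == int(size_hex, 16))
print(b"testpool@snap2" in packed)
```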

Reported on the ZFSonLinux GitHub repo; already fixed there by https://github.com/zfsonlinux/zfs/pull/6623.

History

#1

Updated by Electric Monk over 1 year ago

  • Status changed from New to Closed

git commit 544132fce3fa6583f01318f9559adc46614343a7

commit  544132fce3fa6583f01318f9559adc46614343a7
Author: loli10K <ezomori.nozomu@gmail.com>
Date:   2018-02-13T16:26:59.000Z

    8940 Sending an intra-pool resumable send stream may result in EXDEV
    Reviewed by: Paul Dagnelie <pcd@delphix.com>
    Reviewed by: Matthew Ahrens <mahrens@delphix.com>
    Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
