Bug #3740


Poor ZFS send / receive performance due to snapshot hold / release processing

Added by Steven Hartland about 10 years ago. Updated almost 10 years ago.

Status: Closed
Priority: Normal
Category: zfs - Zettabyte File System
Start date: 2013-04-23
Due date:
% Done: 100%
Estimated time:
Difficulty: Bite-size
Tags: needs-triage
Gerrit CR:
External Bug:

Description

I've got a fairly simple setup here: a hierarchy of ZFS datasets which we're looking to sync between two nodes.

No problem, I thought: the data doesn't change much, so use incremental snapshots with send / recv.

So I implemented a script to do this, but the zfs send was taking 2-3 minutes even when there were virtually no changes (e.g. 20MB).

Digging into the issue, the slowness is caused by the snapshot hold processing, which IIRC was added in zpool v18.

The difference between a send that takes holds and one which doesn't (I hacked libzfs to return 0 from hold_for_send; a sketch of the hack follows the timings below) is night and day:-

With hold on HDD:-
time -h zfs send -R hdd/test@7 > /dev/null
20.21s real  0.02s user  0.23s sys

Without hold on HDD:-
time -h zfs send -R hdd/test@7 > /dev/null
0.15s real  0.02s user  0.12s sys
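
For reference, the hack was nothing more than short-circuiting libzfs's hold_for_send() in libzfs_sendrecv.c. A sketch only; the exact signature may differ from what's shown here:

static int
hold_for_send(zfs_handle_t *zhp, send_dump_data_t *sdd)
{
	/* Timing experiment only: skip every per-snapshot hold ioctl. */
	return (0);
}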

Watching the disk with gstat while doing the send with hold, I see lots of write IO/s maxing out the disk for ~20 seconds.

The structure is:-
zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
hdd              28.0M   913G    31K  /hdd
hdd/test         25.7M   913G    42K  /test
hdd/test/site1     38K   913G    31K  /test/site1
hdd/test/site10    38K   913G    31K  /test/site10
hdd/test/site2     38K   913G    31K  /test/site2
hdd/test/site3     38K   913G    31K  /test/site3
hdd/test/site4     38K   913G    31K  /test/site4
hdd/test/site5     38K   913G    31K  /test/site5
hdd/test/site6   25.3M   913G  25.3M  /test/site6
hdd/test/site7     38K   913G    31K  /test/site7
hdd/test/site8     38K   913G    31K  /test/site8
hdd/test/site9     38K   913G    31K  /test/site9

hdd/test contains 8 recursive snapshots:-
zfs list -t snapshot -d 1 hdd/test
NAME               USED  AVAIL  REFER  MOUNTPOINT
hdd/test@initial     1K      -    31K  -
hdd/test@1           1K      -    31K  -
hdd/test@2           1K      -    31K  -
hdd/test@3           1K      -    31K  -
hdd/test@4           1K      -    31K  -
hdd/test@5           1K      -    31K  -
hdd/test@6           1K      -    31K  -
hdd/test@7            0      -    42K  -
...

Looking at the code, zfs_send currently takes a "user hold" per snapshot when processing, and the kernel does the same per snapshot when it processes the release on exit.

This requires 2 * N dsl syncs, where N is the total number of snapshots in the send. In this test case the -R stream covers 8 snapshots on each of 11 datasets, i.e. N = 88 and some 176 dsl syncs, each of which has to wait for a transaction group to sync out.

The attached patch changes send to use a "dry run" pass to collate the details of the required holds and then process them in a single kernel lzc_hold call.
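
A hedged sketch of that shape (illustrative code, not the patch itself; hold_all_for_send() and its arguments are made-up names, and libzfs_core is assumed to be initialised):

#include <libnvpair.h>
#include <libzfs_core.h>

static int
hold_all_for_send(const char **snapnames, int nsnaps, const char *tag,
    int cleanup_fd)
{
	nvlist_t *holds = fnvlist_alloc();
	nvlist_t *errlist = NULL;
	int error, i;

	/* One "snapshot name -> hold tag" pair per snapshot in the send. */
	for (i = 0; i < nsnaps; i++)
		fnvlist_add_string(holds, snapnames[i], tag);

	/*
	 * One ioctl, and hence one dsl sync, for the whole set.  Passing
	 * a cleanup_fd makes the holds temporary: the kernel releases
	 * them when the fd is closed, e.g. if the send process dies.
	 */
	error = lzc_hold(holds, cleanup_fd, &errlist);

	fnvlist_free(holds);
	if (errlist != NULL)
		fnvlist_free(errlist);
	return (error);
}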

When the kernel processes the release on exit, it now also uses a single dsl sync task, simply by re-using dsl_dataset_user_release.
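
The userland counterpart of that batched release is libzfs_core's lzc_release(), which takes one nvlist mapping each snapshot name to a nested nvlist of hold tags, so a whole set of releases is again a single ioctl. A minimal sketch (release_all() is an illustrative name):

#include <libnvpair.h>
#include <libzfs_core.h>

static int
release_all(const char **snapnames, int nsnaps, const char *tag)
{
	nvlist_t *holds = fnvlist_alloc();
	nvlist_t *errlist = NULL;
	int error, i;

	for (i = 0; i < nsnaps; i++) {
		nvlist_t *tags = fnvlist_alloc();
		fnvlist_add_boolean(tags, tag);		/* tag name as key */
		fnvlist_add_nvlist(holds, snapnames[i], tags);
		fnvlist_free(tags);			/* copied on add */
	}

	error = lzc_release(holds, &errlist);

	fnvlist_free(holds);
	if (errlist != NULL)
		fnvlist_free(errlist);
	return (error);
}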

This significantly reduces both the time required and the IO generated. For this test case it reduced the processing time from over 20 seconds to under 1 second, by cutting the number of dsl syncs from 2 * N to just 2: one for the hold and one for the release on exit.

For reference, all this has been tested against FreeBSD 10-CURRENT r249664.


Files

zfs-send-hold-opt.patch (9.42 KB) - Patch to fix reported issue - Steven Hartland, 2013-04-23 03:35 PM
zfs-send-hold-opt-v2.patch (14.2 KB) - Steven Hartland, 2013-04-24 07:57 PM

Related issues

Related to illumos gate - Bug #3829: fix for 3740 changed behavior of zfs destroy/hold/release ioctl (Closed, Matthew Ahrens, 2013-06-18)

Actions #1

Updated by Steven Hartland about 10 years ago

New patch version attached; this eliminates the separate "release temporary holds" code path, creating a single path for snapshot releases.

This means that cleanup of temporary holds on pool open runs much quicker, as it now only requires one dsl sync. In addition, it should make the code easier to maintain in the future, as the duplication is gone.

While I'm here:-
  • Remove some unused structures and ensure that enoent_ok is processed consistently by zfs_hold.
  • Fix an nvlist_t leak in zfs_release_one (the general shape of the fix is sketched below).
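
For illustration only, a minimal sketch of the leak class fixed here, assuming the usual per-call nvlist pattern; the function and callee names are hypothetical, not the actual zfs_release_one code:

#include <libnvpair.h>

/* Hypothetical stand-in for the ioctl the real code issues. */
static int do_release_ioctl(const char *snapname, nvlist_t *holds);

static int
release_one_sketch(const char *snapname, const char *tag)
{
	nvlist_t *holds = fnvlist_alloc();
	int error;

	fnvlist_add_boolean(holds, tag);
	error = do_release_ioctl(snapname, holds);
	fnvlist_free(holds);	/* the fix: free the nvlist on every path */
	return (error);
}
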
Actions #3

Updated by Christopher Siden almost 10 years ago

  • Status changed from New to Closed
  • Assignee set to Christopher Siden
  • % Done changed from 0 to 100
commit a7a845e
Author: Steven Hartland <smh@freebsd.org>
Date:   Tue Jun 11 23:01:53 2013

    3740 Poor ZFS send / receive performance due to snapshot hold / release processing
    Reviewed by: Matthew Ahrens <mahrens@delphix.com>
    Approved by: Christopher Siden <christopher.siden@delphix.com>
