zfs recv should prefetch indirect blocks
While running 'zfs recv' we noticed that every 128th 8K block required a read: restore_write() was calling dmu_tx_hold_write(), and the indirect block covering the write was not cached. We should prefetch upcoming indirect blocks to avoid going to disk and blocking restore_write().
This patch addresses two primary issues.
The first is the one addressed in the bug: zfs incremental receive currently doesn't prefetch the indirect blocks that it needs, which means that with a very sparse stream you become bound by disk latency rather than throughput. This patch addresses that by adding a stage before records are applied. First, records are read off the stream and the necessary indirect blocks are prefetched. The records are then placed on a blocking queue. Another thread pulls records off the blocking queue and applies them. The blocking queue and prefetch stage allow many I/Os to be issued in parallel, so the receive becomes bound by disk throughput rather than disk latency.
The second issue is one that came up while testing the first. With a sufficiently sparse stream, the thread issuing the prefetch itself becomes a bottleneck, since dbuf_prefetch has to synchronously read in the indirect blocks above the block being prefetched before it can issue the asynchronous read of the desired block. This was resolved by making dbuf_prefetch fully asynchronous: each read invokes a callback, which issues the read for the next block down the chain. This allows us to be as parallel as possible, even on very sparse streams.
Updated by Electric Monk about 4 years ago
- Status changed from New to Closed
- % Done changed from 0 to 100
commit a2cdcdd260232b58202b11a9bfc0103c9449ed52
Author: Paul Dagnelie <email@example.com>
Date:   2015-07-20T05:34:19.000Z

5960 zfs recv should prefetch indirect blocks
5925 zfs receive -o origin=
Reviewed by: Prakash Surya <firstname.lastname@example.org>
Reviewed by: Matthew Ahrens <email@example.com>