Bug #7104


increase indirect block size

Added by Matthew Ahrens over 6 years ago. Updated about 6 years ago.

zfs - Zettabyte File System

The current default indirect block size is 16KB. We can improve
performance by increasing it to 128KB. This is especially helpful for
any workload that needs to read most of the metadata, e.g.
scrub/resilver, file deletion, filesystem deletion, and zfs send.

We also need to fix a few space estimation errors to make the tests pass.

#1

Updated by Matthew Ahrens about 6 years ago

Performance evaluation:

Best case performance improvement: 80%
Worst case performance improvement: 14%
I couldn't find a workload where this change decreased performance.

Read Test Cases:

Storage pool on 3x 10,000 RPM SAS disks
A file with 10 million 8KB blocks.
Test cases each access one file at a time, doing random reads of 8K blocks, using 16 threads.
I measured the number of IOPS (reads that the application was able to do per second)

I varied the indirect blocksize from 16K (current default) to 128K (proposed change). I also varied the number and alignment of the reads, to ensure that we hit the desired number of indirect blocks.

I tried blocksizes 32K and 64K, and as expected, the results were between the 16K and 128K cases.
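For a file of 10 million blocks, the shape of the indirect tree at each size can be sketched as follows (a rough calculation assuming 128-byte block pointers; the resulting counts line up with the per-level i/o counts reported in the test cases below):

```python
import math

BLKPTR_SIZE = 128          # bytes per ZFS block pointer (blkptr_t)
DATA_BLOCKS = 10_000_000   # 8KB data blocks in the test file

def indirect_tree(indirect_size):
    """Number of indirect blocks at L1, L2, ... for the test file."""
    ptrs = indirect_size // BLKPTR_SIZE
    levels, n = [], DATA_BLOCKS
    while n > 1:
        n = math.ceil(n / ptrs)
        levels.append(n)
    return levels

indirect_tree(16 * 1024)    # [78125, 611, 5, 1]
indirect_tree(128 * 1024)   # [9766, 10, 1]
```

The 611 L2 blocks (16K case) and the 9,766 L1 / 10 L2 blocks (128K case) are exactly the counts that show up in the traces below.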

Read Test Case 1:

number of reads = 9,700; alignment = 1024 blocks
should read the same number of indirect blocks, but more bandwidth is required for the larger indirects
larger indirects still perform better, perhaps due to fewer L2 indirects?

indirect=16K; perf=583 IOPS
read i/os issued to disks:
611 [L2 ZFS plain file]
6176 [L0 ZFS plain file]
6176 [L1 ZFS plain file]
KB read from disks (total):
4317 [L2 ZFS plain file]
6176 [L1 ZFS plain file]
49408 [L0 ZFS plain file]

indirect=128K; perf=665 IOPS
read i/os issued to disks:
10 [L2 ZFS plain file]
6190 [L1 ZFS plain file]
6190 [L0 ZFS plain file]
KB read from disks (total):
537 [L2 ZFS plain file]
34045 [L1 ZFS plain file]
49520 [L0 ZFS plain file]
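That both sizes issue roughly the same number of L1 reads (6,176 vs 6,190) matches a simple coverage estimate, which is not part of the original evaluation but is easy to check. With a 1,024-block alignment there are about 9,766 distinct read positions, and at either indirect size each position falls in its own L1 block (a 16K indirect spans 128 data blocks, a 128K indirect spans 1,024):

```python
import math

# Expected number of distinct L1 indirect blocks touched by 9,700
# uniform random reads over ~9,766 aligned positions, where each
# position lives in its own L1 block at either indirect size.
positions = math.ceil(10_000_000 / 1024)  # 9766 aligned read slots
reads = 9_700

expected_l1 = positions * (1 - (1 - 1 / positions) ** reads)
# ~6,150, in the same ballpark as the 6,176 / 6,190 L1 reads observed
```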
Read Test Case 2:

number of reads = 97,000; alignment = 128 blocks
should read the same MB of indirect blocks, but larger indirects require fewer indirect blocks to be read

indirect=16K; perf=700 IOPS
read i/os issued to disks:
611 [L2 ZFS plain file]
55811 [L0 ZFS plain file]
55811 [L1 ZFS plain file]
KB read from disks (total):
4317 [L2 ZFS plain file]
55811 [L1 ZFS plain file]
446488 [L0 ZFS plain file]

indirect=128K; perf=1260 IOPS
read i/os issued to disks:
10 [L2 ZFS plain file]
9766 [L1 ZFS plain file]
55547 [L0 ZFS plain file]
KB read from disks (total):
537 [L2 ZFS plain file]
53711 [L1 ZFS plain file]
444376 [L0 ZFS plain file]
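The same coverage arithmetic (again, a back-of-envelope check, not part of the original evaluation) explains case 2. With 128-block alignment there are 78,125 positions. At 16K each position has its own L1 block, so 97,000 random reads are expected to touch roughly 55,500 distinct L1 blocks (observed: 55,811). At 128K each L1 block covers 8 aligned positions, so essentially all 9,766 L1 blocks are expected to be touched (observed: 9,766):

```python
positions = 10_000_000 // 128    # 78125 aligned read positions
reads = 97_000

# 16K indirects: one L1 block per aligned position
l1_16k = positions * (1 - (1 - 1 / positions) ** reads)
# ~55,550 distinct L1 blocks expected (observed: 55,811)

# 128K indirects: each L1 block covers 8 aligned positions
l1_blocks = 9_766
l1_128k = l1_blocks * (1 - (1 - 8 / positions) ** reads)
# ~9,765, effectively all of them (observed: 9,766)
```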
Write Test Cases:

Storage pool on 3x 100GB 10,000 RPM SAS disks
One large file with 8KB recordsize
Test case is 8KB random writes, from 4 threads

Write Test Case 1:

No compression, 240GB file
Note: 80% allocated

indirect=16K: perf=1100 IOPS
indirect=128K: perf=1150 IOPS

Write Test Case 2:

LZ4 compression, 700GB file
Note: 2.90x compression ratio; 75% allocated

indirect=16K: perf=850 IOPS (note: 95% frag)
indirect=128K: perf=940 IOPS (note: 85% frag)

#2

Updated by Electric Monk about 6 years ago

  • Status changed from New to Closed
  • % Done changed from 0 to 100

git commit 4b5c8e93cab28d3c65ba9d407fd8f46e3be1db1c

commit  4b5c8e93cab28d3c65ba9d407fd8f46e3be1db1c
Author: Matthew Ahrens <>
Date:   2016-07-14T19:52:34.000Z

    7104 increase indirect block size
    Reviewed by: George Wilson <>
    Reviewed by: Paul Dagnelie <>
    Reviewed by: Dan McDonald <>
    Approved by: Robert Mustacchi <>

