ZFS sorted scans
Review Request #1226 - Created Oct. 8, 2018 and updated
This originated in ZFS on Linux as https://github.com/zfsonlinux/zfs/commit/d4a72f23863382bdf6d0ae33196f5b5decbc48fd and was merged into FreeBSD as https://svnweb.freebsd.org/base?view=revision&revision=334844. During scans (scrubs or resilvers), it sorts the blocks in each transaction group by block offset, so reads are issued in a more sequential order; the result can be a significant improvement. (On my test system just now, into which I had put some effort to introduce fragmentation since setting up the pool yesterday, a scrub went from 1h2m to 33.5m with the changes.) I've seen similar ratios on production systems. FreeNAS has had these changes since Oct 2017. Testing: scrub and rebuild pools, noting times for performance analysis. The pools are compatible with systems without the changes, so bouncing back and forth between the two versions is possible, and I've used that for correctness checking.
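The before/after timing comparison above can be reproduced roughly as follows; this is a sketch assuming a pool named rpool (substitute your own pool name), using only standard zpool subcommands:

```shell
# Start a scrub; the command returns immediately and the
# scrub runs in the background.
zpool scrub rpool

# Poll until the scrub finishes, then read the elapsed time
# from the "scan:" line of the status output, e.g.
#   scan: scrub repaired 0B in 0h33m with 0 errors ...
while zpool status rpool | grep -q "scrub in progress"; do
    sleep 60
done
zpool status rpool | grep "scan:"
```

Running the same procedure on the same (fragmented) pool before and after the change gives the kind of 1h2m vs 33.5m comparison quoted above.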
Also tested: zpool offline rpool <disk> and then later zpool online rpool <disk>, to exercise the resilver path.
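The offline/online resilver test can be sketched as the sequence below; the pool name rpool and member disk da1 are placeholders for whatever your test pool uses:

```shell
# Take one member disk offline; the pool keeps running degraded.
zpool offline rpool da1

# (Perform some writes here so the returning disk has
#  transactions to catch up on.)

# Bring the disk back online; ZFS resilvers the data it missed,
# which with these changes should proceed in sorted order.
zpool online rpool da1

# Watch resilver progress and note the completion time
# reported on the "scan:" line.
zpool status rpool
```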
On my test systems the speedup on physical disks is very noticeable: with 4x 4TB raidz1 (7200 RPM WD Black SATA), scrub/resilver time is down to 2 hours from 4 hours (3.5TB allocated data). The gains on virtual disks under VMware Fusion (on top of SSD) are not as drastic, but still significant, and my overall impression is consistent with the notes from ZoL and FreeBSD.
I think this is similar to, but supersedes, https://github.com/openzfs/openzfs/pull/648/; is that right?
There were a few follow-on commits, which George Wilson or Tom Caputi can point you at, including a8b2e30685c9214ccfd0181977540e080340df4e ("Support re-prioritizing asynchronous prefetches").
Add the following patches (as suggested):
a76f3d0437e5e974f0f748f8735af3539443b388: Fix deadlock in IO pipeline
c26cf0966d131b722c32f8ccecfe5791a789d975: Fix zio->io_priority failed (7 < 6) assert
Revision 2 (+3215 -860)