increase size of dbuf cache to reduce indirect block decompression
With compressed ARC (6950), under a workload of random cached reads we spend up to 25% of CPU time decompressing indirect blocks. To reduce this decompression cost, we would like to increase the size of the dbuf cache so that more indirect blocks can be stored uncompressed.
If we are caching entire large files with recordsize=8K, the indirect blocks use 1/64th as much memory as the data blocks (assuming they have the same compression ratio). We suggest making the dbuf cache 1/32nd of all memory, so that in this scenario we should be able to keep all the indirect blocks decompressed in the dbuf cache. (We want it to be more than the 1/64th that the indirect blocks would use because we need to cache other things in the dbuf cache as well.)
In real-world workloads this won't help as dramatically as the example above, but we think it's still worth doing because the risk of decreasing performance is low: the main potential negative impact is that the ARC shrinks slightly (by ~3%).
Updated by Electric Monk over 4 years ago
- Status changed from New to Closed
- % Done changed from 0 to 100
commit 268bbb2a2fa79c36d4695d13a595ba50a7754b76
Author: George Wilson <email@example.com>
Date:   2018-03-21T15:24:55.000Z

9188 increase size of dbuf cache to reduce indirect block decompression
Reviewed by: Dan Kimmel <firstname.lastname@example.org>
Reviewed by: Prashanth Sreenivasa <email@example.com>
Reviewed by: Paul Dagnelie <firstname.lastname@example.org>
Reviewed by: Sanjay Nadkarni <email@example.com>
Reviewed by: Allan Jude <firstname.lastname@example.org>
Reviewed by: Igor Kozhukhov <email@example.com>
Approved by: Garrett D'Amore <firstname.lastname@example.org>