Bug #975: disable disksort for SSDs
Status: Closed
% Done: 100%
Description
disksort is pointless for solid state drives, so we should disable it -- in fact it may actually be harmful (it costs IOPS). We can leverage the f_is_solid_state flag in sd for this.
Updated by Garrett D'Amore almost 12 years ago
- Difficulty set to Medium
- Tags set to needs-triage
As someone asked for an explanation:
disksort is used to sort operations by LBA, and is intended to help with disks that have a non-zero seek time (where the further apart the data is, the higher the seek time). Solid state devices have no seek time, so queuing things up or spending time sorting requests is just wasteful and hurts latency.
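To make the trade-off concrete, here is a minimal user-space sketch of the idea -- names and structures are hypothetical, not the sd(7D) implementation -- contrasting the LBA-ordered insert an elevator sort performs with the plain FIFO append that suffices when seek time is zero:

/*
 * Minimal sketch, not the sd(7D) code: an elevator sort keeps the queue
 * ordered by LBA so a disk head can sweep in one direction; on an SSD a
 * plain FIFO append avoids that bookkeeping entirely.
 */
#include <stddef.h>
#include <stdint.h>

struct io_req {
	uint64_t	lba;	/* starting logical block address */
	struct io_req	*next;	/* next request in the queue */
};

/* Elevator-style insert: keep the queue sorted by ascending LBA. */
static void
lba_sorted_insert(struct io_req **headp, struct io_req *rq)
{
	struct io_req **pp = headp;

	while (*pp != NULL && (*pp)->lba <= rq->lba)
		pp = &(*pp)->next;
	rq->next = *pp;
	*pp = rq;
}

/* FIFO append: sufficient when there is no seek cost to optimize away. */
static void
fifo_append(struct io_req **headp, struct io_req *rq)
{
	struct io_req **pp = headp;

	while (*pp != NULL)
		pp = &(*pp)->next;
	rq->next = NULL;
	*pp = rq;
}

The sorted insert walks the queue on every request; when the device has zero seek cost, that walk (done under the queue lock) is pure overhead, which is where the latency hit comes from.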
Updated by Garrett D'Amore almost 12 years ago
- % Done changed from 0 to 90
Diff looks like this:

diff -r f7ba8ec46a21 -r cd42e386f2bd usr/src/uts/common/io/scsi/targets/sd.c
--- a/usr/src/uts/common/io/scsi/targets/sd.c	Thu Apr 28 12:40:06 2011 -0500
+++ b/usr/src/uts/common/io/scsi/targets/sd.c	Sun May 01 21:01:16 2011 -0700
@@ -31674,6 +31674,8 @@
 		 */
 		if (inq_b1[4] == 0 && inq_b1[5] == 1) {
 			un->un_f_is_solid_state = TRUE;
+			/* solid state drives don't need disksort */
+			un->un_f_disksort_disabled = TRUE;
 		}
 		mutex_exit(SD_MUTEX(un));
 	} else if (rval != 0) {
Updated by Garrett D'Amore almost 12 years ago
- Status changed from In Progress to Resolved
- % Done changed from 90 to 100
Resolved in:
changeset: 13360:c28d415b5009
tag: tip
user: Garrett D'Amore <garrett@nexenta.com>
date: Mon May 02 12:32:04 2011 -0700
description:
975 disable disksort for SSDs
Reviewed by: Jason King <jason.brian.king@gmail.com>
Reviewed by: Rich Lowe <richlowe@richlowe.net>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Dan McDonald <danmcd@nexenta.com>
Updated by Jim Klimov about 11 years ago
Ummm... a sort of lame question: does disksort in "sd" apply only to reads, or to queued writes as well?
IIRC, SSDs internally use large pages, on the order of 256KB, which must be reprogrammed on the flash chip as one big operation (via the SSD's internal COW or a similar mechanism). So disksorting writes for SSDs could still make sense: if some of the queued sectors land in the same SSD block being reprogrammed, the write op should complete faster (one write instead of several) and induce less wear on the SSD hardware (for the same reason).
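To illustrate the arithmetic behind this argument (a hypothetical helper, not driver code): assuming 512-byte sectors and a 256KB flash block, two queued writes could be merged by the SSD only when their LBAs map to the same block index.

/*
 * Hypothetical helper illustrating the coalescing argument above;
 * the 256KB erase-block size is an assumption, not a spec value.
 */
#include <stdbool.h>
#include <stdint.h>

#define	SECTOR_SIZE	512
#define	FLASH_BLOCK	(256 * 1024)			/* assumed erase block */
#define	SECT_PER_BLK	(FLASH_BLOCK / SECTOR_SIZE)	/* = 512 sectors */

static bool
same_flash_block(uint64_t lba1, uint64_t lba2)
{
	return (lba1 / SECT_PER_BLK == lba2 / SECT_PER_BLK);
}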
Updated by Garrett D'Amore about 11 years ago
It applies to both reads and writes.
However, with typical access patterns, I suspect you might not see many contiguous writes. Our experience is that most activity is fairly random. Also, the queue depth plays a role here.
The good news is, the value is tunable. So you can experiment with it yourself.
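For example, assuming the per-device sd-config-list tunables described in sd(7D) -- the vendor/product string below is hypothetical and must be padded to match your device's actual INQUIRY data -- disksort can be toggled in /kernel/drv/sd.conf:

# Hypothetical entry: re-enable or disable disksort for one SSD model.
# Verify the exact VID/PID string and tunable name against sd(7D).
sd-config-list = "ATA     ExampleSSD", "disksort:false";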
Updated by Garrett D'Amore about 11 years ago
Oh, one other thing: most modern devices have a fairly deep on-device queue, so we expect that if you have a reasonably deep queue (zfs_vdev_maxpending) for an SSD, the device will internally use its own elevator sort; it knows best how to order things. The legacy elevator sort in sd made sense for drives with little or no buffer space.
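If you want to experiment with that host-side queue depth, it is settable from /etc/system -- assuming the illumos tunable of that era is spelled zfs_vdev_max_pending; verify the name against your build before relying on it:

* Hypothetical /etc/system entry: deepen the per-vdev I/O queue so the
* SSD's internal scheduler sees more requests at once.
set zfs:zfs_vdev_max_pending = 32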