Bug #175
zfs vdev cache consumes excessive memory
Added by Gordon Ross over 10 years ago.
Updated about 9 years ago.
Description
See OpenSolaris CR 6684116, but note that the work-around suggested there is less than ideal. It is better to set zfs_vdev_cache_size to zero, i.e. in /etc/system:

set zfs:zfs_vdev_cache_size = 0

After some performance verification, it might make sense to change the default to zero in uts/common/fs/zfs/vdev_cache.c.
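For reference, a rough sketch of what that default change might look like; the declarations and the old values follow the OpenSolaris-era vdev_cache.c and may not match the current source exactly:

/* uts/common/fs/zfs/vdev_cache.c -- illustrative sketch, not an actual patch */
/*
 * Reads smaller than zfs_vdev_cache_max are inflated to
 * 1 << zfs_vdev_cache_bshift bytes, and up to zfs_vdev_cache_size
 * bytes of such blocks are kept per vdev.
 */
int zfs_vdev_cache_max = 1 << 14;     /* 16KB: inflate reads smaller than this */
int zfs_vdev_cache_size = 0;          /* was 10ULL << 20 (10MB); 0 disables the cache */
int zfs_vdev_cache_bshift = 16;       /* 64KB read chunks */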
- Category set to kernel
- Assignee set to Garrett D'Amore
Solaris 11 has been running with the tunable set to zero for a while now. Eventually we ought to just remove the code.
We end up storing 10MB of read-ahead per vdev, so if you have many vdevs (say, if you sell storage systems), that is a real drain on main memory.
We find that the cache is usually very underutilized in the field (30%-70%).
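As a rough worked example (the vdev count here is hypothetical, not taken from this report):

200 leaf vdevs * 10MB per vdev = 2000MB, i.e. roughly 2GB of kernel memory pinned for read-ahead alone.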
- Status changed from New to Resolved
- Difficulty set to Medium
- Tags set to needs-triage
I object to removing the code for everybody based on its inefficiency for somebody. For example, vdev prefetch was very effective during a scrub on my systems and boosted scrub speed, shaving several hours off the total time across two scrubs.
In fact, I propose expanding the feature by adding a separate rolling cache for the non-metadata sectors that are currently discarded from prefetched data.
Tuning the two cache sizes is the end user's adventure, and zero-size defaults can protect multi-drive systems with very little RAM. illumos's task is to provide mechanisms that can suit everyone - from single-spindle laptops to petabyte arrays ;)
Details are tracked here: https://www.illumos.org/issues/2017