too many NFS threads actually hurts performance
In a previous change (see #7651) the number of NFS server threads was increased from 16 to 1024.
It turns out that in most configurations this actually performs worse than setting max_threads = 256.
We usually see the following when looking at customer systems:
With the maximum number of nfsd threads set too high, those threads usually stay at the top of the process list. 'prstat' shows CPU utilization for nfsd twice as high as for zpool, the storage backend.
Thread state inspection shows most of the nfsd threads in zio_wait or in zil_commit, yet ZIO doesn't get an adequate share of CPU cycles to process requests. We end up queuing requests in ZIO, while we could instead use RAM and let work accumulate in the listen backlog(s).
In a nutshell, we would like the storage backend to process requests as fast as possible, and that means ZIO shouldn't be starved of CPU cycles.
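The kind of inspection described above can be sketched with standard illumos tools. This is a hedged example: prstat(1M) and the mdb(1) ::stacks dcmd are real, but the exact module filter and sampling interval here are illustrative, not taken from the original report:

```shell
# Per-lwp microstate accounting; nfsd threads dominating this list,
# ahead of the zpool/ZIO threads, is the symptom described above.
prstat -mL 5

# Kernel thread stacks, filtered to the zfs module; most nfsd threads
# parked in zio_wait()/zil_commit() means they are blocked on the
# storage pipeline rather than doing useful work.
echo "::stacks -m zfs" | mdb -k
```

Both commands require privileges on a live system; the second needs kernel debugging access via mdb -k.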
servers=1024 would work well on systems with fast dual-socket 16-core Xeons. That's not the case for dual quad- and hexa-core rigs.
We've historically used 256 as the default number of nfsd threads, and that has worked well on the vast majority of systems in the field.
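For reference, the thread count can be tuned administratively without rebuilding anything. A minimal sketch, assuming an illumos/Solaris system where the NFS server is managed through sharectl(1M) and SMF (the property name `servers` is the real sharectl NFS property corresponding to this limit):

```shell
# Show the current NFS server properties, including the thread count.
sharectl get nfs

# Cap the number of nfsd worker threads at the historical default of 256.
sharectl set -p servers=256 nfs

# Restart the NFS server service so the new limit takes effect.
svcadm restart svc:/network/nfs/server:default
```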
Updated by Dan McDonald over 1 year ago
- Tags set to nfs-zone
The fix for this is currently in the nfs-zone branches. The patchfile is in http://kebe.com/~danmcd/webrevs/nfs-zone/patchsets/gate/0014-NEX-18312-Max-number-of-nfsd-threads-is-set-too-high.patch
Updated by Electric Monk over 1 year ago
- Status changed from New to Closed
- % Done changed from 0 to 100
commit 9f9c12cd25ff2bda305600e1620ad2eecad4ef19
Author: Evan Layton <firstname.lastname@example.org>
Date:   2020-01-24T21:03:53.000Z

12243 too many NFS threads actually hurts performance
Reviewed by: Rick McNeal <email@example.com>
Reviewed by: Rob Gittins <firstname.lastname@example.org>
Reviewed by: Dan McDonald <email@example.com>
Approved by: Gordon Ross <firstname.lastname@example.org>