
Bug #6696

Per-client NFS server IOPS, bandwidth, and latency kstats

Added by Marcel Telka over 3 years ago. Updated about 3 years ago.

Status: Rejected
Priority: Normal
Assignee: -
Category: nfs - NFS server and client
Start date: 2016-02-29
Due date:
% Done: 0%
Estimated time:
Difficulty: Hard
Tags: needs-triage

Description

As an enhancement to #6090 we would like to have the per-client and per-client/per-share IOPS, bandwidth, and latency kstats.


Related issues

Related to illumos gate - Bug #6090: IOPS, bandwidth, and latency kstats for NFS server (Rejected, 2015-07-29)
Related to illumos gate - Bug #6472: Panic in rfs4_compound_kstat_res() (Closed, 2015-11-22)
Related to illumos gate - Bug #6476: Taking exported_lock RW_READER lock in common_dispatch() can cause deadlock in nfssrv (Closed, 2015-11-24)
Related to illumos gate - Bug #6747: xdr_READDIR4res() bypass the XDR mblk API (Closed, 2016-03-13)
Related to illumos gate - Bug #6697: Aggregated IOPS, bandwidth, and latency kstats for NFS server (Rejected, 2016-03-01)

History

#1

Updated by Marcel Telka over 3 years ago

  • Related to Bug #6090: IOPS, bandwidth, and latency kstats for NFS server added
#2

Updated by Marcel Telka over 3 years ago

  • Related to Bug #6472: Panic in rfs4_compound_kstat_res() added
#3

Updated by Marcel Telka over 3 years ago

  • Related to Bug #6476: Taking exported_lock RW_READER lock in common_dispatch() can cause deadlock in nfssrv added
#4

Updated by Marcel Telka over 3 years ago

Generic per-share and per-client stats

Implementation

For every shared path there is a generic per-share helper named kstat that publishes the shared path and the type of the share. Similarly, for every client there is a generic per-client helper named kstat that publishes the client's address.

Type                      Module  Instance   Name    Class
generic per-share stats   nfssrv  SHARE_ID   share   misc
generic per-client stats  nfssrv  CLIENT_ID  client  misc

Both SHARE_ID and CLIENT_ID are greater than zero and uniquely identify the particular share or client respectively.

The generic per-share named kstat has these entries:

Name        Comment
path        shared path
filesystem  either real or pseudo

The generic per-client named kstat has these entries:

Name         Comment
addr_family  address family (numeric value from the AF_* constants)
address      client's address (in presentation form)

Usage examples

List of available per-share stats (paths):

# kstat -p -c misc nfssrv::share:path
nfssrv:1:share:path     /
nfssrv:2:share:path     /export
nfssrv:3:share:path     /
nfssrv:4:share:path     /export/home
nfssrv:5:share:path     /export/home/admin
nfssrv:6:share:path     /tmp
#

Complete generic kstat for the /export share (the SHARE_ID is 2):

# kstat -p -c misc nfssrv:2:share
nfssrv:2:share:class    misc
nfssrv:2:share:crtime   93642.530734218
nfssrv:2:share:filesystem       real
nfssrv:2:share:path     /export
nfssrv:2:share:snaptime 739630.489002210
#

List of available per-client stats (client's addresses):

# kstat -p -c misc nfssrv::client:address
nfssrv:3:client:address 10.0.100.2
#

Complete generic kstat for client 10.0.100.2 (the CLIENT_ID is 3):

# kstat -p -c misc nfssrv:3:client
nfssrv:3:client:addr_family     2
nfssrv:3:client:address 10.0.100.2
nfssrv:3:client:class   misc
nfssrv:3:client:crtime  738258.371598184
nfssrv:3:client:snaptime        739802.535190414
#
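The `kstat -p` output shown above is line-oriented (module:instance:name:statistic, whitespace, then the value), so it is easy to post-process. A minimal sketch in Python; the `parse_kstat` helper is hypothetical, not part of any illumos tooling:

```python
def parse_kstat(output):
    """Parse `kstat -p` output into a dict keyed by
    (module, instance, name), each mapping statistic -> value."""
    stats = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        key, value = line.split(None, 1)
        module, instance, name, statistic = key.split(":", 3)
        stats.setdefault((module, int(instance), name), {})[statistic] = value
    return stats

# The per-client example from above:
sample = """\
nfssrv:3:client:addr_family 2
nfssrv:3:client:address 10.0.100.2
"""
parsed = parse_kstat(sample)
print(parsed[("nfssrv", 3, "client")]["address"])  # 10.0.100.2
```

Grouping by (module, instance, name) mirrors the kstat naming scheme, so the SHARE_ID or CLIENT_ID is directly available as the instance component of the key.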
#5

Updated by Marcel Telka over 3 years ago

Detailed NFS server stats

Implementation

The detailed NFS server stats are implemented for all supported versions of both NFS_ACL and NFS protocols. There are four types of detailed kstats implemented:

Type                           Module                  Instance  Name
per NFS server (global) stats  nfssrv                  0         PROTOVER/PROC
per-share stats                nfssrv                  SHARE_ID  PROTOVER/PROC
per-client stats               nfssrv_clientCLIENT_ID  0         PROTOVER/PROC
per-client/per-share stats     nfssrv_clientCLIENT_ID  SHARE_ID  PROTOVER/PROC

Where PROTOVER is one of NFS_ACLv2, NFS_ACLv3, NFSv2, NFSv3, or NFSv4.0 and PROC is the lowercased procedure (operation) name for the particular protocol and its version.

For every type of detailed kstats there is a collection of several kstats implemented; one kstat for every protocol procedure (and/or operation, in the case of NFSv4.0). Within a collection the kstats' Module and Instance are the same. The protocol name, protocol version, and procedure (operation) name are encoded into the kstat's Name.

The kstat's Class depends on the protocol name and protocol version:

Protocol   Class
NFS_ACLv2  aclprocio_v2
NFS_ACLv3  aclprocio_v3
NFSv2      rfsprocio_v2
NFSv3      rfsprocio_v3
NFSv4.0    rfsprocio_v4.0
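The Name/Class encoding described above can be illustrated with a small hypothetical helper; the function and dictionary names are mine for illustration, not taken from the implementation:

```python
# Class prefixes per protocol family, per the table above (assumed mapping).
_CLASS_PREFIX = {"NFS_ACL": "aclprocio", "NFS": "rfsprocio"}

def detailed_kstat_id(protocol, version, procedure):
    """Return the (Name, Class) pair for a detailed NFS server kstat,
    e.g. ('NFSv4.0/getattr', 'rfsprocio_v4.0')."""
    name = "%sv%s/%s" % (protocol, version, procedure.lower())
    klass = "%s_v%s" % (_CLASS_PREFIX[protocol], version)
    return name, klass

print(detailed_kstat_id("NFS", "4.0", "GETATTR")) # ('NFSv4.0/getattr', 'rfsprocio_v4.0')
```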

All of the detailed NFS server stats are I/O kstats. See kstat_io(9S) and kstat_queue(9F). For every NFS (or NFS_ACL) procedure/operation one I/O kstat is created. For kstat_io(9S), the "writes" are transfers from the NFS client to the NFS server (the NFS client "writes" the requests to the server), and the "reads" are transfers from the NFS server to the NFS client (the NFS client "reads" the responses from the server). For kstat_queue(9F), only the runq is used.

Usage example

Detailed stats for the NFSv4.0 GETATTR operation for client 10.0.100.2 (CLIENT_ID is 3) accessing the share /export (SHARE_ID is 2):

# kstat -p nfssrv_client3:2:NFSv4.0/getattr
nfssrv_client3:2:NFSv4.0/getattr:class  rfsprocio_v4.0
nfssrv_client3:2:NFSv4.0/getattr:crtime 178013.864730799
nfssrv_client3:2:NFSv4.0/getattr:nread  1052
nfssrv_client3:2:NFSv4.0/getattr:nwritten       96
nfssrv_client3:2:NFSv4.0/getattr:rcnt   0
nfssrv_client3:2:NFSv4.0/getattr:reads  6
nfssrv_client3:2:NFSv4.0/getattr:rlastupdate    178016300802351
nfssrv_client3:2:NFSv4.0/getattr:rlentime       233789
nfssrv_client3:2:NFSv4.0/getattr:rtime  233789
nfssrv_client3:2:NFSv4.0/getattr:snaptime       189990.389440318
nfssrv_client3:2:NFSv4.0/getattr:wcnt   0
nfssrv_client3:2:NFSv4.0/getattr:wlastupdate    0
nfssrv_client3:2:NFSv4.0/getattr:wlentime       0
nfssrv_client3:2:NFSv4.0/getattr:writes 6
nfssrv_client3:2:NFSv4.0/getattr:wtime  0
#

We see that 6 GETATTR requests arrived (writes). The total size of the GETATTR requests was 96 bytes (nwritten). The NFS server sent back 6 GETATTR replies (reads). The total size of the GETATTR replies was 1052 bytes (nread). The total time spent in the NFS server handling the GETATTR operations was 233789 nanoseconds (rlentime).
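Derived per-operation metrics follow directly from these kstat_io counters. A minimal sketch (plain arithmetic, not an existing tool; averaging rlentime over completed requests is the usual iostat-style approximation):

```python
def summarize_io(nread, nwritten, reads, writes, rlentime_ns):
    """Derive average request/reply sizes and average service time
    from the kstat_io counters of one NFS procedure."""
    return {
        "avg_request_bytes": nwritten / writes if writes else 0.0,
        "avg_reply_bytes": nread / reads if reads else 0.0,
        "avg_latency_us": rlentime_ns / writes / 1000.0 if writes else 0.0,
    }

# Counters from the GETATTR example above:
s = summarize_io(nread=1052, nwritten=96, reads=6, writes=6, rlentime_ns=233789)
print(round(s["avg_request_bytes"]), round(s["avg_latency_us"], 1))  # 16 39.0
```

To get IOPS and bandwidth rather than per-request averages, one would take two snapshots and divide the counter deltas by the snaptime delta.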

#6

Updated by Marcel Telka over 3 years ago

  • Related to Bug #6747: xdr_READDIR4res() bypass the XDR mblk API added
#7

Updated by Marcel Telka over 3 years ago

  • Related to Bug #6697: Aggregated IOPS, bandwidth, and latency kstats for NFS server added
#9

Updated by Garrett D'Amore about 3 years ago

Out of band, Nexenta leadership have indicated that

a) they don't approve this integration for illumos
b) Nexenta doesn't use this code themselves and have chosen to take an entirely different, DTrace-based, approach
c) Marcel is no longer a Nexenta employee.
d) the "internal" review was not "successful" inside Nexenta -- and that the reviewers listed may not represent the full set of reviewers, not all of whom had their criticisms or feedback addressed.

Further, I am personally almost absolutely certain one of the reviewers is unqualified to provide meaningful review feedback about the kernel NFS subsystem.

Additionally, I'd like to know who the "we" in the description is -- it is not Nexenta. "We" (the RTI advocates) understand that the other primary NFS user, Delphix, also has no request for this functionality.

Architecturally, one must understand that many folks assume it is "safe" to take a system-wide snapshot up to once per second. (The wisdom of this may be debatable; nonetheless it does represent a fairly common practice.) As such, some sensitivity to dramatic increases in the cost of taking the snapshot, as well as to the cost of transporting the snapshot across the kernel/user boundary (which grows markedly with the snapshot size), should be considered.

I am therefore calling into question whether there exists a real business need for this functionality; I love using snapshots to instrument subsystems, but a per-client set of kstats has potentially rather explosive growth, as the number of NFS clients on a busy NFS server can be extreme.

Hence, architecturally, I believe this implementation to be a bad choice -- a dead end that should almost certainly not be enabled by default, and of dubious merit in any event. I would welcome hearing more detail from businesses or users with a genuine need for this functionality.

#10

Updated by Marcel Telka about 3 years ago

  • Assignee deleted (Marcel Telka)

Some clarification:

Re a) I'm not sure I understand this. Does it mean that all illumos integrations need to be approved by Nexenta leadership? If so, then please let us know who the Nexenta leadership is. Is it their CEO, or CTO, or somebody else? Thanks.

Re b) I do not know what Nexenta uses or does not use (and nobody outside Nexenta can be sure, since Nexenta stopped updating their master branch of illumos-nexenta at GitHub a year ago). I'm not trying to push hard to integrate #6696 into illumos. Actually, I never tried to RTI this and I won't try to RTI this without a request from somebody (the community, some corporation, or so). I posted the link to the webrev just for reference, in case somebody is looking for the code in the future.

Re c) I was never a Nexenta employee.

Re d) I do not remember any objections, criticism, or unaddressed feedback during the "internal" review at Nexenta (but it is easily possible my memory is weak; it was about half a year ago). The only concern was the number of kstats and the slowdown in kstat_chain_update(3KSTAT) caused by that. This concern didn't appear during the #6696 review, but as a result of the #6090 implementation. #6696 and #6697 were (partially) implemented to address such concerns. #6696 does that by disabling kstats by default (and capturing the statistical data only). The #6696 implementation should have almost zero impact on performance. #6697 publishes a very limited subset of the statistical data as kstats, trying to provide only the really needed data to users.

#11

Updated by Garrett D'Amore about 3 years ago

By "a", I meant that the change doesn't have their blessing, nor are they requesting it. This is particularly salient, as the work carries a Nexenta copyright and was originally posted as if it were a Nexenta-sponsored change (or at least it could appear so). The actual IP is owned by Nexenta, and I'm not entirely sure it had previously been legally published with their consent. But setting aside that question, they don't want it to exist in the tree. As they are the ones who funded the work originally, and presumably had the original business case for it, it's very salient that they don't want it anymore.

It doesn't matter who the individual who reported this to me is; that person has asked not to have their name disclosed here, which is fine.

On "c", sorry, I thought you were working for Nexenta. I have since been made to understand you were a contractor; in any event the work was originally done for Nexenta, and the IP belongs to them (which is reflected in the copyright). I'm given to understand that whatever the relationship between you and Nexenta was, it no longer exists. (For the record, I find presenting work carrying a copyright without the approval of the copyright holder to be rather strange and impolite, even if it is technically legal -- and I think that unless the work was previously published externally under an open source license it would in most jurisdictions not be legal. That said, IANAL, and I have zero familiarity with the laws in your particular jurisdiction.)

Ultimately these issues are irrelevant.

Without a business case justifying the risk, I think this particular change should be abandoned. The two main NFS commercial contributors are not asking for it, and in fact have requested the opposite. That is significant data.

I am going to close this issue; if anyone actually wants this to integrate because of a real business case for it, please feel free to reopen with appropriate justification.

#12

Updated by Garrett D'Amore about 3 years ago

  • Status changed from In Progress to Rejected

Reject as "will not integrate due to architectural issues."
