Bug #3852

open

Fix inaccurate arcstat_l2_hdr_size calculations

Added by Ying Zhu almost 9 years ago.

Status:
New
Priority:
Normal
Assignee:
Category:
zfs - Zettabyte File System
Start date:
2013-06-29
Due date:
% Done:

50%

Estimated time:
Difficulty:
Medium
Tags:
zfs, l2arc, stats
Gerrit CR:

Description

This bug was first found on the Linux port of ZFS, namely ZFS on Linux (ZOL).
Based on the comments in arc.c we know that a buffer can exist in both the arc
and the l2arc; in that case both an arc_buf_hdr_t and an l2arc_buf_hdr_t are
allocated. However, the current logic only accounts for the memory that the
l2arc_buf_hdr_t takes up when the buffer's state transitions from or to
arc_l2c_only. This causes an obvious deviation on illumos's zfs version, whose
sizeof(l2arc_buf_hdr_t) is larger than ZOL's. We can implement the calculation
in the following simple way (sketched in code after this list):
1. When we allocate an l2arc_buf_hdr_t, add its memory consumption immediately,
and subtract it again when we free or evict the l2arc buf.
2. According to the code in l2arc_hdr_stat_add and l2arc_hdr_stat_remove, if
the buffer only stays in the l2arc we should also add the memory its
arc_buf_hdr_t consumes, so we only need to add HDR_SIZE to arcstat_l2_hdr_size,
since L2HDR_SIZE is already accounted for by step 1; the same holds when
transferring arc bufs out of the l2arc-only state.
Details are in the attached patch.
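
To make the idea concrete, here is a minimal sketch of the two steps against
illumos arc.c. ARCSTAT_INCR, HDR_SIZE, L2HDR_SIZE, l2arc_hdr_stat_add and
l2arc_hdr_stat_remove are existing names in arc.c; the exact allocation and
free sites shown are my assumption, and the attached patch remains the
authoritative change.

/*
 * Step 1 (sketch): charge arcstat_l2_hdr_size as soon as the L2 header is
 * allocated, and credit it back on the free/evict paths.
 */
l2hdr = kmem_zalloc(sizeof (l2arc_buf_hdr_t), KM_SLEEP);
ARCSTAT_INCR(arcstat_l2_hdr_size, L2HDR_SIZE);
...
kmem_free(l2hdr, sizeof (l2arc_buf_hdr_t));
ARCSTAT_INCR(arcstat_l2_hdr_size, -L2HDR_SIZE);

/*
 * Step 2 (sketch): the arc_l2c_only transition hooks now move only the
 * arc header's size, because step 1 already counts the L2 header.
 */
static void
l2arc_hdr_stat_add(void)
{
	ARCSTAT_INCR(arcstat_l2_hdr_size, HDR_SIZE);	/* was HDR_SIZE + L2HDR_SIZE */
	ARCSTAT_INCR(arcstat_hdr_size, -HDR_SIZE);
}

static void
l2arc_hdr_stat_remove(void)
{
	ARCSTAT_INCR(arcstat_l2_hdr_size, -HDR_SIZE);	/* was -(HDR_SIZE + L2HDR_SIZE) */
	ARCSTAT_INCR(arcstat_hdr_size, HDR_SIZE);
}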

The testbox has two 4-core Intel Xeon CPUs (2.13GHz) and 16GB of memory; the
OS is Linux. The tests were set up in the following way:
1. Fdisked a SATA disk into two partitions, one partition for zpool storage and
the other one used as the cache device.
2. Generated files occupying 14GB altogether in the zpool prepared in step 1,
using iozone.
3. Read them all using md5sum and watched the l2arc-related statistics in
/proc/spl/kstat/zfs/arcstats. After the reading ended, l2_size and l2_hdr_size
were shown like this:
l2_size 4 4403780608
l2_hdr_size 4 0
which was weird.
4. After applying the patch in the attachments and re-running steps 1-3, the
results were as follows:
l2_size 4 4306443264
l2_hdr_size 4 535600
These numbers make sense: on 64-bit systems, sizeof(l2arc_buf_hdr_t) on ZOL
is 16 bytes. Assume all blocks cached by the l2arc are 128KB; then
535600/16*128*1024 = 4387635200. Since not all blocks are equal-sized, the
theoretical result comes out a little bigger than the measured l2_size, as we
can see.
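Spelling that estimate out:
535600 bytes / 16 bytes per header = 33475 l2arc_buf_hdr_t structures
33475 headers * 131072 bytes (128KB) per block = 4387635200 bytes
against the measured l2_size of 4306443264 bytes; blocks smaller than 128KB
contribute a header while caching less data, which is why the estimate
overshoots slightly.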
Since I'm familiar with the systemtap instrumentation tool (dtrace for Linux),
I used it to examine what had happened. The script looked like this:
probe module("zfs").function("arc_change_state") {
if ($new_state == $arc_l2c_only)
printf("change arc buf to arc_l2c_only\n")
}
It prints a line each time the function arc_change_state is called with the
argument new_state equal to arc_l2c_only.
I gathered the trace logs and found that none of the arc bufs entered the
arc_l2c_only state while the tests were running; this is why l2_hdr_size in
step 3 was 0. The arc bufs only fell into arc_l2c_only when the pool or the
filesystem was offlined.
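For anyone reproducing this: assuming the script above is saved as, say,
arc_state.stp (the filename is just illustrative) and debuginfo for the zfs
module is available, it can be run with "stap -v arc_state.stp".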


Files

No data to display
