Bug #12279

::arc_compression_stats generates errors

Added by Jason King 16 days ago. Updated 7 days ago.

Status: New
Priority: Normal
Assignee: -
Category: mdb - modular debugger
Start date:
Due date:
% Done: 0%
Estimated time:
Difficulty: Medium
Tags:
Description

Running ::arc_compression_stats, it appears to partially fail:

> ::arc_compression_stats
mdb: couldn't read arc_buf_hdr_t from ffffff0712c3cec0
Histogram of all compressed buffers.
Each bucket represents buffers of size: [2^(n-1)*512, 2^n*512).
  1:     11 **
  2:     33 ******
  3:     88 ***************
  4:     85 ***************
  5:    238 ****************************************
  6:     62 ***********
  7:     36 *******
  8:     57 **********

Histogram of all uncompressed buffers.
Each bucket represents buffers of size: [2^(n-1)*512, 2^n*512).
  1:     11 ***
  2:     33 ********
  3:     88 *********************
  4:     85 ********************
  5:     61 ***************
  6:    170 ****************************************
  7:     24 ******
  8:     17 ****
  9:    121 *****************************

Related issues

Related to illumos gate - Bug #12028: zfs test mdb_001_pos can fail (New)

History

#1

Updated by Jason King 16 days ago

  • Related to Bug #12028: zfs test mdb_001_pos can fail added
#2

Updated by Jason King 7 days ago

The failures are due to the way ZFS encryption modified arc_buf_hdr_t. mdb_ctf_vread() assumes it can always read at least sizeof (target_type) bytes from the target. In the post-ZFS-encryption world, if the ARC buffer is unencrypted, the actual allocated size of the arc_buf_hdr_t is smaller than sizeof (arc_buf_hdr_t) -- it is sizeof (arc_buf_hdr_t) - sizeof (arc_buf_hdr_crypt_t). If the memory where the b_crypt_hdr field would reside isn't valid, the mdb_ctf_vread() call fails.
