Bug #6927


Bias in zvol's space accounting violates the semantics of volumes

Added by LiXin Ge over 6 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Start date:
2016-04-21
Due date:
2016-05-06 (over 6 years late)
% Done:
0%

Estimated time:
Difficulty:
Medium
Tags:
needs-triage
Gerrit CR:

Description

When we create a volume and set a space reservation for it, we imply to the user that they can actually use all of the space announced in 'volsize'. In reality, bias in the zvol's space accounting violates the semantics of a volume.

This always happens in raidz pools; the worst case is raidz2 with 9 disks, so let's take that as the example.

A zvol's default blocksize is 8192. When we are about to write a block, we account for the asize via 'vdev_psize_to_asize'. In this case the psize is 8192 and the asize is 12288; the asize is stored in the DVA of the block pointer for our block. After spa_sync we reach 'dbuf_write_done', which accounts the dataset's new space consumption via 'bp_get_dsize_sync'. In this case the vdev_deflate_ratio is 397 (computed from a 128k block) and our dsize is 9528, which is much bigger (116%) than our logical space use of 8192. The space reservation for the volume, which only considers the additional space consumed by metadata, is obviously not enough.
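
For reference, here is a minimal standalone sketch (not the actual ZFS source) of the accounting path described above: the raidz psize-to-asize conversion as in vdev_raidz_asize(), the vdev_deflate_ratio derived from a 128K block at vdev open time, and the dsize charge as in dva_get_dsize_sync(). It assumes ashift = 9 (512-byte sectors) and spa_deflate enabled, and reproduces the raidz2/9-disk worst case.

#include <stdio.h>
#include <stdint.h>

#define SPA_MINBLOCKSHIFT 9
#define ASHIFT            9

static uint64_t round_up(uint64_t x, uint64_t m) { return (((x + m - 1) / m) * m); }

/*
 * psize -> asize for a raidz vdev with 'cols' children and 'nparity' parity
 * disks, mirroring the logic of vdev_raidz_asize(): data sectors plus parity
 * sectors, padded to a multiple of (nparity + 1) sectors.
 */
static uint64_t raidz_psize_to_asize(uint64_t psize, uint64_t cols, uint64_t nparity)
{
    uint64_t asize = ((psize - 1) >> ASHIFT) + 1;
    asize += nparity * ((asize + cols - nparity - 1) / (cols - nparity));
    return (round_up(asize, nparity + 1) << ASHIFT);
}

int main(void)
{
    uint64_t cols = 9, nparity = 2, psize = 8192;   /* raidz2, 9 disks, 8K zvol block */

    uint64_t asize = raidz_psize_to_asize(psize, cols, nparity);

    /* the deflate ratio is computed once per top-level vdev from a 128K block */
    uint64_t deflate = (1ULL << 17) /
        (raidz_psize_to_asize(1ULL << 17, cols, nparity) >> SPA_MINBLOCKSHIFT);

    /* dsize charged to the dataset, as in dva_get_dsize_sync() */
    uint64_t dsize = (asize >> SPA_MINBLOCKSHIFT) * deflate;

    printf("asize=%llu deflate=%llu/512 dsize=%llu (lsize=%llu)\n",
        (unsigned long long)asize, (unsigned long long)deflate,
        (unsigned long long)dsize, (unsigned long long)psize);
    /* prints: asize=12288 deflate=397/512 dsize=9528 (lsize=8192) */
    return (0);
}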

Generally speaking, the space accounted to a volume may be much bigger than what it actually uses, because the volume's default block size is not the block size used to compute vdev_deflate_ratio. When a ZFS user's main requirement is volumes, this becomes a big problem: all of the pool space is divided into several volumes (almost no space left), but none of the volumes can be used normally (a volume can't get any space beyond its reservation, while its reservation isn't enough).
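
To put the bias in user-visible terms, the following sketch (hypothetical volume size; the per-block dsize is taken from the raidz2/9-disk row in the table below) estimates how much space a fully written zvol is charged compared with its volsize-based reservation.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t volsize   = 100ULL << 30;   /* hypothetical 100 GiB zvol */
    uint64_t volblock  = 8192;           /* default zvol block size */
    uint64_t dsize_blk = 9528;           /* per-block dsize, raidz2 with 9 disks */

    uint64_t blocks  = volsize / volblock;
    uint64_t charged = blocks * dsize_blk;

    /*
     * Roughly 116 GiB is charged against a 100 GiB reservation, so writes
     * fail with ENOSPC before the volume is full unless the pool happens
     * to have unreserved space left over.
     */
    printf("blocks=%llu charged=%llu bytes (%.1f%% of volsize)\n",
        (unsigned long long)blocks, (unsigned long long)charged,
        100.0 * (double)charged / (double)volsize);
    return (0);
}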

Our team collected a list of the biases in some typical configurations (below). The issue isn't difficult to fix, and we use an idle bit in blkptr_t to handle compatibility.

*********************************************************************************************
raidz1, 8K block (lsize = 8192; asize and dsize in bytes):

 disks   asize   vdev_deflate_ratio   dsize
   3     12288        341/512          8184
   4     11264        383/512          8426
   5     10240        409/512          8180
   6     10240        425/512          8500
   7     10240        436/512          8720
   8     10240        445/512          8900
   9      9216        455/512          8190
  10      9216        458/512          8244
  11      9216        464/512          8352
  12      9216        468/512          8424
  13      9216        471/512          8478
  14      9216        474/512          8532
  15      9216        474/512          8532
  16      9216        478/512          8604
  17      9216        481/512          8658
  18      9216        481/512          8658
  19      9216        481/512          8658
  20      9216        485/512          8730

*********************************************************************************************
raidz2, 8K block (lsize = 8192; asize and dsize in bytes):

 disks   asize   vdev_deflate_ratio   dsize
   4     16896        255/512          8415
   5     15360        305/512          9150
   6     12288        341/512          8184
   7     12288        364/512          8736
   8     12288        383/512          9192
   9     12288        397/512          9528
  10     10752        408/512          8568
  11     10752        416/512          8736
  12     10752        424/512          8904
  13     10752        428/512          8988
  14     10752        436/512          9156
  15     10752        441/512          9261
  16     10752        445/512          9345
  17     10752        445/512          9345
  18      9216        455/512          8190
  19      9216        455/512          8190
  20      9216        455/512          8190
