Bug #874

beadm's space accounting is deeply dubious

Added by Rich Lowe about 12 years ago. Updated almost 11 years ago.

Status: New
Priority: Normal
Assignee: -
Category: cmd - userland programs
Start date: 2011-03-31
Due date:
% Done: 0%
Estimated time:
Difficulty: Medium
Tags: needs-triage
Gerrit CR:
External Bug:

Description

beadm's "Space" field appears to charge space in a straightforward but misleading manner (I'm guessing, based on its behaviour for me):

- A BE is charged with the space consumed by itself and all descendant datasets.

This is misleading, because clone promotion means that the currently active BE is charged with the snapshots backing every other BE. So, my currently active BE is "34GB", but most others are around "200M".

It seems like it'd be more natural to charge a BE for the space used by its dataset and its origin snapshot.

That is, something like this hacky script:

#!/bin/ksh
# Charge a BE with the space used by its own dataset, plus the space
# used by its origin snapshot (if it has one).

DSET=$1

# prop <property> <dataset>: print a single raw ZFS property value
function prop {
    zfs get -Hpo value "$1" "$2"
}

origin=$(prop origin "$DSET")

if [[ $origin == "-" ]]; then
    prop usedbydataset "$DSET"
else
    expr $(prop usedbydataset "$DSET") + $(prop used "$origin")
fi

This charges my currently active BE with 6G and a random inactive BE with 1G, and seems a generally more accurate calculation of how much disk space a given BE is actually causing to be consumed; that is, the amount of space which destroying the BE would free (assuming, of course, that I have my logic regarding ZFS space accounting correct).

#1

Updated by Rich Lowe about 12 years ago

This should also charge a BE for any associated Zone boot environments.

Such that each BE is charged with:
usedbydataset + used-of-origin + usedbydataset-of-each-zbe + used-of-each-zbe-origin
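As a rough sketch (not part of the original report), that rule might look like the script below. The helper names, and the assumption that zone BEs appear as descendant datasets of the BE's own dataset, are illustrative only:

```shell
#!/bin/ksh
# Illustrative sketch: charge a BE with its own usedbydataset plus the
# used of its origin snapshot, and the same pair for each descendant
# dataset (e.g. zone BEs). The descendant-dataset layout is assumed.

# prop <property> <dataset>: print a single raw ZFS property value
prop() {
    zfs get -Hpo value "$1" "$2"
}

# charge <dataset>: usedbydataset + used-of-origin for one dataset
charge() {
    typeset ds=$1 origin total
    origin=$(prop origin "$ds")
    total=$(prop usedbydataset "$ds")
    if [[ $origin != "-" ]]; then
        total=$((total + $(prop used "$origin")))
    fi
    echo "$total"
}

# be_space <be-dataset>: total charge for the BE and its descendants
be_space() {
    typeset be=$1 ds sum
    sum=$(charge "$be")
    # Skip the first line of the recursive listing (the BE itself).
    for ds in $(zfs list -H -r -o name "$be" | tail -n +2); do
        sum=$((sum + $(charge "$ds")))
    done
    echo "$sum"
}
```

In the shared-origin case this would double-charge a snapshot to every BE cloned from it, which is exactly the ambiguity the two-column suggestion is meant to surface.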

Albert pointed out that it is possible for BEs to share origin snapshots, and suggested that "Space" become two columns: a total, and the total of origins. I'm less wedded to the specifics of the shared-snapshot case than I am to (I hope) radically improving the current common case.

#2

Updated by Rich Lowe almost 11 years ago

  • Difficulty set to Medium
  • Tags set to needs-triage

The logical way to implement this is to use the data behind 'zfs destroy -n': each BE would be charged with the space which would be freed were that BE to be destroyed.
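For illustration only: the dry-run data is already visible from the command line, since `zfs destroy -nv` prints a "would reclaim" line. The helper name and the exact parsing below are assumptions about that output format, not a confirmed interface:

```shell
# Sketch: report the space that destroying a dataset tree would free,
# using zfs destroy's dry-run mode. Parsing the "would reclaim" line
# with awk is an assumption about the verbose output format.
be_would_free() {
    zfs destroy -nvr "$1" | awk '/would reclaim/ { print $3 }'
}
```

beadm itself would presumably call the equivalent libzfs machinery directly rather than scraping command output.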
