Bug #13094

systems have more kmem caches than they used to

Added by Joshua M. Clulow about 1 year ago. Updated about 1 year ago.

Status: Closed
Priority: Normal
Category: kernel
% Done: 100%
Difficulty: Medium

Description

The kmem_taskq is used for various housekeeping functions within the kernel memory allocator. One of the most important jobs it performs is kmem_reap(), in which a task is dispatched for each kmem cache that exists on the system.
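
As a rough illustration of that dispatch pattern (not the actual kmem.c code; the cache-walk helpers and reap callback below are hypothetical names), each reap pass walks the cache list and queues one task per cache onto the taskq:

#include <sys/kmem.h>
#include <sys/taskq.h>

/*
 * Hypothetical helpers standing in for the allocator's own cache-list
 * walk and per-cache reap entry point; these are not the real kmem.c names.
 */
extern kmem_cache_t *example_first_cache(void);
extern kmem_cache_t *example_next_cache(kmem_cache_t *);
extern void example_cache_reap(void *);

/*
 * Sketch only: dispatch one reap task per kmem cache onto the taskq
 * without sleeping.
 */
static void
example_reap_all_caches(taskq_t *tq)
{
	kmem_cache_t *cp;

	for (cp = example_first_cache(); cp != NULL;
	    cp = example_next_cache(cp)) {
		/*
		 * TQ_NOSLEEP: do not block waiting for a taskq entry.  If
		 * no prepopulated entry is free and a dynamic allocation
		 * fails, the dispatch returns 0 and this cache is simply
		 * not reaped on this pass.
		 */
		(void) taskq_dispatch(tq, example_cache_reap, cp,
		    TQ_NOSLEEP);
	}
}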

The taskq_dispatch_ent() routine, which allows the use of a taskq entry embedded within a larger structure, did not yet exist (as far as I know) when this facility was designed. As such, the kmem taskq is created with TASKQ_PREPOPULATE and 300 entries that are allocated at system boot and are always available. Over time, the number of kmem caches in the system has grown. Looking at a current illumos system that has undergone a reap, we can see that the maximum queue depth has reached 485:

> ::taskq -n kmem
ADDR             NAME                             ACT/THDS Q'ED  MAXQ INST
fffffe2ce0619d88 kmem_move_taskq                    0/   1    0 211820    0
fffffe2ce0619c68 kmem_taskq                         0/   1    0   485    0
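
For comparison, the embedded-entry pattern that taskq_dispatch_ent() enables would look roughly like the sketch below; the structure and field names are made up for illustration and do not reflect the real kmem_cache_t layout:

#include <sys/taskq.h>
#include <sys/taskq_impl.h>	/* taskq_ent_t */

/*
 * Hypothetical cache structure with a taskq entry embedded in it, so
 * dispatching a reap never has to allocate a taskq entry at dispatch time.
 */
typedef struct example_cache {
	char		ec_name[32];
	taskq_ent_t	ec_reap_ent;	/* storage used by taskq_dispatch_ent() */
} example_cache_t;

extern void example_cache_reap(void *);

static void
example_dispatch_reap(taskq_t *tq, example_cache_t *ecp)
{
	/*
	 * taskq_dispatch_ent() uses the caller-supplied entry, so unlike
	 * taskq_dispatch(..., TQ_NOSLEEP) it cannot fail for lack of memory.
	 */
	taskq_dispatch_ent(tq, example_cache_reap, ecp, 0, &ecp->ec_reap_ent);
}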

Reaping is most critical to complete when the system is almost out of memory. Unfortunately, because only 300 entries are prepopulated, the remaining dispatches must allocate taskq entries dynamically, and under exactly that memory pressure those allocations may fail, leaving more than a hundred of the kmem caches on the system unreaped.

The lowest-risk change here seems to be to bump the prepopulation count to something like 600 entries. At 56 bytes apiece, the additional 300 entries amount to only another 16KB or so. A more invasive change would be to use taskq entries embedded within the kmem_cache_t itself, but for now simply bumping the count seems expedient.
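
A sketch of the low-risk option, assuming the taskq is created with taskq_create(9F) and TASKQ_PREPOPULATE as described above (the function and variable names here are placeholders, not the actual kmem.c initialization code):

#include <sys/types.h>
#include <sys/int_limits.h>	/* INT_MAX */
#include <sys/disp.h>		/* minclsyspri */
#include <sys/taskq.h>

/*
 * Sketch of the proposed bump.  At roughly 56 bytes per taskq entry, the
 * extra 300 prepopulated entries cost about 300 * 56 = 16,800 bytes,
 * i.e. around 16KB, allocated once at boot.
 */
#define	EXAMPLE_KMEM_TASKQ_MINALLOC	600	/* was 300 */

static taskq_t *example_kmem_taskq;

static void
example_kmem_taskq_init(void)
{
	/*
	 * TASKQ_PREPOPULATE allocates "minalloc" entries up front and keeps
	 * them on the free list, so reap dispatches do not have to allocate
	 * memory while the system is already under memory pressure.
	 */
	example_kmem_taskq = taskq_create("kmem_taskq", 1, minclsyspri,
	    EXAMPLE_KMEM_TASKQ_MINALLOC, INT_MAX, TASKQ_PREPOPULATE);
}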
