Feature #11493

aggr needs support for multiple pseudo rx groups

Added by Robert Mustacchi over 1 year ago. Updated 11 months ago.


We need multiple pseudo Rx groups for aggr in order to better utilize the underlying hardware.

MAC groups are an abstraction used to group and program hardware rings. Groups are where we place unicast and VLAN filters. The more groups a MAC provider supports, the more MAC clients (e.g. VNICs) it can hardware accelerate through the use of hardware filtering. This hardware filtering relieves the MAC framework from performing software classification and allows the SRS to poll the MAC's hardware rings. In effect, network performance is better when a client has a reserved MAC group with hardware filtering.
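The client/group relationship described above can be sketched with a toy model (illustrative only; the class and names below are hypothetical, not the actual illumos MAC structures): each client that gets a reserved group is hardware-classified, and once groups run out, clients fall back to the shared default group and software classification.

```python
# Toy model of MAC client placement (hypothetical names, not illumos code).
# A provider exposes a fixed number of Rx groups; each client that gets its
# own group is HW classified, and everyone else lands on the shared default
# group and is SW classified.

class Provider:
    def __init__(self, num_groups):
        # Group 0 is the shared default group; the rest are reservable.
        self.free_groups = list(range(1, num_groups))
        self.placement = {}

    def add_client(self, name):
        if self.free_groups:
            self.placement[name] = ("hw", self.free_groups.pop(0))
        else:
            self.placement[name] = ("sw", 0)  # default group, SW classified
        return self.placement[name]

nic = Provider(num_groups=4)    # 1 default + 3 reservable groups
print(nic.add_client("vnic0"))  # ('hw', 1)
print(nic.add_client("vnic1"))  # ('hw', 2)
print(nic.add_client("vnic2"))  # ('hw', 3)
print(nic.add_client("vnic3"))  # ('sw', 0) -- groups exhausted
```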

An aggr is both a MAC client and a MAC provider. It's a client to the underlying MAC/NIC (aggr_port_t) and a provider of the aggregation (aggr_grp_t). As a provider it must support the groups abstraction to allow clients of the aggr to make use of hardware classification. It does this by creating a "pseudo group": an abstraction that combines one hardware group from each port. E.g., if we have two ixgbe ports whose groups each have 4 rings, then one pseudo group will map to one group on each ixgbe port and contain 8 pseudo rings.
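The pseudo-group construction above can be modeled in a few lines (an illustrative sketch, not the actual aggr_grp_t/aggr_port_t code): one hardware group is taken from each underlying port, and the union of their rings is exposed as pseudo rings.

```python
# Illustrative model of an aggr pseudo Rx group (hypothetical names).
# A pseudo group takes one hardware group from each underlying port and
# presents the union of their rings as pseudo rings.

def make_pseudo_group(ports):
    """ports: list of (port_name, rings_per_group) tuples."""
    pseudo_rings = []
    for port, rings in ports:
        # One hardware group's rings, contributed by this port.
        hw_group = [f"{port}:ring{r}" for r in range(rings)]
        pseudo_rings.extend(hw_group)
    return pseudo_rings

# Two ixgbe ports, each contributing a 4-ring hardware group,
# yield one pseudo group with 8 pseudo rings.
group = make_pseudo_group([("ixgbe0", 4), ("ixgbe1", 4)])
print(len(group))  # 8
```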

The problem is that aggr currently creates only one pseudo group, no matter what the underlying hardware provides. For example, if we aggregate two ixgbe NICs, each with 32 groups, the aggr will make use of only one group and the other 31 will go to waste. The upshot is that an aggr can provide hardware classification for only one client. The moment there are two or more clients, all traffic going over the aggr must be software classified, and performance improvements like polling mode are lost.

The purpose of this ticket is to track the work of adding multiple Rx pseudo group support to aggr.

Testing notes:

On my local workstation running a DEBUG kernel I ran the following tests. Each of these tests was run in 3 scenarios: 1) two ixgbe aggrs with active LACP and default (1500) MTU, 2) two ixgbe aggrs with active LACP and 9000 MTU, 3) mixed aggr of ixgbe/igb with LACP off.

  • Plumb IP on the primary aggr client and make sure it can still send/receive traffic. This traffic should come over the HW lanes, but I introduced a regression where the primary MAC client gets placed on the default group and is SW classified. This doesn't affect the correctness of the program, just potentially its performance. I think it's okay to fix this in a follow-up ticket because in Triton we are never going to pass traffic on the primary MAC. It will always be VLAN VNICs on the aggr/overlay.
  • Same as above but on a VLAN on the primary MAC client (aka dladm create-vlan).
  • Create a VNIC on the aggr, plumb IP on it, and verify traffic is received via HW lanes.
  • Create a VLAN VNIC on the aggr, plumb IP on it, and verify traffic is received via HW lanes.
  • Create two VNICs on the aggr, plumb IP on both, and verify traffic is received via HW lanes for both (this asserts that the pseudo groups are working and that each client gets its own group).
  • Same as above but for VLAN VNICs.
  • Perform variations of the previous tests with more VNICs to exercise the VLAN ref counting mechanisms in MAC/ixgbe. It's important that, if multiple clients exist on a VLAN, the HW filters stay in place until the very last client using that VLAN is deleted.
  • Run various non-aggr tests to make sure that the non-aggr Rx path still works.
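The VLAN ref counting requirement exercised above can be sketched as a toy model (hypothetical names; the real logic lives in mac/ixgbe): the HW filter for a VID is programmed when the first client on that VLAN appears and must persist until the last such client is removed.

```python
# Toy model of VLAN HW-filter ref counting (hypothetical names,
# not the actual mac/ixgbe implementation).

class VlanFilters:
    def __init__(self):
        self.refs = {}           # vid -> number of clients on that VLAN
        self.hw_filters = set()  # vids currently programmed into hardware

    def add_client(self, vid):
        if self.refs.get(vid, 0) == 0:
            self.hw_filters.add(vid)  # first client programs the filter
        self.refs[vid] = self.refs.get(vid, 0) + 1

    def remove_client(self, vid):
        self.refs[vid] -= 1
        if self.refs[vid] == 0:
            self.hw_filters.discard(vid)  # last client removes it

f = VlanFilters()
f.add_client(100)
f.add_client(100)
f.remove_client(100)
print(100 in f.hw_filters)  # True: one client still uses VID 100
f.remove_client(100)
print(100 in f.hw_filters)  # False: last client is gone
```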

I also ran a test where I would continuously ping a bogus IP to generate L2 broadcast traffic. I would do this while booting a server with aggrs and make sure that the broadcast traffic didn't interfere with the creation of the aggr like we saw in OS-6697.

Finally, I ran several tests on a JPC CN. I booted a JPC CN on these aggr bits and then created two VMs: one KVM (KVM_A), one SmartOS container (SOS_A). I put both VMs on the external and fabric (overlay) networks. I then created a SmartOS container (SOS_B) on another CN. Then I ran the following tests:

  • iperf3 from SOS_B to KVM_A over external
  • iperf3 from SOS_B to KVM_A over fabric
  • iperf3 from SOS_B to SOS_A over external
  • iperf3 from SOS_B to SOS_A over fabric
  • iperf3 from SOS_A to KVM_A over fabric (this exercises the MAC-loopback Tx path)
  • iperf3 from SOS_A to KVM_A over external (this exercises the MAC-loopback Tx path)


mac_test (61.9 KB) — Ryan Zezeski, 2020-03-03 05:18 AM

Updated by Robert Mustacchi over 1 year ago

This also includes the fix for illumos-joyent 'OS-6872 mac deadlock in aggrs'. We didn't want to integrate the deadlock that this change introduced, so we've squashed the fix in as part of the upstreaming effort.


Updated by Ryan Zezeski 11 months ago

I tested this using a script (mac_test) that I developed at Joyent alongside the original patch. I updated the script to be a bit cleaner and also added new tests. The basic idea of this script is to create various configurations of VNICs/VLANs/aggrs and verify that a) traffic is passed and b) traffic is passed on the correct path (hardware vs software lanes). It's basically an indirect way of testing mac/aggr/ixgbe via pure userland manipulations.

Here is the output when running the script against this change (as well as the other issues linked to this one).

$ ssh thunderhead
The Illumos Project     SunOS 5.11      aggr-vlan-upstream-merged-0-g839a17934f Feb. 24, 2020
SunOS Internal Development: rpz 2020-Feb-24 [illumos-gate]
rpz@thunderhead:~$ pfexec ./mac_test ixgbe0 ixgbe1 ixgbe2 ixgbe3
PASS [test_generic_create_vnic<ixgbe0>]
PASS [test_generic_create_vlan<ixgbe0>]
PASS [test_generic_vnic_hw_rx<ixgbe2>]
PASS [test_generic_vlan_hw_rx<ixgbe2>]
PASS [test_generic_sw_to_hw<ixgbe2>]
PASS [test_generic_promisc<ixgbe2>]
PASS [test_generic_promisc_vlan<ixgbe2>]
PASS [test_generic_vlan_all_groups<ixgbe2>]
PASS [test_generic_vlan_steal_group<ixgbe2>]
PASS [test_generic_vlan_shared_addr<ixgbe2>]
PASS [test_generic_two_vlan_on_default<ixgbe2>]
PASS [test_ixgbe_many_vlans<ixgbe2>]
PASS [test_ixgbe_vfta_repeated_vid<ixgbe2>]
PASS [test_ixgbe_missing_default<ixgbe2>]
PASS [test_etherstub_vlan]
PASS [test_generic_aggr_primary<aggr_recv1>]
PASS [test_generic_aggr_primary_vlan<aggr_recv1>]
PASS [test_generic_aggr_vnic<aggr_recv1>]
PASS [test_generic_aggr_vlan<aggr_recv1>]
PASS [test_generic_aggr_two_vnics<aggr_recv1>]
PASS [test_generic_aggr_two_vlans<aggr_recv1>]
PASS [test_generic_aggr_vlans_shared_addr]
PASS [test_generic_aggr_ref_count<aggr_recv1>]
PASS [test_generic_aggr_vlan_ref_count<aggr_recv1>]
PASS [test_generic_aggr_vnic_and_vlan<aggr_recv1>]
PASS [test_generic_aggr_steal_group<aggr_recv1>]
PASS [test_generic_aggr_vlan_steal_group<aggr_recv1>]
PASS [test_aggr_ixgbe_promisc<aggr_recv1>]
PASS [test_aggr_vlan_ixgbe_promisc]

The script is also attached to this ticket.


Updated by Electric Monk 11 months ago

  • Status changed from New to Closed

git commit 45948e49c407e4fc264fdd289ed632d6639e009d

commit  45948e49c407e4fc264fdd289ed632d6639e009d
Author: Ryan Zezeski <>
Date:   2020-03-02T14:43:17.000Z

    11493 aggr needs support for multiple pseudo rx groups
    Portions contributed by: Dan McDonald <>
    Reviewed by: Patrick Mooney <>
    Reviewed by: Jerry Jelinek <>
    Reviewed by: Robert Mustacchi <>
    Reviewed by: Paul Winder <>
    Approved by: Gordon Ross <>
