Feature #14450


Want PCI platform resource discovery module

Added by Robert Mustacchi 7 months ago. Updated 6 months ago.

Category: driver - device drivers
As part of the ongoing effort around IPD 21 PCI Platform Unification, the first major step is a factored-out set of platform resource discovery functions. This change provides an initial pass at that in sys/plat/pci_prd.h and refactors i86pc to use it, allowing us to drop acpica as a dependent module in a few places and instead concentrate that dependency in the platform-specific prd module. I have also prototyped this against a different x86-based platform that does not use ACPI or related bits, and so far both worlds co-exist with a single set of pci_autoconfig and related modules.

For a full discussion of the actual endpoints and APIs a platform would nominally implement, see the uts/common/sys/plat/pci_prd.h file associated with this change.

In addition, I have begun testing and verified that this results in no changes to the actual enumerated devices, their assigned addresses, and related properties across some initial platforms. This will be augmented with the full set once we get through a bit more review and community testing.

#1

Updated by Electric Monk 7 months ago

  • Gerrit CR set to 1981
#2

Updated by Robert Mustacchi 6 months ago

Testing this is pretty important, so the primary approach I took was tracking down several different classes of systems that had different types of firmware and had things like the PCI BIOS IRQ routing tables present (even on modern systems). On each system in question, I grabbed prtconf -v output and a full dump of all PCI devices via pcieadm save-cfgspace -a, both before and after the change, and then looked at the following:

  • Did the set of properties and addresses assigned to devices change at all? Since this is mostly code shuffling, our expectation is that it should not.
  • Did we end up finding and configuring the same set of PCI devices, and does their configuration space look the same?

Thanks to others, I ended up looking at several different systems:

  • An AMD Rome based server which notably had a PCI IRQ routing table
  • A qemu based virtual machine running on Linux/KVM based on a Kaby Lake Intel laptop, which mostly only has PCI
  • A Westmere based server system
  • An Ivy Bridge server based system
  • An Ivy Bridge client based system, which predominantly only has PCI devices and includes pci-ide, which has some special boot paths in the code
  • A Skylake-E system (Silver 4110) that was a bit of a workstation build.

Across all the different systems, the only difference I found was that the order of two properties in the devices tree had changed: they have the same values as before, but acpi-namespace now comes after slot-names. All in all this is a pretty good result. The actual device enumeration is identical and nothing has really been rejiggered; even the assignments of configuration space and BARs are the same across the change, which is what we'd expect.

#3

Updated by Electric Monk 6 months ago

  • Status changed from New to Closed
  • % Done changed from 0 to 100

git commit cd0d4b4073e62fa22997078b1595f399434a1047

commit  cd0d4b4073e62fa22997078b1595f399434a1047
Author: Robert Mustacchi <>
Date:   2022-02-17T19:50:22.000Z

    14450 Want PCI platform resource discovery module
    Reviewed by: Rich Lowe <>
    Reviewed by: Patrick Mooney <>
    Reviewed by: Andy Fiddaman <>
    Approved by: Dan McDonald <>

