Message ID | 20230605235005.20649-1-ankita@nvidia.com (mailing list archive) |
---|---|
Series | Expose GPU memory as coherently CPU accessible |
Hello Ankit,

On 6/6/23 01:50, ankita@nvidia.com wrote:
> From: Ankit Agrawal <ankita@nvidia.com>
>
> NVIDIA is building systems which allow the CPU to coherently access
> GPU memory. This GPU device memory can be added and managed by the
> kernel memory manager. This series holds the required changes in QEMU
> to expose this memory to VMs that have the device assigned.
>
> The GPU device memory region is exposed as device BAR1 and QEMU mmaps
> it. QEMU then adds new proximity domains to represent the memory in
> the VM ACPI SRAT, which allows the device memory to be added as
> separate NUMA nodes inside the VM. The proximity domains (PXM) are
> passed to the VM using ACPI DSD properties to help VM kernel modules
> add the memory.
>
> Current Linux cannot create NUMA nodes on the fly, so enough NUMA
> nodes must be created in ACPI for them to be available at VM boot
> time. The physical platform firmware provides 8 NUMA nodes, which
> QEMU emulates here.
>
> A new vfio-pci variant driver is added to manage the device memory
> and report it as a BAR. The corresponding kernel-side changes, along
> with the new vfio-pci variant driver, are still under review.
> Ref: https://lore.kernel.org/lkml/20230405180134.16932-1-ankita@nvidia.com/
>
> Applied over v8.0.2.
>
> Ankit Agrawal (4):
>   qemu: add GPU memory information as object
>   qemu: patch guest SRAT for GPU memory
>   qemu: patch guest DSDT for GPU memory
>   qemu: adjust queried bar size to power-of-2

Please use the "vfio:" subject prefix when modifying the hw/vfio files.
If you are not sure and want to know what the current practice is,
simply run:

  git log --pretty=oneline <files>

Also, to know who to send the series to, please use:

  ./scripts/get_maintainer.pl <patches>

Thanks,

C.

>
>  hw/arm/virt-acpi-build.c    | 54 ++++++++++++++++++++++++++++
>  hw/pci-host/gpex-acpi.c     | 71 ++++++++++++++++++++++++++++++++++++
>  hw/vfio/common.c            |  2 +-
>  hw/vfio/pci-quirks.c        | 13 +++++++
>  hw/vfio/pci.c               | 72 +++++++++++++++++++++++++++++++++++++
>  hw/vfio/pci.h               |  1 +
>  include/hw/pci/pci_device.h |  3 ++
>  7 files changed, 215 insertions(+), 1 deletion(-)
>
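For reference, a concrete invocation of those two checks might look as
follows; the file picked from the diffstat and the "outgoing/" patch
directory are only illustrative assumptions, not paths used by the
series itself:

  # show the subject prefixes used by recent commits touching this file
  git log --pretty=oneline -10 hw/vfio/pci.c

  # list the maintainers and mailing lists to Cc for the exported patches
  ./scripts/get_maintainer.pl outgoing/*.patch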
From: Ankit Agrawal <ankita@nvidia.com>

NVIDIA is building systems which allow the CPU to coherently access GPU
memory. This GPU device memory can be added and managed by the kernel
memory manager. This series holds the required changes in QEMU to
expose this memory to VMs that have the device assigned.

The GPU device memory region is exposed as device BAR1 and QEMU mmaps
it. QEMU then adds new proximity domains to represent the memory in the
VM ACPI SRAT, which allows the device memory to be added as separate
NUMA nodes inside the VM. The proximity domains (PXM) are passed to the
VM using ACPI DSD properties to help VM kernel modules add the memory.

Current Linux cannot create NUMA nodes on the fly, so enough NUMA nodes
must be created in ACPI for them to be available at VM boot time. The
physical platform firmware provides 8 NUMA nodes, which QEMU emulates
here.

A new vfio-pci variant driver is added to manage the device memory and
report it as a BAR. The corresponding kernel-side changes, along with
the new vfio-pci variant driver, are still under review.
Ref: https://lore.kernel.org/lkml/20230405180134.16932-1-ankita@nvidia.com/

Applied over v8.0.2.

Ankit Agrawal (4):
  qemu: add GPU memory information as object
  qemu: patch guest SRAT for GPU memory
  qemu: patch guest DSDT for GPU memory
  qemu: adjust queried bar size to power-of-2

 hw/arm/virt-acpi-build.c    | 54 ++++++++++++++++++++++++++++
 hw/pci-host/gpex-acpi.c     | 71 ++++++++++++++++++++++++++++++++++++
 hw/vfio/common.c            |  2 +-
 hw/vfio/pci-quirks.c        | 13 +++++++
 hw/vfio/pci.c               | 72 +++++++++++++++++++++++++++++++++++++
 hw/vfio/pci.h               |  1 +
 include/hw/pci/pci_device.h |  3 ++
 7 files changed, 215 insertions(+), 1 deletion(-)
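To illustrate the SRAT side of the approach, here is a minimal sketch of
how the extra proximity domains could be emitted with QEMU's existing
ACPI helper build_srat_memory() (declared in include/hw/acpi/aml-build.h).
The node count macro, base/size parameters and function name below are
assumptions made for illustration, not the identifiers used by this
series:

  #include "qemu/osdep.h"
  #include "hw/acpi/aml-build.h"

  /* The physical platform firmware exposes 8 NUMA nodes for the GPU. */
  #define GPU_NUM_NODES 8

  /*
   * Emit one SRAT Memory Affinity structure per GPU proximity domain,
   * carving the mmap'ed BAR1 region into equal slices. Marking the
   * ranges hotpluggable lets the guest create the NUMA nodes at boot,
   * even though the memory is only onlined later by the driver in the
   * VM.
   */
  static void build_srat_gpu_memory(GArray *table_data, uint64_t gpu_base,
                                    uint64_t gpu_size, int first_node)
  {
      uint64_t per_node = gpu_size / GPU_NUM_NODES;

      for (int i = 0; i < GPU_NUM_NODES; i++) {
          build_srat_memory(table_data, gpu_base + i * per_node, per_node,
                            first_node + i,
                            MEM_AFFINITY_ENABLED | MEM_AFFINITY_HOTPLUGGABLE);
      }
  }

Such a helper would be called from the machine's build_srat() (for
arm/virt, hw/arm/virt-acpi-build.c) after the regular memory nodes, and
the resulting PXM values would then also be advertised to the guest via
the ACPI DSD properties mentioned above.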