Message ID | 20250123174854.3338-4-ankita@nvidia.com (mailing list archive)
---|---
State | New
Series | vfio/nvgrace-gpu: Enable grace blackwell boards
On Thu, 23 Jan 2025 17:48:54 +0000 <ankita@nvidia.com> wrote:

> From: Ankit Agrawal <ankita@nvidia.com>
>
> In contrast to Grace Hopper systems, the HBM training has been moved
> out of the UEFI on the Grace Blackwell systems. This reduces the system
> bootup time significantly.
>
> The onus of checking whether the HBM training has completed thus falls
> on the module.
>
> The HBM training status can be determined from a BAR0 register.
> Similarly, another BAR0 register exposes the status of the CPU-GPU
> chip-to-chip (C2C) cache coherent interconnect.
>
> Based on testing, 30s is determined to be sufficient to ensure
> initialization completion on all the Grace based systems. Thus poll
> these registers for up to 30s. If the HBM training is not complete
> or if the C2C link is not ready, fail the probe.
>
> While the wait is not required on Grace Hopper systems, it is
> beneficial to make the check to ensure the device is in an
> expected state. Hence the check is kept generalized to both
> generations.
>
> Ensure that BAR0 is enabled before accessing the registers.
> CC: Alex Williamson <alex.williamson@redhat.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
> ---
>  drivers/vfio/pci/nvgrace-gpu/main.c | 72 +++++++++++++++++++++++++++++
>  1 file changed, 72 insertions(+)
>
> diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
> index f4f23c0c95c7..fc480ea32c11 100644
> --- a/drivers/vfio/pci/nvgrace-gpu/main.c
> +++ b/drivers/vfio/pci/nvgrace-gpu/main.c
> @@ -5,6 +5,8 @@
>
>  #include <linux/sizes.h>
>  #include <linux/vfio_pci_core.h>
> +#include <linux/delay.h>
> +#include <linux/jiffies.h>
>
>  /*
>   * The device memory usable to the workloads running in the VM is cached
> @@ -25,6 +27,13 @@
>
>  #define GPU_CAP_DVSEC_REGISTER 3
>
> +#define C2C_LINK_BAR0_OFFSET 0x1498
> +#define HBM_TRAINING_BAR0_OFFSET 0x200BC
> +#define STATUS_READY 0xFF
> +
> +#define POLL_QUANTUM_MS 1000
> +#define POLL_TIMEOUT_MS (30 * 1000)
> +
>  /*
>   * The state of the two device memory region - resmem and usemem - is
>   * saved as struct mem_region.
> @@ -861,6 +870,65 @@ static bool nvgrace_gpu_has_mig_hw_bug(struct pci_dev *pdev)
>  	return true;
>  }
>
> +/*
> + * To reduce the system bootup time, the HBM training has
> + * been moved out of the UEFI on the Grace-Blackwell systems.
> + *
> + * The onus of checking whether the HBM training has completed
> + * thus falls on the module. The HBM training status can be
> + * determined from a BAR0 register.
> + *
> + * Similarly, another BAR0 register exposes the status of the
> + * CPU-GPU chip-to-chip (C2C) cache coherent interconnect.
> + *
> + * Poll these registers and check for 30s. If the HBM training is
> + * not complete or if the C2C link is not ready, fail the probe.
> + *
> + * While the wait is not required on Grace Hopper systems, it
> + * is beneficial to make the check to ensure the device is in an
> + * expected state.
> + *
> + * Ensure that the BAR0 region is enabled before accessing the
> + * registers.
> + */
> +static int nvgrace_gpu_wait_device_ready(struct pci_dev *pdev)
> +{
> +	unsigned long timeout = jiffies + msecs_to_jiffies(POLL_TIMEOUT_MS);
> +	void __iomem *io;
> +	int ret = -ETIME;
> +
> +	ret = pci_enable_device(pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = pci_request_selected_regions(pdev, 1 << 0, "vfio-pci");

All the overhead of enabling the device and requesting the region, only
to undo it around this simple test is unfortunate, but I think correct.

Even though this is only briefly taken, I'd suggest using KBUILD_MODNAME
there rather than "vfio-pci" to differentiate from the core code.

Thanks,
Alex

> +	if (ret)
> +		goto request_region_exit;
> +
> +	io = pci_iomap(pdev, 0, 0);
> +	if (!io) {
> +		ret = -ENOMEM;
> +		goto iomap_exit;
> +	}
> +
> +	do {
> +		if ((ioread32(io + C2C_LINK_BAR0_OFFSET) == STATUS_READY) &&
> +		    (ioread32(io + HBM_TRAINING_BAR0_OFFSET) == STATUS_READY)) {
> +			ret = 0;
> +			goto reg_check_exit;
> +		}
> +		msleep(POLL_QUANTUM_MS);
> +	} while (!time_after(jiffies, timeout));
> +
> +reg_check_exit:
> +	pci_iounmap(pdev, io);
> +iomap_exit:
> +	pci_release_selected_regions(pdev, 1 << 0);
> +request_region_exit:
> +	pci_disable_device(pdev);
> +	return ret;
> +}
> +
>  static int nvgrace_gpu_probe(struct pci_dev *pdev,
>  			     const struct pci_device_id *id)
>  {
> @@ -869,6 +937,10 @@ static int nvgrace_gpu_probe(struct pci_dev *pdev,
>  	u64 memphys, memlength;
>  	int ret;
>
> +	ret = nvgrace_gpu_wait_device_ready(pdev);
> +	if (ret)
> +		return ret;
> +
>  	ret = nvgrace_gpu_fetch_memory_property(pdev, &memphys, &memlength);
>  	if (!ret)
>  		ops = &nvgrace_gpu_pci_ops;
>> +static int nvgrace_gpu_wait_device_ready(struct pci_dev *pdev)
>> +{
>> +	unsigned long timeout = jiffies + msecs_to_jiffies(POLL_TIMEOUT_MS);
>> +	void __iomem *io;
>> +	int ret = -ETIME;
>> +
>> +	ret = pci_enable_device(pdev);
>> +	if (ret)
>> +		return ret;
>> +
>> +	ret = pci_request_selected_regions(pdev, 1 << 0, "vfio-pci");
>
> All the overhead of enabling the device and requesting the region, only
> to undo it around this simple test is unfortunate, but I think correct.

Yeah, thanks for guiding through that.

> Even though this is only briefly taken, I'd suggest using KBUILD_MODNAME
> there rather than "vfio-pci" to differentiate from the core code.
>
> Thanks,
>
> Alex

Understood, will make the change.
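The agreed follow-up would presumably amount to a one-line change in the next revision of the series, sketched here; `KBUILD_MODNAME` expands to the name the module is built as (likely "nvgrace_gpu" here, though that depends on the Makefile), which distinguishes this driver's region ownership from the vfio-pci core in e.g. /proc/iomem:

```diff
-	ret = pci_request_selected_regions(pdev, 1 << 0, "vfio-pci");
+	ret = pci_request_selected_regions(pdev, 1 << 0, KBUILD_MODNAME);
```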
diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
index f4f23c0c95c7..fc480ea32c11 100644
--- a/drivers/vfio/pci/nvgrace-gpu/main.c
+++ b/drivers/vfio/pci/nvgrace-gpu/main.c
@@ -5,6 +5,8 @@
 
 #include <linux/sizes.h>
 #include <linux/vfio_pci_core.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
 
 /*
  * The device memory usable to the workloads running in the VM is cached
@@ -25,6 +27,13 @@
 
 #define GPU_CAP_DVSEC_REGISTER 3
 
+#define C2C_LINK_BAR0_OFFSET 0x1498
+#define HBM_TRAINING_BAR0_OFFSET 0x200BC
+#define STATUS_READY 0xFF
+
+#define POLL_QUANTUM_MS 1000
+#define POLL_TIMEOUT_MS (30 * 1000)
+
 /*
  * The state of the two device memory region - resmem and usemem - is
  * saved as struct mem_region.
@@ -861,6 +870,65 @@ static bool nvgrace_gpu_has_mig_hw_bug(struct pci_dev *pdev)
 	return true;
 }
 
+/*
+ * To reduce the system bootup time, the HBM training has
+ * been moved out of the UEFI on the Grace-Blackwell systems.
+ *
+ * The onus of checking whether the HBM training has completed
+ * thus falls on the module. The HBM training status can be
+ * determined from a BAR0 register.
+ *
+ * Similarly, another BAR0 register exposes the status of the
+ * CPU-GPU chip-to-chip (C2C) cache coherent interconnect.
+ *
+ * Poll these registers and check for 30s. If the HBM training is
+ * not complete or if the C2C link is not ready, fail the probe.
+ *
+ * While the wait is not required on Grace Hopper systems, it
+ * is beneficial to make the check to ensure the device is in an
+ * expected state.
+ *
+ * Ensure that the BAR0 region is enabled before accessing the
+ * registers.
+ */
+static int nvgrace_gpu_wait_device_ready(struct pci_dev *pdev)
+{
+	unsigned long timeout = jiffies + msecs_to_jiffies(POLL_TIMEOUT_MS);
+	void __iomem *io;
+	int ret = -ETIME;
+
+	ret = pci_enable_device(pdev);
+	if (ret)
+		return ret;
+
+	ret = pci_request_selected_regions(pdev, 1 << 0, "vfio-pci");
+	if (ret)
+		goto request_region_exit;
+
+	io = pci_iomap(pdev, 0, 0);
+	if (!io) {
+		ret = -ENOMEM;
+		goto iomap_exit;
+	}
+
+	do {
+		if ((ioread32(io + C2C_LINK_BAR0_OFFSET) == STATUS_READY) &&
+		    (ioread32(io + HBM_TRAINING_BAR0_OFFSET) == STATUS_READY)) {
+			ret = 0;
+			goto reg_check_exit;
+		}
+		msleep(POLL_QUANTUM_MS);
+	} while (!time_after(jiffies, timeout));
+
+reg_check_exit:
+	pci_iounmap(pdev, io);
+iomap_exit:
+	pci_release_selected_regions(pdev, 1 << 0);
+request_region_exit:
+	pci_disable_device(pdev);
+	return ret;
+}
+
 static int nvgrace_gpu_probe(struct pci_dev *pdev,
 			     const struct pci_device_id *id)
 {
@@ -869,6 +937,10 @@ static int nvgrace_gpu_probe(struct pci_dev *pdev,
 	u64 memphys, memlength;
 	int ret;
 
+	ret = nvgrace_gpu_wait_device_ready(pdev);
+	if (ret)
+		return ret;
+
 	ret = nvgrace_gpu_fetch_memory_property(pdev, &memphys, &memlength);
 	if (!ret)
 		ops = &nvgrace_gpu_pci_ops;