Message ID | 20230724110406.107212-7-yi.l.liu@intel.com
---|---
State | New, archived
Series | iommufd: Add nesting infrastructure
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Monday, July 24, 2023 7:04 PM
>
> 	}
>
> 	rc = iopt_table_enforce_dev_resv_regions(&hwpt->ioas->iopt, idev->dev,
> -						 &idev->igroup->sw_msi_start);
> +						 &idev->igroup->sw_msi_start,
> +						 !!hwpt->parent);
> 	if (rc)
> 		goto err_unlock;

I prefer not to set the parent ioas in the hwpt in iommufd_hw_pagetable_alloc(). Then the ioas can be retrieved here from hwpt->parent, and it would be pretty clear that in the nested case the sw_msi reservation happens in the parent instead of pretending the stage-1 hwpt has an ioas too.
On Fri, Jul 28, 2023 at 10:02:36AM +0000, Tian, Kevin wrote:
> > From: Liu, Yi L <yi.l.liu@intel.com>
> > Sent: Monday, July 24, 2023 7:04 PM
> >
> > 	}
> >
> > 	rc = iopt_table_enforce_dev_resv_regions(&hwpt->ioas->iopt, idev->dev,
> > -						 &idev->igroup->sw_msi_start);
> > +						 &idev->igroup->sw_msi_start,
> > +						 !!hwpt->parent);
> > 	if (rc)
> > 		goto err_unlock;
>
> I prefer not to set the parent ioas in the hwpt in iommufd_hw_pagetable_alloc().

Yes, the prior patch didn't add it to the iopt, so it shouldn't have an ioas set. The NESTED domains don't have an IOAS almost by definition.

> Then the ioas can be retrieved here from hwpt->parent, and it'd be pretty
> clear that in the nested case the sw_msi reservation happens in the parent
> instead of pretending the stage-1 hwpt has an ioas too.

Yeah, I'm confused by this patch as well. Since there should be no IOAS for the NESTED, why are we messing with resv_regions?

Jason
diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index 97e4e5f5aca0..c0917683097f 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -347,7 +347,8 @@ int iommufd_hw_pagetable_attach(struct iommufd_hw_pagetable *hwpt,
 	}
 
 	rc = iopt_table_enforce_dev_resv_regions(&hwpt->ioas->iopt, idev->dev,
-						 &idev->igroup->sw_msi_start);
+						 &idev->igroup->sw_msi_start,
+						 !!hwpt->parent);
 	if (rc)
 		goto err_unlock;
 
@@ -445,7 +446,8 @@ iommufd_device_do_replace(struct iommufd_device *idev,
 	if (hwpt->ioas != old_hwpt->ioas) {
 		list_for_each_entry(cur, &igroup->device_list, group_item) {
 			rc = iopt_table_enforce_dev_resv_regions(
-				&hwpt->ioas->iopt, cur->dev, NULL);
+				&hwpt->ioas->iopt, cur->dev, NULL,
+				!!hwpt->parent);
 			if (rc)
 				goto err_unresv;
 		}
diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
index 4d095115c2d0..e658261f44ed 100644
--- a/drivers/iommu/iommufd/io_pagetable.c
+++ b/drivers/iommu/iommufd/io_pagetable.c
@@ -1172,7 +1172,8 @@ void iopt_remove_access(struct io_pagetable *iopt,
 /* Narrow the valid_iova_itree to include reserved ranges from a device. */
 int iopt_table_enforce_dev_resv_regions(struct io_pagetable *iopt,
 					struct device *dev,
-					phys_addr_t *sw_msi_start)
+					phys_addr_t *sw_msi_start,
+					bool sw_msi_only)
 {
 	struct iommu_resv_region *resv;
 	LIST_HEAD(resv_regions);
@@ -1198,6 +1199,8 @@ int iopt_table_enforce_dev_resv_regions(struct io_pagetable *iopt,
 			num_sw_msi++;
 		}
 
+		if (sw_msi_only && resv->type != IOMMU_RESV_SW_MSI)
+			continue;
 		rc = iopt_reserve_iova(iopt, resv->start,
 				       resv->length - 1 + resv->start, dev);
 		if (rc)
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 90dcf4041530..268ae0d5ae12 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -79,7 +79,8 @@ void iopt_table_remove_domain(struct io_pagetable *iopt,
 			      struct iommu_domain *domain);
 int iopt_table_enforce_dev_resv_regions(struct io_pagetable *iopt,
 					struct device *dev,
-					phys_addr_t *sw_msi_start)
+					phys_addr_t *sw_msi_start,
+					bool sw_msi_only);
 int iopt_set_allow_iova(struct io_pagetable *iopt,
 			struct rb_root_cached *allowed_iova);
 int iopt_reserve_iova(struct io_pagetable *iopt, unsigned long start,