Message ID | 7-v1-3c8177327a47+256-iommu_group_locking_jgg@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | Refine the locking for dev->iommu_group
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, July 19, 2023 3:06 AM
>
> -static struct iommu_group *ipmmu_find_group(struct device *dev)
> -{
> -	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
> -	struct iommu_group *group;
> -
> -	if (mmu->group)
> -		return iommu_group_ref_get(mmu->group);
> -
> -	group = iommu_group_alloc();
> -	if (!IS_ERR(group))
> -		mmu->group = group;
> -
> -	return group;
> -}

Looks like the existing code has a leak, since the group is never freed
in ipmmu_remove(). While at it, could you add a patch to fix that (so it
can be backported) and then this patch to convert to the generic helper?
On Fri, Jul 21, 2023 at 07:20:21AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, July 19, 2023 3:06 AM
> >
> > -static struct iommu_group *ipmmu_find_group(struct device *dev)
> > -{
> > -	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
> > -	struct iommu_group *group;
> > -
> > -	if (mmu->group)
> > -		return iommu_group_ref_get(mmu->group);
> > -
> > -	group = iommu_group_alloc();
> > -	if (!IS_ERR(group))
> > -		mmu->group = group;
> > -
> > -	return group;
> > -}
>
> Looks like the existing code has a leak, since the group is never freed
> in ipmmu_remove().

IIRC many of the drivers had this, or other bugs here.

> While at it, could you add a patch to fix that (so it can be backported)
> and then this patch to convert to the generic helper?

It isn't worth it, these drivers are never removed, so it is a bug that
nobody will ever hit.

Jason
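For reference (not from the thread): the kind of backportable one-liner Kevin
is asking about would amount to dropping the reference that ipmmu_find_group()
takes via iommu_group_alloc(). A minimal sketch, assuming the pre-conversion
mmu->group field and a hypothetical helper that ipmmu_remove() would call:

	/*
	 * Hypothetical sketch only -- as Jason notes above, this driver is
	 * never removed in practice, so a separate backportable fix was
	 * judged not worth it.  The idea is to drop the reference taken by
	 * iommu_group_alloc() so the group can be freed once the last
	 * per-device reference is also put.
	 */
	static void ipmmu_put_group(struct ipmmu_vmsa_device *mmu)
	{
		if (mmu->group) {
			iommu_group_put(mmu->group);
			mmu->group = NULL;
		}
	}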
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 9f64c5c9f5b90a..55b9b29221461c 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -63,7 +63,6 @@ struct ipmmu_vmsa_device {
 	struct ipmmu_vmsa_domain *domains[IPMMU_CTX_MAX];
 	s8 utlb_ctx[IPMMU_UTLB_MAX];
 
-	struct iommu_group *group;
 	struct dma_iommu_mapping *mapping;
 };
 
@@ -832,28 +831,17 @@ static void ipmmu_release_device(struct device *dev)
 	arm_iommu_release_mapping(mmu->mapping);
 }
 
-static struct iommu_group *ipmmu_find_group(struct device *dev)
-{
-	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
-	struct iommu_group *group;
-
-	if (mmu->group)
-		return iommu_group_ref_get(mmu->group);
-
-	group = iommu_group_alloc();
-	if (!IS_ERR(group))
-		mmu->group = group;
-
-	return group;
-}
-
 static const struct iommu_ops ipmmu_ops = {
 	.domain_alloc = ipmmu_domain_alloc,
 	.probe_device = ipmmu_probe_device,
 	.release_device = ipmmu_release_device,
 	.probe_finalize = ipmmu_probe_finalize,
+	/*
+	 * FIXME: The device grouping is a fixed property of the hardware's
+	 * ability to isolate and control DMA, it should not depend on kconfig.
+	 */
 	.device_group = IS_ENABLED(CONFIG_ARM) && !IS_ENABLED(CONFIG_IOMMU_DMA)
-			? generic_device_group : ipmmu_find_group,
+			? generic_device_group : generic_single_device_group,
 	.pgsize_bitmap = SZ_1G | SZ_2M | SZ_4K,
 	.of_xlate = ipmmu_of_xlate,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
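For readers who have not seen the earlier patch in this series:
generic_single_device_group() centralizes exactly the pattern being deleted
here -- lazily allocate one iommu_group per IOMMU instance and give every
device its own reference to it. A rough sketch of that pattern (illustrative
only; the field name singleton_group and the helper name are assumptions, see
the earlier patch in the series for the real implementation):

	/*
	 * Illustrative sketch of the "one group per IOMMU instance" pattern
	 * that generic_single_device_group() factors out of drivers like
	 * ipmmu-vmsa.  Field and function names here are placeholders, not
	 * the exact upstream code.
	 */
	static struct iommu_group *single_group_get(struct iommu_device *iommu)
	{
		if (!iommu->singleton_group) {
			struct iommu_group *group = iommu_group_alloc();

			if (IS_ERR(group))
				return group;
			iommu->singleton_group = group;
		}
		/* Each probed device takes its own reference on the shared group. */
		return iommu_group_ref_get(iommu->singleton_group);
	}

With the lifetime handled in core code, the driver no longer needs to carry
its own group pointer, which is what lets this patch delete mmu->group
entirely.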
Use the new helper. This driver is kind of weird since in ARM mode it
pretends it has per-device groups, but ARM64 mode does not.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/ipmmu-vmsa.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)