Message ID | da4827fda833e69dbe487ef404a9333c51d8ed2e.1733398913.git.leon@kernel.org
---|---
State | New
Series | Provide a new two step DMA mapping API
On Thu, Dec 05, 2024 at 03:21:03PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Add kernel-doc section for iommu_unmap and iommu_unmap_fast to document
> existing limitation of underlying functions which can't split individual
> ranges.
>
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/iommu/iommu.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index ec75d14497bf..9eb7c7d7aa70 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2590,6 +2590,24 @@ size_t iommu_unmap(struct iommu_domain *domain,
>  }
>  EXPORT_SYMBOL_GPL(iommu_unmap);
>
> +/**
> + * iommu_unmap_fast() - Remove mappings from a range of IOVA without IOTLB sync
> + * @domain: Domain to manipulate
> + * @iova: IO virtual address to start
> + * @size: Length of the range starting from @iova
> + * @iotlb_gather: range information for a pending IOTLB flush
> + *
> + * iommu_unmap_fast() will remove a translation created by iommu_map(). It cannot
> + * subdivide a mapping created by iommu_map(), so it should be called with IOVA
> + * ranges that match what was passed to iommu_map(). The range can aggregate
> + * contiguous iommu_map() calls so long as no individual range is split.
> + *
> + * Basicly iommu_unmap_fast() as the same as iommu_unmap() but for callers

Typo: s/Basicly/Basically/
Typo: s/as the same/is the same/

> + * which manage IOTLB flush range externaly to perform batched sync.

Grammar: s/manage IOTLB flush range/manage the IOTLB flushing/
Typo: s/externaly/externally/
Grammar: s/to perform batched sync/to perform a batched sync/

With those:

Acked-by: Will Deacon <will@kernel.org>

Thank you for doing this!

Will
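The range-matching rule the kernel-doc spells out (the unmap may cover several contiguous iommu_map() calls, but must not split any single one of them) can be seen in a short sketch. This is not part of the patch: the helper name, domain, IOVA, physical address and sizes below are hypothetical; only iommu_map()/iommu_unmap() are the existing in-tree API.

#include <linux/iommu.h>
#include <linux/sizes.h>

/*
 * Illustration only: two adjacent 2M mappings may be torn down with a
 * single unmap covering both, but an unmap that would split one of the
 * original mappings is not supported.
 */
static int example_map_then_unmap(struct iommu_domain *domain,
				  unsigned long iova, phys_addr_t pa)
{
	int ret;

	ret = iommu_map(domain, iova, pa, SZ_2M,
			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
	if (ret)
		return ret;

	ret = iommu_map(domain, iova + SZ_2M, pa + SZ_2M, SZ_2M,
			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
	if (ret)
		goto err_unmap_first;

	/* OK: one unmap aggregating both mappings, no range is split. */
	iommu_unmap(domain, iova, 2 * SZ_2M);
	return 0;

	/* Not OK: iommu_unmap(domain, iova, SZ_1M) would split a mapping. */

err_unmap_first:
	iommu_unmap(domain, iova, SZ_2M);
	return ret;
}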
On Thu, Dec 05, 2024 at 03:21:03PM +0200, Leon Romanovsky wrote:
> +/**
> + * iommu_unmap_fast() - Remove mappings from a range of IOVA without IOTLB sync
> + * @domain: Domain to manipulate
> + * @iova: IO virtual address to start
> + * @size: Length of the range starting from @iova
> + * @iotlb_gather: range information for a pending IOTLB flush
> + *
> + * iommu_unmap_fast() will remove a translation created by iommu_map(). It cannot

Please avoid the overly long line here.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index ec75d14497bf..9eb7c7d7aa70 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2590,6 +2590,24 @@ size_t iommu_unmap(struct iommu_domain *domain,
 }
 EXPORT_SYMBOL_GPL(iommu_unmap);
 
+/**
+ * iommu_unmap_fast() - Remove mappings from a range of IOVA without IOTLB sync
+ * @domain: Domain to manipulate
+ * @iova: IO virtual address to start
+ * @size: Length of the range starting from @iova
+ * @iotlb_gather: range information for a pending IOTLB flush
+ *
+ * iommu_unmap_fast() will remove a translation created by iommu_map(). It cannot
+ * subdivide a mapping created by iommu_map(), so it should be called with IOVA
+ * ranges that match what was passed to iommu_map(). The range can aggregate
+ * contiguous iommu_map() calls so long as no individual range is split.
+ *
+ * Basicly iommu_unmap_fast() as the same as iommu_unmap() but for callers
+ * which manage IOTLB flush range externaly to perform batched sync.
+ *
+ * Returns: Number of bytes of IOVA unmapped. iova + res will be the point
+ * unmapping stopped.
+ */
 size_t iommu_unmap_fast(struct iommu_domain *domain,
 			unsigned long iova, size_t size,
 			struct iommu_iotlb_gather *iotlb_gather)
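For reference, a minimal sketch of the batched-sync pattern the new kernel-doc describes: iommu_unmap_fast() skips the per-call IOTLB flush and the caller issues a single iommu_iotlb_sync() for the whole batch. The example_range structure and helper name are made up for illustration; iommu_iotlb_gather_init(), iommu_unmap_fast() and iommu_iotlb_sync() are the existing helpers from include/linux/iommu.h.

#include <linux/iommu.h>

/* Hypothetical description of one previously mapped range. */
struct example_range {
	unsigned long iova;
	size_t size;	/* must match what was passed to iommu_map() */
};

static void example_unmap_batch(struct iommu_domain *domain,
				const struct example_range *ranges,
				unsigned int nr)
{
	struct iommu_iotlb_gather gather;
	unsigned int i;

	iommu_iotlb_gather_init(&gather);

	/* Accumulate the invalidation range while unmapping. */
	for (i = 0; i < nr; i++)
		iommu_unmap_fast(domain, ranges[i].iova, ranges[i].size,
				 &gather);

	/* One IOTLB flush for the whole batch instead of one per unmap. */
	iommu_iotlb_sync(domain, &gather);
}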