Message ID | 20190614004644.20767-1-jgg@ziepe.ca (mailing list archive)
---|---
State | Mainlined |
Commit | dd82e668892ead6fe97c97eabd7ba28e296052c6 |
Series | RDMA/odp: Do not leak dma maps when working with huge pages
On Thu, 2019-06-13 at 21:46 -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
>
> The ib_dma_unmap_page() must match the length of the
> ib_dma_map_page(), which is based on odp_shift. Otherwise iommu
> resources under this API will not be properly freed.
>
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Thanks, applied to for-next.
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 9001cc10770a24..bcfa5c904a4bbf 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -726,7 +726,8 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 		WARN_ON(!dma_addr);
 
-		ib_dma_unmap_page(dev, dma_addr, PAGE_SIZE,
+		ib_dma_unmap_page(dev, dma_addr,
+				  BIT(umem_odp->page_shift),
 				  DMA_BIDIRECTIONAL);
 		if (dma & ODP_WRITE_ALLOWED_BIT) {
 			struct page *head_page = compound_head(page);