From patchwork Tue Mar 5 11:18:36 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13582154
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel,
	Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch,
	Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian,
	Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche,
	Damien Le Moal, Amir Goldstein, josef@toxicpanda.com,
	"Martin K. Petersen", daniel@iogearbox.net, Dan Williams,
	jack@suse.com, Zhu Yanjun
Subject: [RFC RESEND 05/16] iommu/dma: Prepare map/unmap page functions to receive IOVA
Date: Tue, 5 Mar 2024 13:18:36 +0200
Message-ID: <13187a8682ab4f8708ca88cc4363f90e64e14ccc.1709635535.git.leon@kernel.org>
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0

From: Leon Romanovsky

Extend the existing map_page/unmap_page function implementations so they
can receive a preallocated IOVA. In that case the IOVA allocation is
skipped, while the rest of the code stays the same.
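The pattern is the same on both paths: a zero IOVA keeps today's behaviour
(the range is allocated and later freed internally), while a caller-supplied
IOVA is used as-is and is never released by the map or unmap code. Below is
a minimal, self-contained C sketch of that control flow; it is illustrative
only, not kernel code, and the helpers alloc_iova(), free_iova(), do_map()
and do_unmap() are invented names rather than the kernel API.

/*
 * optional_iova_sketch.c - illustrative only, not kernel code.
 *
 * Models the control flow this patch adds to __iommu_dma_map() and
 * __iommu_dma_unmap(): iova == 0 means "allocate (and later free) the
 * range internally", while a non-zero iova was preallocated by the
 * caller and is left untouched on both the unmap and the error paths.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

#define DMA_MAPPING_ERROR ((dma_addr_t)-1)

static dma_addr_t next_iova = 0x1000;	/* toy allocator state */

static dma_addr_t alloc_iova(size_t size)
{
	dma_addr_t iova = next_iova;

	next_iova += size;
	return iova;
}

static void free_iova(dma_addr_t iova, size_t size)
{
	printf("freed IOVA 0x%llx (%zu bytes)\n",
	       (unsigned long long)iova, size);
}

/* Map @size bytes at @iova, or at a freshly allocated range if @iova == 0. */
static dma_addr_t do_map(dma_addr_t iova, size_t size, bool fail_map)
{
	bool no_iova = !iova;

	if (no_iova)
		iova = alloc_iova(size);
	if (!iova)
		return DMA_MAPPING_ERROR;

	if (fail_map) {			/* stand-in for iommu_map() failing */
		if (no_iova)		/* only undo what we allocated here */
			free_iova(iova, size);
		return DMA_MAPPING_ERROR;
	}
	return iova;
}

static void do_unmap(dma_addr_t iova, size_t size, bool free_range)
{
	/* ... tear down the mapping ... */
	if (free_range)
		free_iova(iova, size);
}

int main(void)
{
	/* Existing behaviour: IOVA is allocated and freed internally. */
	dma_addr_t a = do_map(0, 0x2000, false);

	do_unmap(a, 0x2000, true);

	/* New behaviour: the caller preallocates and owns the IOVA. */
	dma_addr_t pre = alloc_iova(0x2000);
	dma_addr_t b = do_map(pre, 0x2000, false);

	do_unmap(b, 0x2000, false);	/* mapping gone, IOVA still reserved */
	free_iova(pre, 0x2000);		/* caller releases it when done */
	return 0;
}

Built as an ordinary userspace program, the sketch prints two "freed IOVA"
lines: one from do_unmap() for the internally allocated range, and one from
the caller's final free_iova() for the preallocated range, which is the
ownership split this patch prepares for.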
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 68 ++++++++++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 23 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e55726783501..dbdd373a609a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -824,7 +824,7 @@ static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 }
 
 static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
-		size_t size)
+		size_t size, bool free_iova)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
@@ -843,17 +843,19 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	if (!iotlb_gather.queued)
 		iommu_iotlb_sync(domain, &iotlb_gather);
 
-	__iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
+	if (free_iova)
+		__iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-		size_t size, int prot, u64 dma_mask)
+		dma_addr_t iova, size_t size, int prot,
+		u64 dma_mask)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	size_t iova_off = iova_offset(iovad, phys);
-	dma_addr_t iova;
+	bool no_iova = !iova;
 
 	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
 	    iommu_deferred_attach(dev, domain))
@@ -861,12 +863,14 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 
 	size = iova_align(iovad, size + iova_off);
 
-	iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+	if (no_iova)
+		iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) {
-		__iommu_dma_free_iova(cookie, iova, size, NULL);
+		if (no_iova)
+			__iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -1031,7 +1035,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	return vaddr;
 
 out_unmap:
-	__iommu_dma_unmap(dev, *dma_handle, size);
+	__iommu_dma_unmap(dev, *dma_handle, size, true);
 	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 	return NULL;
 }
@@ -1060,7 +1064,7 @@ static void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
 {
 	struct dma_sgt_handle *sh = sgt_handle(sgt);
 
-	__iommu_dma_unmap(dev, sgt->sgl->dma_address, size);
+	__iommu_dma_unmap(dev, sgt->sgl->dma_address, size, true);
 	__iommu_dma_free_pages(sh->pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 	sg_free_table(&sh->sgt);
 	kfree(sh);
@@ -1131,9 +1135,11 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
 }
 
-static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+static dma_addr_t __iommu_dma_map_pages(struct device *dev, struct page *page,
+					unsigned long offset, dma_addr_t iova,
+					size_t size,
+					enum dma_data_direction dir,
+					unsigned long attrs)
 {
 	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
@@ -1141,7 +1147,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
-	dma_addr_t iova, dma_mask = dma_get_mask(dev);
+	dma_addr_t addr, dma_mask = dma_get_mask(dev);
 
 	/*
 	 * If both the physical buffer start address and size are
@@ -1182,14 +1188,23 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 
-	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
+	addr = __iommu_dma_map(dev, phys, iova, size, prot, dma_mask);
+	if (addr == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
-	return iova;
+	return addr;
 }
 
-static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
+static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
+				     unsigned long offset, size_t size,
+				     enum dma_data_direction dir,
+				     unsigned long attrs)
+{
+	return __iommu_dma_map_pages(dev, page, offset, 0, size, dir, attrs);
+}
+
+static void __iommu_dma_unmap_pages(struct device *dev, dma_addr_t dma_handle,
+				    size_t size, enum dma_data_direction dir,
+				    unsigned long attrs, bool free_iova)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
@@ -1201,12 +1216,19 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	__iommu_dma_unmap(dev, dma_handle, size);
+	__iommu_dma_unmap(dev, dma_handle, size, free_iova);
 
 	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
+static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+				 size_t size, enum dma_data_direction dir,
+				 unsigned long attrs)
+{
+	__iommu_dma_unmap_pages(dev, dma_handle, size, dir, attrs, true);
+}
+
 /*
  * Prepare a successfully-mapped scatterlist to give back to the caller.
  *
@@ -1509,13 +1531,13 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	}
 
 	if (end)
-		__iommu_dma_unmap(dev, start, end - start);
+		__iommu_dma_unmap(dev, start, end - start, true);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	return __iommu_dma_map(dev, phys, size,
+	return __iommu_dma_map(dev, phys, 0, size,
 			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
 			dma_get_mask(dev));
 }
@@ -1523,7 +1545,7 @@ static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	__iommu_dma_unmap(dev, handle, size);
+	__iommu_dma_unmap(dev, handle, size, true);
 }
 
 static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr)
@@ -1560,7 +1582,7 @@ static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr)
 static void iommu_dma_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t handle, unsigned long attrs)
 {
-	__iommu_dma_unmap(dev, handle, size);
+	__iommu_dma_unmap(dev, handle, size, true);
 	__iommu_dma_free(dev, size, cpu_addr);
 }
 
@@ -1626,7 +1648,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	if (!cpu_addr)
 		return NULL;
 
-	*handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
+	*handle = __iommu_dma_map(dev, page_to_phys(page), 0, size, ioprot,
 			dev->coherent_dma_mask);
 	if (*handle == DMA_MAPPING_ERROR) {
 		__iommu_dma_free(dev, size, cpu_addr);