From patchwork Fri Jan 17 10:03:37 2025
X-Patchwork-Id: 13943138
From: Leon Romanovsky
To: Christoph Hellwig, Jason Gunthorpe, Robin Murphy
Cc: Jens Axboe, Joerg Roedel, Will Deacon, Sagi Grimberg,
    Keith Busch, Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas,
    Shameer Kolothum, Kevin Tian, Alex Williamson,
    Marek Szyprowski, Jérôme Glisse, Andrew Morton,
    Jonathan Corbet, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org, Randy Dunlap
Subject: [PATCH v6 06/17] iommu/dma: Factor out an iommu_dma_map_swiotlb helper
Date: Fri, 17 Jan 2025 12:03:37 +0200

From: Christoph Hellwig

Split the swiotlb bounce buffering logic out of iommu_dma_map_page
into a separate helper. This not only keeps the code neatly
separated, but will also allow for reuse in another caller.
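For orientation, the call pattern the helper enables looks roughly
like the sketch below. It mirrors the new call site in
iommu_dma_map_page() in the diff that follows; the wrapper function
itself is hypothetical and is not part of this patch:

	/* Hypothetical caller, for illustration only. */
	static phys_addr_t map_maybe_bounced(struct device *dev,
			phys_addr_t phys, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
	{
		struct iommu_domain *domain = iommu_get_dma_domain(dev);
		struct iova_domain *iovad = &domain->iova_cookie->iovad;

		/* Bounce only if start or size is not granule aligned. */
		if (dev_use_swiotlb(dev, size, dir) &&
		    iova_offset(iovad, phys | size))
			phys = iommu_dma_map_swiotlb(dev, phys, size,
					dir, attrs);

		return phys;	/* may be DMA_MAPPING_ERROR */
	}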
Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++-----------------
 1 file changed, 41 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 309d278b1d86..80cc2c51ac99 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1161,6 +1161,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
 }
 
+static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iova_domain *iovad = &domain->iova_cookie->iovad;
+
+	if (!is_swiotlb_active(dev)) {
+		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
+		return DMA_MAPPING_ERROR;
+	}
+
+	trace_swiotlb_bounced(dev, phys, size);
+
+	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
+			attrs);
+
+	/*
+	 * Untrusted devices should not see padding areas with random leftover
+	 * kernel data, so zero the pre- and post-padding.
+	 * swiotlb_tbl_map_single() has initialized the bounce buffer proper to
+	 * the contents of the original memory buffer.
+	 */
+	if (phys != DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
+		size_t start, virt = (size_t)phys_to_virt(phys);
+
+		/* Pre-padding */
+		start = iova_align_down(iovad, virt);
+		memset((void *)start, 0, virt - start);
+
+		/* Post-padding */
+		start = virt + size;
+		memset((void *)start, 0, iova_align(iovad, start) - start);
+	}
+
+	return phys;
+}
+
 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
@@ -1174,42 +1211,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
 
 	/*
-	 * If both the physical buffer start address and size are
-	 * page aligned, we don't need to use a bounce page.
+	 * If both the physical buffer start address and size are page aligned,
+	 * we don't need to use a bounce page.
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_offset(iovad, phys | size)) {
-		if (!is_swiotlb_active(dev)) {
-			dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
-			return DMA_MAPPING_ERROR;
-		}
-
-		trace_swiotlb_bounced(dev, phys, size);
-
-		phys = swiotlb_tbl_map_single(dev, phys, size,
-				iova_mask(iovad), dir, attrs);
-
+		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
 		if (phys == DMA_MAPPING_ERROR)
-			return DMA_MAPPING_ERROR;
-
-		/*
-		 * Untrusted devices should not see padding areas with random
-		 * leftover kernel data, so zero the pre- and post-padding.
-		 * swiotlb_tbl_map_single() has initialized the bounce buffer
-		 * proper to the contents of the original memory buffer.
-		 */
-		if (dev_is_untrusted(dev)) {
-			size_t start, virt = (size_t)phys_to_virt(phys);
-
-			/* Pre-padding */
-			start = iova_align_down(iovad, virt);
-			memset((void *)start, 0, virt - start);
-
-			/* Post-padding */
-			start = virt + size;
-			memset((void *)start, 0,
-					iova_align(iovad, start) - start);
-		}
+			return phys;
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
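To make the pre-/post-padding zeroing concrete, here is a standalone
userspace sketch of the same arithmetic. The 4 KiB granule and the
addresses are made-up example values, and the bit masking stands in
for iova_align_down()/iova_align():

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t granule = 4096;   /* assumed IOVA granule */
		uint64_t virt    = 0x1200; /* bounce buffer address (example) */
		uint64_t size    = 0x400;  /* mapping size (example) */

		/* iova_align_down(): round down to the granule boundary. */
		uint64_t pre = virt & ~(granule - 1);
		/* iova_align(): round up to the next granule boundary. */
		uint64_t end = virt + size;
		uint64_t post = (end + granule - 1) & ~(granule - 1);

		printf("zero pre-padding  [0x%llx, 0x%llx): %llu bytes\n",
		       (unsigned long long)pre, (unsigned long long)virt,
		       (unsigned long long)(virt - pre));
		printf("zero post-padding [0x%llx, 0x%llx): %llu bytes\n",
		       (unsigned long long)end, (unsigned long long)post,
		       (unsigned long long)(post - end));
		return 0;
	}

With these example values the pre-padding memset clears 0x200 bytes
and the post-padding memset clears 0xa00 bytes; together with the
0x400 data bytes that swiotlb_tbl_map_single() copies in, the entire
granule visible to the untrusted device is initialized.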