From patchwork Wed May 24 17:19:02 2023
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 13254352
From: Catalin Marinas
To: Linus Torvalds, Christoph Hellwig, Robin Murphy
Cc: Arnd Bergmann, Greg Kroah-Hartman, Will Deacon, Marc Zyngier,
    Andrew Morton, Herbert Xu, Ard Biesheuvel, Isaac Manjarres,
    Saravana Kannan, Alasdair Kergon, Daniel Vetter, Joerg Roedel,
    Mark Brown,
    Mike Snitzer, "Rafael J. Wysocki", linux-mm@kvack.org,
    iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned
Date: Wed, 24 May 2023 18:19:02 +0100
Message-Id: <20230524171904.3967031-14-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>
References: <20230524171904.3967031-1-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.39.2

Similarly to the direct DMA, bounce small allocations as they may have
originated from a kmalloc() cache not safe for DMA. Unlike the direct
DMA, iommu_dma_map_sg() cannot call iommu_dma_map_sg_swiotlb() for all
non-coherent devices, as this would break cases where the iova is
expected to be contiguous (dmabuf). Instead, scan the scatterlist for
any small sizes and only take the swiotlb path if any element of the
list needs bouncing (note that iommu_dma_map_page() would still only
bounce those buffers which are not DMA-aligned).

To avoid scanning the scatterlist on the 'sync' operations, introduce
an SG_DMA_USE_SWIOTLB flag set by iommu_dma_map_sg_swiotlb(). Both
dev_use_swiotlb() and the newly added dev_use_sg_swiotlb() now check
for untrusted devices as well as unaligned kmalloc() buffers
(suggested by Robin Murphy).
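
As an illustration (not part of the patch; "dev" and the buffer sizes
are made up for the example), a scatterlist containing one
sub-cacheline element would now be bounced in full on a non-coherent
device:

	struct scatterlist sgl[2];
	void *small = kmalloc(8, GFP_KERNEL);	/* may come from a cache
						 * smaller than
						 * ARCH_DMA_MINALIGN */
	void *large = kmalloc(PAGE_SIZE, GFP_KERNEL);
	int nents;

	sg_init_table(sgl, 2);
	sg_set_buf(&sgl[0], small, 8);
	sg_set_buf(&sgl[1], large, PAGE_SIZE);

	/*
	 * dev_use_sg_swiotlb() spots the 8-byte element, so
	 * iommu_dma_map_sg() takes the swiotlb path for the whole list
	 * and sg_dma_mark_use_swiotlb() tags it so the sync/unmap
	 * paths need not rescan the element lengths.
	 */
	nents = dma_map_sg(dev, sgl, 2, DMA_FROM_DEVICE);
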
Signed-off-by: Catalin Marinas
Cc: Joerg Roedel
Cc: Christoph Hellwig
Cc: Robin Murphy
Reviewed-by: Robin Murphy
---
 drivers/iommu/Kconfig       |  1 +
 drivers/iommu/dma-iommu.c   | 50 ++++++++++++++++++++++++++++++-------
 include/linux/scatterlist.h | 25 +++++++++++++++++--
 3 files changed, 65 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index db98c3f86e8c..670eff7a8e11 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -152,6 +152,7 @@ config IOMMU_DMA
 	select IOMMU_IOVA
 	select IRQ_MSI_IOMMU
 	select NEED_SG_DMA_LENGTH
+	select NEED_SG_DMA_FLAGS if SWIOTLB
 
 # Shared Virtual Addressing
 config IOMMU_SVA
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a9f0b0bddbd..24a8b8c2368c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -520,9 +520,38 @@ static bool dev_is_untrusted(struct device *dev)
 	return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
 }
 
-static bool dev_use_swiotlb(struct device *dev)
+static bool dev_use_swiotlb(struct device *dev, size_t size,
+			    enum dma_data_direction dir)
 {
-	return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev);
+	return IS_ENABLED(CONFIG_SWIOTLB) &&
+	       (dev_is_untrusted(dev) ||
+		dma_kmalloc_needs_bounce(dev, size, dir));
+}
+
+static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
+			       int nents, enum dma_data_direction dir)
+{
+	struct scatterlist *s;
+	int i;
+
+	if (!IS_ENABLED(CONFIG_SWIOTLB))
+		return false;
+
+	if (dev_is_untrusted(dev))
+		return true;
+
+	/*
+	 * If kmalloc() buffers are not DMA-safe for this device and
+	 * direction, check the individual lengths in the sg list. If any
+	 * element is deemed unsafe, use the swiotlb for bouncing.
+	 */
+	if (!dma_kmalloc_safe(dev, dir)) {
+		for_each_sg(sg, s, nents, i)
+			if (!dma_kmalloc_size_aligned(s->length))
+				return true;
+	}
+
+	return false;
 }
 
 /**
@@ -922,7 +951,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 {
 	phys_addr_t phys;
 
-	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
+	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
@@ -938,7 +967,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t phys;
 
-	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
+	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
@@ -956,7 +985,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 	struct scatterlist *sg;
 	int i;
 
-	if (dev_use_swiotlb(dev))
+	if (sg_is_dma_use_swiotlb(sgl))
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
 						      sg->length, dir);
@@ -972,7 +1001,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 	struct scatterlist *sg;
 	int i;
 
-	if (dev_use_swiotlb(dev))
+	if (sg_is_dma_use_swiotlb(sgl))
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_device(dev,
 							 sg_dma_address(sg),
@@ -998,7 +1027,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	/*
 	 * If both the physical buffer start address and size are
 	 * page aligned, we don't need to use a bounce page.
 	 */
-	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
+	if (dev_use_swiotlb(dev, size, dir) &&
+	    iova_offset(iovad, phys | size)) {
 		void *padding_start;
 		size_t padding_size, aligned_size;
@@ -1166,6 +1196,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	struct scatterlist *s;
 	int i;
 
+	sg_dma_mark_use_swiotlb(sg);
+
 	for_each_sg(sg, s, nents, i) {
 		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
 				s->offset, s->length, dir, attrs);
@@ -1210,7 +1242,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 			goto out;
 	}
 
-	if (dev_use_swiotlb(dev))
+	if (dev_use_sg_swiotlb(dev, sg, nents, dir))
 		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
 
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
@@ -1315,7 +1347,7 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	struct scatterlist *tmp;
 	int i;
 
-	if (dev_use_swiotlb(dev)) {
+	if (sg_is_dma_use_swiotlb(sg)) {
 		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
 		return;
 	}
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 87aaf8b5cdb4..330a157c5501 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -248,6 +248,29 @@ static inline void sg_unmark_end(struct scatterlist *sg)
 	sg->page_link &= ~SG_END;
 }
 
+#define SG_DMA_BUS_ADDRESS	(1 << 0)
+#define SG_DMA_USE_SWIOTLB	(1 << 1)
+
+#ifdef CONFIG_SWIOTLB
+static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg)
+{
+	return sg->dma_flags & SG_DMA_USE_SWIOTLB;
+}
+
+static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg)
+{
+	sg->dma_flags |= SG_DMA_USE_SWIOTLB;
+}
+#else
+static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg)
+{
+	return false;
+}
+static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg)
+{
+}
+#endif
+
 /*
  * CONFIG_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes
  * in struct scatterlist (assuming also CONFIG_NEED_SG_DMA_LENGTH is set).
@@ -256,8 +279,6 @@ static inline void sg_unmark_end(struct scatterlist *sg)
 #ifdef CONFIG_PCI_P2PDMA
 
-#define SG_DMA_BUS_ADDRESS	(1 << 0)
-
 /**
  * sg_dma_is_bus_address - Return whether a given segment was marked
  *			   as a bus address
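
A note on the CONFIG_SWIOTLB=n stubs above (illustrative snippet, not
part of the patch): since sg_is_dma_use_swiotlb() then returns a
constant false, tests like the one in iommu_dma_unmap_sg() can be
folded away by the compiler and the flag costs nothing:

	if (sg_is_dma_use_swiotlb(sg)) {	/* constant false, branch
						 * is dropped entirely */
		iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
		return;
	}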