From patchwork Wed Dec 13 17:17:54 2023
From: Robin Murphy
Subject: [PATCH v2 1/7] OF: Retire dma-ranges mask workaround
Date: Wed, 13 Dec 2023 17:17:54 +0000

The fixup adding 1 to the dma-ranges size may have been for the benefit
of some early AMD Seattle DTs, or may have merely been a just-in-case,
but either way anyone who might have deserved to get the message has
hopefully seen the warning in the 9 years we've had it there. The modern
dma_range_map mechanism should happily handle odd-sized ranges with no
ill effect, so there's little need to care anyway now. Clean it up.

Acked-by: Rob Herring
Signed-off-by: Robin Murphy
---
v2: Tweak commit message
---
 drivers/of/device.c | 16 ----------------
 1 file changed, 16 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1ca42ad9dd15..526a42cdf66e 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -129,22 +129,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 			dma_end = r->dma_start + r->size;
 		}
 		size = dma_end - dma_start;
-
-		/*
-		 * Add a work around to treat the size as mask + 1 in case
-		 * it is defined in DT as a mask.
-		 */
-		if (size & 1) {
-			dev_warn(dev, "Invalid size 0x%llx for dma-range(s)\n",
-				 size);
-			size = size + 1;
-		}
-
-		if (!size) {
-			dev_err(dev, "Adjusted size 0x%llx invalid\n", size);
-			kfree(map);
-			return -EINVAL;
-		}
 	}
 
 	/*
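For context, a minimal sketch (not part of the patch, values hypothetical)
of the DT mistake the removed workaround was compensating for: writing the
dma-ranges size as a mask rather than a size.

	/* A 4GB DMA window described two ways */
	u64 size_as_size = 0x100000000ULL; /* correct: a size */
	u64 size_as_mask = 0x0ffffffffULL; /* broken DT: a mask; odd, so the
					      old code warned and added 1 */
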
From patchwork Wed Dec 13 17:17:55 2023
From: Robin Murphy
Subject: [PATCH v2 2/7] OF: Simplify DMA range calculations
Date: Wed, 13 Dec 2023 17:17:55 +0000

Juggling start, end, and size values for a range is somewhat redundant
and a little hard to follow. Consolidate down to just using inclusive
start and end, which saves us worrying about size overflows for full
64-bit ranges (note that passing a potentially-overflowed value through
to arch_setup_dma_ops() is benign for all current implementations, and
this is working towards removing that anyway).

Acked-by: Rob Herring
Reviewed-by: Jason Gunthorpe
Signed-off-by: Robin Murphy
---
 drivers/of/device.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 526a42cdf66e..51062a831970 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -97,7 +97,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	const struct bus_dma_region *map = NULL;
 	struct device_node *bus_np;
 	u64 dma_start = 0;
-	u64 mask, end, size = 0;
+	u64 mask, end = 0;
 	bool coherent;
 	int ret;
 
@@ -118,17 +118,15 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 			return ret == -ENODEV ? 0 : ret;
 	} else {
 		const struct bus_dma_region *r = map;
-		u64 dma_end = 0;
 
 		/* Determine the overall bounds of all DMA regions */
 		for (dma_start = ~0; r->size; r++) {
 			/* Take lower and upper limits */
 			if (r->dma_start < dma_start)
 				dma_start = r->dma_start;
-			if (r->dma_start + r->size > dma_end)
-				dma_end = r->dma_start + r->size;
+			if (r->dma_start + r->size > end)
+				end = r->dma_start + r->size;
 		}
-		size = dma_end - dma_start;
 	}
 
 	/*
@@ -142,16 +140,15 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 		dev->dma_mask = &dev->coherent_dma_mask;
 	}
 
-	if (!size && dev->coherent_dma_mask)
-		size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
-	else if (!size)
-		size = 1ULL << 32;
+	if (!end && dev->coherent_dma_mask)
+		end = dev->coherent_dma_mask;
+	else if (!end)
+		end = (1ULL << 32) - 1;
 
 	/*
 	 * Limit coherent and dma mask based on size and default mask
 	 * set by the driver.
 	 */
-	end = dma_start + size - 1;
 	mask = DMA_BIT_MASK(ilog2(end) + 1);
 	dev->coherent_dma_mask &= mask;
 	*dev->dma_mask &= mask;
@@ -177,7 +174,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	dev_dbg(dev, "device is%sbehind an iommu\n",
 		iommu ? " " : " not ");
 
-	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
+	arch_setup_dma_ops(dev, dma_start, end - dma_start + 1, iommu, coherent);
 
 	if (!iommu)
 		of_dma_set_restricted_buffer(dev, np);
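A quick illustration of the overflow the commit message alludes to (a
sketch, not code from the series):

	u64 start = 0, end = ~0ULL;	/* inclusive bounds of a full 64-bit range */
	u64 size = end - start + 1;	/* 2^64 wraps to 0 in a u64 */

Tracking the inclusive end means that unrepresentable size is never formed
in the first place.
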
From patchwork Wed Dec 13 17:17:56 2023
From: Robin Murphy
Subject: [PATCH v2 3/7] ACPI/IORT: Handle memory address size limits as limits
Date: Wed, 13 Dec 2023 17:17:56 +0000

Return the Root Complex/Named Component memory address size limit as an
inclusive limit value, rather than an exclusive size. This saves having
to fudge an off-by-one for the 64-bit case, and simplifies our caller.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Robin Murphy
---
v2: Avoid undefined shifts (grr...)
---
 drivers/acpi/arm64/dma.c  |  9 +++------
 drivers/acpi/arm64/iort.c | 20 ++++++++++----------
 include/linux/acpi_iort.h |  4 ++--
 3 files changed, 15 insertions(+), 18 deletions(-)

diff --git a/drivers/acpi/arm64/dma.c b/drivers/acpi/arm64/dma.c
index 93d796531af3..b98a149f8d50 100644
--- a/drivers/acpi/arm64/dma.c
+++ b/drivers/acpi/arm64/dma.c
@@ -8,7 +8,6 @@ void acpi_arch_dma_setup(struct device *dev)
 {
 	int ret;
 	u64 end, mask;
-	u64 size = 0;
 	const struct bus_dma_region *map = NULL;
 
 	/*
@@ -23,9 +22,9 @@ void acpi_arch_dma_setup(struct device *dev)
 	}
 
 	if (dev->coherent_dma_mask)
-		size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
+		end = dev->coherent_dma_mask;
 	else
-		size = 1ULL << 32;
+		end = (1ULL << 32) - 1;
 
 	ret = acpi_dma_get_range(dev, &map);
 	if (!ret && map) {
@@ -36,18 +35,16 @@ void acpi_arch_dma_setup(struct device *dev)
 			end = r->dma_start + r->size - 1;
 		}
 
-		size = end + 1;
 		dev->dma_range_map = map;
 	}
 
 	if (ret == -ENODEV)
-		ret = iort_dma_get_ranges(dev, &size);
+		ret = iort_dma_get_ranges(dev, &end);
 	if (!ret) {
 		/*
 		 * Limit coherent and dma mask based on size retrieved from
 		 * firmware.
 		 */
-		end = size - 1;
 		mask = DMA_BIT_MASK(ilog2(end) + 1);
 		dev->bus_dma_limit = end;
 		dev->coherent_dma_mask = min(dev->coherent_dma_mask, mask);

diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index 6496ff5a6ba2..c0b1c2c19444 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -1367,7 +1367,7 @@ int iort_iommu_configure_id(struct device *dev, const u32 *input_id)
 { return -ENODEV; }
 #endif
 
-static int nc_dma_get_range(struct device *dev, u64 *size)
+static int nc_dma_get_range(struct device *dev, u64 *limit)
 {
 	struct acpi_iort_node *node;
 	struct acpi_iort_named_component *ncomp;
@@ -1384,13 +1384,13 @@ static int nc_dma_get_range(struct device *dev, u64 *size)
 		return -EINVAL;
 	}
 
-	*size = ncomp->memory_address_limit >= 64 ? U64_MAX :
-			1ULL<<ncomp->memory_address_limit;
+	*limit = ncomp->memory_address_limit >= 64 ? U64_MAX :
+			(1ULL << ncomp->memory_address_limit) - 1;
 
 	return 0;
 }
 
-static int rc_dma_get_range(struct device *dev, u64 *size)
+static int rc_dma_get_range(struct device *dev, u64 *limit)
 {
 	struct acpi_iort_node *node;
 	struct acpi_iort_root_complex *rc;
@@ -1408,8 +1408,8 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
 		return -EINVAL;
 	}
 
-	*size = rc->memory_address_limit >= 64 ? U64_MAX :
-			1ULL<<rc->memory_address_limit;
+	*limit = rc->memory_address_limit >= 64 ? U64_MAX :
+			(1ULL << rc->memory_address_limit) - 1;
 
 	return 0;
 }
@@ -1417,16 +1417,16 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
 /**
  * iort_dma_get_ranges() - Look up DMA addressing limit for the device
  * @dev: device to lookup
- * @size: DMA range size result pointer
+ * @limit: DMA limit result pointer
  *
  * Return: 0 on success, an error otherwise.
  */
-int iort_dma_get_ranges(struct device *dev, u64 *size)
+int iort_dma_get_ranges(struct device *dev, u64 *limit)
 {
 	if (dev_is_pci(dev))
-		return rc_dma_get_range(dev, size);
+		return rc_dma_get_range(dev, limit);
 	else
-		return nc_dma_get_range(dev, size);
+		return nc_dma_get_range(dev, limit);
 }
 
 static void __init acpi_iort_register_irq(int hwirq, const char *name,

diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h
index 1cb65592c95d..d4ed5622cf2b 100644
--- a/include/linux/acpi_iort.h
+++ b/include/linux/acpi_iort.h
@@ -39,7 +39,7 @@ void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode,
 void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode,
 		       struct list_head *head);
 /* IOMMU interface */
-int iort_dma_get_ranges(struct device *dev, u64 *size);
+int iort_dma_get_ranges(struct device *dev, u64 *limit);
 int iort_iommu_configure_id(struct device *dev, const u32 *id_in);
 void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head);
 phys_addr_t acpi_iort_dma_get_max_cpu_address(void);
@@ -55,7 +55,7 @@ void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head)
 static inline
 void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { }
 /* IOMMU interface */
-static inline int iort_dma_get_ranges(struct device *dev, u64 *size)
+static inline int iort_dma_get_ranges(struct device *dev, u64 *limit)
 { return -ENODEV; }
 static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in)
 { return -ENODEV; }
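The conversion itself is tiny; for reference, with `bits` standing in for
the IORT memory_address_limit field (a sketch of the expression above):

	u64 limit = bits >= 64 ? U64_MAX : (1ULL << bits) - 1;

e.g. bits == 32 yields 0xffffffff. The explicit bits >= 64 check matters
because shifting a 64-bit value by 64 is undefined behaviour in C - the
"undefined shifts" the changelog refers to.
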
From patchwork Wed Dec 13 17:17:57 2023
From: Robin Murphy
Subject: [PATCH v2 4/7] dma-mapping: Add helpers for dma_range_map bounds
Date: Wed, 13 Dec 2023 17:17:57 +0000

Several places want to compute the lower and/or upper bounds of a
dma_range_map, so let's factor that out into reusable helpers.

Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Robin Murphy
Acked-by: Rob Herring
---
v2: fix warning for 32-bit builds
---
 arch/loongarch/kernel/dma.c |  9 ++-------
 drivers/acpi/arm64/dma.c    |  8 +-------
 drivers/of/device.c         | 11 ++---------
 include/linux/dma-direct.h  | 18 ++++++++++++++++++
 4 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/loongarch/kernel/dma.c b/arch/loongarch/kernel/dma.c
index 7a9c6a9dd2d0..429555fb4e13 100644
--- a/arch/loongarch/kernel/dma.c
+++ b/arch/loongarch/kernel/dma.c
@@ -8,17 +8,12 @@
 void acpi_arch_dma_setup(struct device *dev)
 {
 	int ret;
-	u64 mask, end = 0;
+	u64 mask, end;
 	const struct bus_dma_region *map = NULL;
 
 	ret = acpi_dma_get_range(dev, &map);
 	if (!ret && map) {
-		const struct bus_dma_region *r = map;
-
-		for (end = 0; r->size; r++) {
-			if (r->dma_start + r->size - 1 > end)
-				end = r->dma_start + r->size - 1;
-		}
+		end = dma_range_map_max(map);
 
 		mask = DMA_BIT_MASK(ilog2(end) + 1);
 		dev->bus_dma_limit = end;

diff --git a/drivers/acpi/arm64/dma.c b/drivers/acpi/arm64/dma.c
index b98a149f8d50..52b2abf88689 100644
--- a/drivers/acpi/arm64/dma.c
+++ b/drivers/acpi/arm64/dma.c
@@ -28,13 +28,7 @@ void acpi_arch_dma_setup(struct device *dev)
 
 	ret = acpi_dma_get_range(dev, &map);
 	if (!ret && map) {
-		const struct bus_dma_region *r = map;
-
-		for (end = 0; r->size; r++) {
-			if (r->dma_start + r->size - 1 > end)
-				end = r->dma_start + r->size - 1;
-		}
-
+		end = dma_range_map_max(map);
 		dev->dma_range_map = map;
 	}
 
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 51062a831970..66879edb4a61 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -117,16 +117,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 		if (!force_dma)
 			return ret == -ENODEV ? 0 : ret;
 	} else {
-		const struct bus_dma_region *r = map;
-
 		/* Determine the overall bounds of all DMA regions */
-		for (dma_start = ~0; r->size; r++) {
-			/* Take lower and upper limits */
-			if (r->dma_start < dma_start)
-				dma_start = r->dma_start;
-			if (r->dma_start + r->size > end)
-				end = r->dma_start + r->size;
-		}
+		dma_start = dma_range_map_min(map);
+		end = dma_range_map_max(map);
 	}
 
 	/*
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 3eb3589ff43e..edbe13d00776 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -54,6 +54,24 @@ static inline phys_addr_t translate_dma_to_phys(struct device *dev,
 	return (phys_addr_t)-1;
 }
 
+static inline dma_addr_t dma_range_map_min(const struct bus_dma_region *map)
+{
+	dma_addr_t ret = (dma_addr_t)U64_MAX;
+
+	for (; map->size; map++)
+		ret = min(ret, map->dma_start);
+	return ret;
+}
+
+static inline dma_addr_t dma_range_map_max(const struct bus_dma_region *map)
+{
+	dma_addr_t ret = 0;
+
+	for (; map->size; map++)
+		ret = max(ret, map->dma_start + map->size - 1);
+	return ret;
+}
+
 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
 #include <asm/dma-direct.h>
 #ifndef phys_to_dma_unencrypted
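A hypothetical usage sketch of the new helpers (the two-entry map and its
values are made up; bus_dma_region arrays end with a zero-size sentinel):

	static const struct bus_dma_region map[] = {
		{ .dma_start = 0x00000000, .size = 0x40000000 },  /* 1GB @ 0   */
		{ .dma_start = 0x80000000, .size = 0x80000000 },  /* 2GB @ 2GB */
		{ /* sentinel */ }
	};

	/* dma_range_map_min(map) == 0x0
	   dma_range_map_max(map) == 0xffffffff (inclusive) */

Note that both bounds are inclusive, consistent with the earlier patches.
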
From patchwork Wed Dec 13 17:17:58 2023
From: Robin Murphy
Subject: [PATCH v2 5/7] iommu/dma: Make limit checks self-contained
Date: Wed, 13 Dec 2023 17:17:58 +0000

It's now easy to retrieve the device's DMA limits if we want to check
them against the domain aperture, so do that ourselves instead of
relying on them being passed through the callchain.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Robin Murphy
---
 drivers/iommu/dma-iommu.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5dc012220ca9..27a167f4cd3e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -659,19 +659,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
 /**
  * iommu_dma_init_domain - Initialise a DMA mapping domain
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
- * @base: IOVA at which the mappable address space starts
- * @limit: Last address of the IOVA space
  * @dev: Device the domain is being initialised for
  *
- * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
- * avoid rounding surprises. If necessary, we reserve the page at address 0
+ * If the geometry and dma_range_map include address 0, we reserve that page
  * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
  * any change which could make prior IOVAs invalid will fail.
  */
-static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
-				 dma_addr_t limit, struct device *dev)
+static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	const struct bus_dma_region *map = dev->dma_range_map;
 	unsigned long order, base_pfn;
 	struct iova_domain *iovad;
 	int ret;
@@ -683,18 +680,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 
 	/* Use the smallest supported page size for IOVA granularity */
 	order = __ffs(domain->pgsize_bitmap);
-	base_pfn = max_t(unsigned long, 1, base >> order);
+	base_pfn = 1;
 
 	/* Check the domain allows at least some access to the device... */
-	if (domain->geometry.force_aperture) {
+	if (map) {
+		dma_addr_t base = dma_range_map_min(map);
 		if (base > domain->geometry.aperture_end ||
-		    limit < domain->geometry.aperture_start) {
+		    dma_range_map_max(map) < domain->geometry.aperture_start) {
 			pr_warn("specified DMA range outside IOMMU capability\n");
 			return -EFAULT;
 		}
 		/* ...then finally give it a kicking to make sure it fits */
-		base_pfn = max_t(unsigned long, base_pfn,
-				 domain->geometry.aperture_start >> order);
+		base_pfn = max(base, domain->geometry.aperture_start) >> order;
 	}
 
 	/* start_pfn is always nonzero for an already-initialised domain */
@@ -1743,7 +1740,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
 	 * underlying IOMMU driver needs to support via the dma-iommu layer.
 	 */
 	if (iommu_is_dma_domain(domain)) {
-		if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
+		if (iommu_dma_init_domain(domain, dev))
 			goto out_err;
 		dev->dma_ops = &iommu_dma_ops;
 	}
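In effect the new check reduces to a plain interval-overlap test between
the device's DMA window and the domain aperture; roughly (paraphrased
from the hunk above, not additional code):

	/* No usable overlap => -EFAULT */
	if (dma_range_map_min(map) > domain->geometry.aperture_end ||
	    dma_range_map_max(map) < domain->geometry.aperture_start)
		return -EFAULT;

and the check is simply skipped when the device has no dma_range_map.
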
From patchwork Wed Dec 13 17:17:59 2023
From: Robin Murphy
Subject: [PATCH v2 6/7] iommu/dma: Centralise iommu_setup_dma_ops()
Date: Wed, 13 Dec 2023 17:17:59 +0000

It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
which means there should be no harm in achieving the same order of
operations by running it off the back of iommu_probe_device() itself.
This then puts it in line with the x86 and s390 .probe_finalize bodges,
letting us pull it all into the main flow properly. As a bonus this lets
us fold in and de-scope the PCI workaround setup as well.

At this point we can also then pull the call inside the group mutex, and
avoid having to think about whether iommu_group_store_type() could
theoretically race and free the domain where iommu_setup_dma_ops() may
currently run just *before* iommu_device_use_default_domain() claims it.

Signed-off-by: Robin Murphy
---
v2: Shuffle around to make sure the iommu_group_do_probe_finalize() cases
    are covered as well, with bonus side-effects as above.
---
 arch/arm64/mm/dma-mapping.c  |  2 --
 drivers/iommu/amd/iommu.c    |  8 --------
 drivers/iommu/dma-iommu.c    | 18 ++++++------------
 drivers/iommu/dma-iommu.h    | 14 ++++++--------
 drivers/iommu/intel/iommu.c  |  7 -------
 drivers/iommu/iommu.c        |  6 +++---
 drivers/iommu/s390-iommu.c   |  6 ------
 drivers/iommu/virtio-iommu.c | 10 ----------
 include/linux/iommu.h        |  7 -------
 9 files changed, 15 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 3cb101e8cb29..96ff791199e8 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -58,8 +58,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 			   ARCH_DMA_MINALIGN, cls);
 
 	dev->dma_coherent = coherent;
-	if (iommu)
-		iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);
 
 	xen_setup_dma_ops(dev);
 }

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index fcc987f5d4ed..9418808009ba 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1985,13 +1985,6 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
 	return iommu_dev;
 }
 
-static void amd_iommu_probe_finalize(struct device *dev)
-{
-	/* Domains are initialized for this device - have a look what we ended up with */
-	set_dma_ops(dev, NULL);
-	iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
 static void amd_iommu_release_device(struct device *dev)
 {
 	struct amd_iommu *iommu;
@@ -2646,7 +2639,6 @@ const struct iommu_ops amd_iommu_ops = {
 	.domain_alloc_user = amd_iommu_domain_alloc_user,
 	.probe_device = amd_iommu_probe_device,
 	.release_device = amd_iommu_release_device,
-	.probe_finalize = amd_iommu_probe_finalize,
 	.device_group = amd_iommu_device_group,
 	.get_resv_regions = amd_iommu_get_resv_regions,
 	.is_attach_deferred = amd_iommu_is_attach_deferred,

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 27a167f4cd3e..d808c8dcf5cb 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1724,25 +1724,20 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.opt_mapping_size	= iommu_dma_opt_mapping_size,
 };
 
-/*
- * The IOMMU core code allocates the default DMA domain, which the underlying
- * IOMMU driver needs to support via the dma-iommu layer.
- */
-void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
+void iommu_setup_dma_ops(struct device *dev)
 {
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 
-	if (!domain)
-		goto out_err;
+	if (dev_is_pci(dev))
+		dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;
 
-	/*
-	 * The IOMMU core code allocates the default DMA domain, which the
-	 * underlying IOMMU driver needs to support via the dma-iommu layer.
-	 */
 	if (iommu_is_dma_domain(domain)) {
 		if (iommu_dma_init_domain(domain, dev))
 			goto out_err;
 		dev->dma_ops = &iommu_dma_ops;
+	} else if (dev->dma_ops == &iommu_dma_ops) {
+		/* Clean up if we've switched *from* a DMA domain */
+		dev->dma_ops = NULL;
 	}
 
 	return;
@@ -1750,7 +1745,6 @@ void iommu_setup_dma_ops(struct device *dev)
 	pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
 		dev_name(dev));
 }
-EXPORT_SYMBOL_GPL(iommu_setup_dma_ops);
 
 static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 		phys_addr_t msi_addr, struct iommu_domain *domain)

diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h
index c829f1f82a99..c12d63457c76 100644
--- a/drivers/iommu/dma-iommu.h
+++ b/drivers/iommu/dma-iommu.h
@@ -9,6 +9,8 @@
 
 #ifdef CONFIG_IOMMU_DMA
 
+void iommu_setup_dma_ops(struct device *dev);
+
 int iommu_get_dma_cookie(struct iommu_domain *domain);
 void iommu_put_dma_cookie(struct iommu_domain *domain);
 
@@ -17,13 +19,13 @@ int iommu_dma_init_fq(struct iommu_domain *domain);
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
 
 extern bool iommu_dma_forcedac;
-static inline void iommu_dma_set_pci_32bit_workaround(struct device *dev)
-{
-	dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;
-}
 
 #else /* CONFIG_IOMMU_DMA */
 
+static inline void iommu_setup_dma_ops(struct device *dev)
+{
+}
+
 static inline int iommu_dma_init_fq(struct iommu_domain *domain)
 {
 	return -EINVAL;
@@ -42,9 +44,5 @@ static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
 {
 }
 
-static inline void iommu_dma_set_pci_32bit_workaround(struct device *dev)
-{
-}
-
 #endif /* CONFIG_IOMMU_DMA */
 #endif /* __DMA_IOMMU_H */

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 3531b956556c..f7347ed41e89 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4480,12 +4480,6 @@ static void intel_iommu_release_device(struct device *dev)
 	set_dma_ops(dev, NULL);
 }
 
-static void intel_iommu_probe_finalize(struct device *dev)
-{
-	set_dma_ops(dev, NULL);
-	iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
 static void intel_iommu_get_resv_regions(struct device *device,
 					 struct list_head *head)
 {
@@ -4937,7 +4931,6 @@ const struct iommu_ops intel_iommu_ops = {
 	.domain_alloc = intel_iommu_domain_alloc,
 	.domain_alloc_user = intel_iommu_domain_alloc_user,
 	.probe_device = intel_iommu_probe_device,
-	.probe_finalize = intel_iommu_probe_finalize,
 	.release_device = intel_iommu_release_device,
 	.get_resv_regions = intel_iommu_get_resv_regions,
 	.device_group = intel_iommu_device_group,

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 824989874dee..43f630d0530e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -560,10 +560,10 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
 		if (list_empty(&group->entry))
 			list_add_tail(&group->entry, group_list);
 	}
-	mutex_unlock(&group->mutex);
 
-	if (dev_is_pci(dev))
-		iommu_dma_set_pci_32bit_workaround(dev);
+	iommu_setup_dma_ops(dev);
+
+	mutex_unlock(&group->mutex);
 
 	return 0;

diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 9a5196f523de..d8eaa7ea380b 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -695,11 +695,6 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
 	return size;
 }
 
-static void s390_iommu_probe_finalize(struct device *dev)
-{
-	iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
 struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
 {
 	if (!zdev || !zdev->s390_domain)
@@ -785,7 +780,6 @@ static const struct iommu_ops s390_iommu_ops = {
 	.capable = s390_iommu_capable,
 	.domain_alloc_paging = s390_domain_alloc_paging,
 	.probe_device = s390_iommu_probe_device,
-	.probe_finalize = s390_iommu_probe_finalize,
 	.release_device = s390_iommu_release_device,
 	.device_group = generic_device_group,
 	.pgsize_bitmap = SZ_4K,

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 9bcffdde6175..5a558456946b 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -998,15 +998,6 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
 	return ERR_PTR(ret);
 }
 
-static void viommu_probe_finalize(struct device *dev)
-{
-#ifndef CONFIG_ARCH_HAS_SETUP_DMA_OPS
-	/* First clear the DMA ops in case we're switching from a DMA domain */
-	set_dma_ops(dev, NULL);
-	iommu_setup_dma_ops(dev, 0, U64_MAX);
-#endif
-}
-
 static void viommu_release_device(struct device *dev)
 {
 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
@@ -1043,7 +1034,6 @@ static struct iommu_ops viommu_ops = {
 	.capable = viommu_capable,
 	.domain_alloc = viommu_domain_alloc,
 	.probe_device = viommu_probe_device,
-	.probe_finalize = viommu_probe_finalize,
 	.release_device = viommu_release_device,
 	.device_group = viommu_device_group,
 	.get_resv_regions = viommu_get_resv_regions,

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 20ed3a4fc5e0..3a3bf8afb8ca 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1283,9 +1283,6 @@ static inline void iommu_debugfs_setup(void) {}
 #ifdef CONFIG_IOMMU_DMA
 #include <linux/msi.h>
 
-/* Setup call for arch DMA mapping code */
-void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit);
-
 int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);
 
 int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr);
@@ -1296,10 +1293,6 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg);
 struct msi_desc;
 struct msi_msg;
 
-static inline void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
-{
-}
-
 static inline int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
 {
 	return -ENODEV;
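The resulting probe flow, roughly (a sketch of the new ordering, not a
verbatim call graph):

	__iommu_probe_device(dev)
		ops->probe_device(dev)
		/* ...group and default-domain setup... */
		iommu_setup_dma_ops(dev)	/* now under group->mutex */
		mutex_unlock(&group->mutex)

so the DMA ops choice is made while the group mutex still protects the
domain, closing the theoretical race with iommu_group_store_type()
mentioned in the commit message.
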
Srinivasan" , Haiyang Zhang , Wei Liu , Dexuan Cui , Suravee Suthikulpanit , David Woodhouse , Lu Baolu , Niklas Schnelle , Matthew Rosato , Gerald Schaefer , Jean-Philippe Brucker , Rob Herring , Frank Rowand , Marek Szyprowski , Jason Gunthorpe , linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-acpi@vger.kernel.org, iommu@lists.linux.dev, devicetree@vger.kernel.org Subject: [PATCH v2 7/7] dma-mapping: Simplify arch_setup_dma_ops() Date: Wed, 13 Dec 2023 17:18:00 +0000 Message-Id: <7e6af148e0df9a10b51cd60234ec6b35f7d4cc20.1702486837.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.39.2.101.g768bb238c484.dirty In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-acpi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The dma_base, size and iommu arguments are only used by ARM, and can now easily be deduced from the device itself, so there's no need to pass them through the callchain as well. Reviewed-by: Christoph Hellwig Signed-off-by: Robin Murphy Acked-by: Rob Herring --- v2: Make sure the ARM changes actually build (oops...) --- arch/arc/mm/dma.c | 3 +-- arch/arm/mm/dma-mapping-nommu.c | 3 +-- arch/arm/mm/dma-mapping.c | 18 ++++++++++-------- arch/arm64/mm/dma-mapping.c | 3 +-- arch/mips/mm/dma-noncoherent.c | 3 +-- arch/riscv/mm/dma-noncoherent.c | 3 +-- drivers/acpi/scan.c | 3 +-- drivers/hv/hv_common.c | 6 +----- drivers/of/device.c | 4 +--- include/linux/dma-map-ops.h | 6 ++---- 10 files changed, 20 insertions(+), 32 deletions(-) diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c index 2a7fbbb83b70..6b85e94f3275 100644 --- a/arch/arc/mm/dma.c +++ b/arch/arc/mm/dma.c @@ -90,8 +90,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, /* * Plug in direct dma map ops. */ -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +void arch_setup_dma_ops(struct device *dev, bool coherent) { /* * IOC hardware snoops all DMA traffic keeping the caches consistent diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c index cfd9c933d2f0..97db5397c320 100644 --- a/arch/arm/mm/dma-mapping-nommu.c +++ b/arch/arm/mm/dma-mapping-nommu.c @@ -33,8 +33,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, } } -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +void arch_setup_dma_ops(struct device *dev, bool coherent) { if (IS_ENABLED(CONFIG_CPU_V7M)) { /* diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 5409225b4abc..5ac482d7ff94 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -1712,11 +1712,15 @@ void arm_iommu_detach_device(struct device *dev) } EXPORT_SYMBOL_GPL(arm_iommu_detach_device); -static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +static void arm_setup_iommu_dma_ops(struct device *dev) { struct dma_iommu_mapping *mapping; + u64 dma_base = 0, size = 1ULL << 32; + if (dev->dma_range_map) { + dma_base = dma_range_map_min(dev->dma_range_map); + size = dma_range_map_max(dev->dma_range_map) - dma_base; + } mapping = arm_iommu_create_mapping(dev->bus, dma_base, size); if (IS_ERR(mapping)) { pr_warn("Failed to create %llu-byte IOMMU mapping for device %s\n", @@ -1747,8 +1751,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev) #else -static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct 
iommu_ops *iommu, bool coherent) +static void arm_setup_iommu_dma_ops(struct device *dev) { } @@ -1756,8 +1759,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev) { } #endif /* CONFIG_ARM_DMA_USE_IOMMU */ -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +void arch_setup_dma_ops(struct device *dev, bool coherent) { /* * Due to legacy code that sets the ->dma_coherent flag from a bus @@ -1776,8 +1778,8 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, if (dev->dma_ops) return; - if (iommu) - arm_setup_iommu_dma_ops(dev, dma_base, size, iommu, coherent); + if (device_iommu_mapped(dev)) + arm_setup_iommu_dma_ops(dev); xen_setup_dma_ops(dev); dev->archdata.dma_ops_setup = true; diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c index 96ff791199e8..0b320a25a471 100644 --- a/arch/arm64/mm/dma-mapping.c +++ b/arch/arm64/mm/dma-mapping.c @@ -46,8 +46,7 @@ void arch_teardown_dma_ops(struct device *dev) } #endif -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +void arch_setup_dma_ops(struct device *dev, bool coherent) { int cls = cache_line_size_of_cpu(); diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c index 3c4fc97b9f39..ab4f2a75a7d0 100644 --- a/arch/mips/mm/dma-noncoherent.c +++ b/arch/mips/mm/dma-noncoherent.c @@ -137,8 +137,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, #endif #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +void arch_setup_dma_ops(struct device *dev, bool coherent) { dev->dma_coherent = coherent; } diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c index 4e4e469b8dd6..cb89d7e0ba88 100644 --- a/arch/riscv/mm/dma-noncoherent.c +++ b/arch/riscv/mm/dma-noncoherent.c @@ -128,8 +128,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size) ALT_CMO_OP(FLUSH, flush_addr, size, riscv_cbom_block_size); } -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent) +void arch_setup_dma_ops(struct device *dev, bool coherent) { WARN_TAINT(!coherent && riscv_cbom_block_size > ARCH_DMA_MINALIGN, TAINT_CPU_OUT_OF_SPEC, diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index ee88a727f200..cad171fc31e8 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -1640,8 +1640,7 @@ int acpi_dma_configure_id(struct device *dev, enum dev_dma_attr attr, if (PTR_ERR(iommu) == -EPROBE_DEFER) return -EPROBE_DEFER; - arch_setup_dma_ops(dev, 0, U64_MAX, - iommu, attr == DEV_DMA_COHERENT); + arch_setup_dma_ops(dev, attr == DEV_DMA_COHERENT); return 0; } diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 4372f5d146ab..0e2decd1167a 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -484,11 +484,7 @@ EXPORT_SYMBOL_GPL(hv_query_ext_cap); void hv_setup_dma_ops(struct device *dev, bool coherent) { - /* - * Hyper-V does not offer a vIOMMU in the guest - * VM, so pass 0/NULL for the IOMMU settings - */ - arch_setup_dma_ops(dev, 0, 0, NULL, coherent); + arch_setup_dma_ops(dev, coherent); } EXPORT_SYMBOL_GPL(hv_setup_dma_ops); diff --git a/drivers/of/device.c b/drivers/of/device.c index 66879edb4a61..3394751015d3 100644 --- a/drivers/of/device.c +++ b/drivers/of/device.c @@ -96,7 +96,6 @@ int of_dma_configure_id(struct device *dev, struct device_node 
*np, const struct iommu_ops *iommu; const struct bus_dma_region *map = NULL; struct device_node *bus_np; - u64 dma_start = 0; u64 mask, end = 0; bool coherent; int ret; @@ -118,7 +117,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np, return ret == -ENODEV ? 0 : ret; } else { /* Determine the overall bounds of all DMA regions */ - dma_start = dma_range_map_min(map); end = dma_range_map_max(map); } @@ -167,7 +165,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np, dev_dbg(dev, "device is%sbehind an iommu\n", iommu ? " " : " not "); - arch_setup_dma_ops(dev, dma_start, end - dma_start + 1, iommu, coherent); + arch_setup_dma_ops(dev, coherent); if (!iommu) of_dma_set_restricted_buffer(dev, np); diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index a52e508d1869..023f265eae2e 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -426,11 +426,9 @@ bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg, #endif #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, - const struct iommu_ops *iommu, bool coherent); +void arch_setup_dma_ops(struct device *dev, bool coherent); #else -static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base, - u64 size, const struct iommu_ops *iommu, bool coherent) +static inline void arch_setup_dma_ops(struct device *dev, bool coherent) { } #endif /* CONFIG_ARCH_HAS_SETUP_DMA_OPS */
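After the series, an architecture with no extra requirements needs nothing
more than the coherency flag (mirroring the MIPS version in the diff above):

	void arch_setup_dma_ops(struct device *dev, bool coherent)
	{
		dev->dma_coherent = coherent;
	}

while anything that does still need addressing limits, such as the 32-bit
ARM IOMMU path, now derives them from dev->dma_range_map itself.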