From patchwork Wed May 24 17:18:50 2023
From: Catalin Marinas
To: Linus Torvalds, Christoph Hellwig, Robin Murphy
Cc: Arnd Bergmann, Greg Kroah-Hartman, Will Deacon, Marc Zyngier,
    Andrew Morton, Herbert Xu, Ard Biesheuvel, Isaac Manjarres,
    Saravana Kannan, Alasdair Kergon, Daniel Vetter, Joerg Roedel,
    Mark Brown, Mike Snitzer, Rafael J. Wysocki,
    linux-mm@kvack.org, iommu@lists.linux.dev,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 01/15] mm/slab: Decouple ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN
Date: Wed, 24 May 2023 18:18:50 +0100
Message-Id: <20230524171904.3967031-2-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

In preparation for supporting a kmalloc() minimum alignment smaller than
the arch DMA alignment, decouple the two definitions. This requires that
either the kmalloc() caches are aligned to a (run-time) cache-line size
or the DMA API bounces unaligned kmalloc() allocations. Subsequent
patches will implement both options.

After this patch, ARCH_DMA_MINALIGN is expected to be used in static
alignment annotations and defined by an architecture to be the maximum
alignment for all supported configurations/SoCs in a single Image.
Architectures opting in to a smaller ARCH_KMALLOC_MINALIGN will need to
define its value in the arch headers.

Since ARCH_DMA_MINALIGN is now always defined, adjust the #ifdef in
dma_get_cache_alignment() so that there is no change for architectures
not requiring a minimum DMA alignment.

Signed-off-by: Catalin Marinas
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Robin Murphy
---
 include/linux/dma-mapping.h |  2 +-
 include/linux/slab.h        | 14 +++++++++++---
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 0ee20b764000..3288a1339271 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -545,7 +545,7 @@ static inline int dma_set_min_align_mask(struct device *dev,
 
 static inline int dma_get_cache_alignment(void)
 {
-#ifdef ARCH_DMA_MINALIGN
+#ifdef ARCH_HAS_DMA_MINALIGN
 	return ARCH_DMA_MINALIGN;
 #endif
 	return 1;
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6b3e155b70bf..50dcf9cfbf62 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -235,12 +235,20 @@ void kmem_dump_obj(void *object);
  * alignment larger than the alignment of a 64-bit integer.
  * Setting ARCH_DMA_MINALIGN in arch headers allows that.
  */
-#if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
+#ifdef ARCH_DMA_MINALIGN
+#define ARCH_HAS_DMA_MINALIGN
+#if ARCH_DMA_MINALIGN > 8 && !defined(ARCH_KMALLOC_MINALIGN)
 #define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
-#define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
-#define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
+#endif
 #else
+#define ARCH_DMA_MINALIGN __alignof__(unsigned long long)
+#endif
+
+#ifndef ARCH_KMALLOC_MINALIGN
 #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
+#elif ARCH_KMALLOC_MINALIGN > 8
+#define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN
+#define KMALLOC_SHIFT_LOW ilog2(KMALLOC_MIN_SIZE)
 #endif
 
 /*
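The net effect of the new cascade is easiest to see with concrete values. The
following standalone sketch is not the kernel header itself; it replays a
simplified version of the slab.h logic above with assumed arch definitions (an
arm64-like opt-in, as added later in this series) and builds with any C11
compiler:

/*
 * Simplified replay of the include/linux/slab.h cascade above. The first two
 * defines are assumptions standing in for an arch header such as arm64's
 * asm/cache.h once it opts in to the smaller kmalloc() minimum alignment.
 */
#define ARCH_DMA_MINALIGN	128	/* worst-case cache line across SoCs */
#define ARCH_KMALLOC_MINALIGN	8	/* arch opt-in */

#ifdef ARCH_DMA_MINALIGN
#define ARCH_HAS_DMA_MINALIGN
#if ARCH_DMA_MINALIGN > 8 && !defined(ARCH_KMALLOC_MINALIGN)
#define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
#endif
#else
#define ARCH_DMA_MINALIGN __alignof__(unsigned long long)
#endif

#ifndef ARCH_KMALLOC_MINALIGN
#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
#endif

/* Static DMA annotations keep the large value ... */
_Static_assert(ARCH_DMA_MINALIGN == 128, "static DMA alignment unchanged");
/* ... while the kmalloc() guarantee is now independent of it. */
_Static_assert(ARCH_KMALLOC_MINALIGN == 8, "kmalloc() minimum decoupled");

Without the arch opt-in (drop the second define), the same cascade falls back
to ARCH_KMALLOC_MINALIGN == ARCH_DMA_MINALIGN, preserving today's behaviour.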
From patchwork Wed May 24 17:18:51 2023
From: Catalin Marinas
Subject: [PATCH v5 02/15] dma: Allow dma_get_cache_alignment() to be overridden by the arch code
Date: Wed, 24 May 2023 18:18:51 +0100
Message-Id: <20230524171904.3967031-3-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

On arm64, ARCH_DMA_MINALIGN is larger than most cache line size
configurations deployed. Allow an architecture to override
dma_get_cache_alignment() in order to return a run-time probed value
(e.g. cache_line_size()).

Signed-off-by: Catalin Marinas
Cc: Christoph Hellwig
Cc: Robin Murphy
Cc: Will Deacon
Reviewed-by: Christoph Hellwig
---
 include/linux/dma-mapping.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 3288a1339271..c41019289223 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -543,6 +543,7 @@ static inline int dma_set_min_align_mask(struct device *dev,
 	return 0;
 }
 
+#ifndef dma_get_cache_alignment
 static inline int dma_get_cache_alignment(void)
 {
 #ifdef ARCH_HAS_DMA_MINALIGN
@@ -550,6 +551,7 @@ static inline int dma_get_cache_alignment(void)
 #endif
 	return 1;
 }
+#endif
 
 static inline void *dmam_alloc_coherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp)
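Callers of dma_get_cache_alignment() do not need to change; the value they get
back may simply become a run-time one. A minimal sketch of a hypothetical
driver helper (not part of this series) that pads a streaming DMA buffer to
whatever alignment the platform reports:

#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/slab.h>

/*
 * Hypothetical helper: round the length of a kmalloc()'ed streaming DMA
 * buffer up to the reported cache alignment. With the override in place,
 * an arch may return a probed cache_line_size() here instead of a
 * compile-time ARCH_DMA_MINALIGN.
 */
static void *example_alloc_dma_buf(size_t len, gfp_t gfp)
{
	size_t align = dma_get_cache_alignment();

	return kmalloc(ALIGN(len, align), gfp);
}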
From patchwork Wed May 24 17:18:52 2023
From: Catalin Marinas
Subject: [PATCH v5 03/15] mm/slab: Simplify create_kmalloc_cache() args and make it static
Date: Wed, 24 May 2023 18:18:52 +0100
Message-Id: <20230524171904.3967031-4-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

In the slab variant of kmem_cache_init(), call new_kmalloc_cache()
instead of initialising the kmalloc_caches array directly. With this,
create_kmalloc_cache() is now only called from new_kmalloc_cache() in
the same file, so make it static. In addition, the useroffset argument
is always 0 while usersize is the same as size. Remove them.

Signed-off-by: Catalin Marinas
Cc: Andrew Morton
---
 mm/slab.c        |  6 +-----
 mm/slab.h        |  5 ++---
 mm/slab_common.c | 14 ++++++--------
 3 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index bb57f7fdbae1..b7817dcba63e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1240,11 +1240,7 @@ void __init kmem_cache_init(void)
 	 * Initialize the caches that provide memory for the kmem_cache_node
 	 * structures first.  Without this, further allocations will bug.
	 */
-	kmalloc_caches[KMALLOC_NORMAL][INDEX_NODE] = create_kmalloc_cache(
-				kmalloc_info[INDEX_NODE].name[KMALLOC_NORMAL],
-				kmalloc_info[INDEX_NODE].size,
-				ARCH_KMALLOC_FLAGS, 0,
-				kmalloc_info[INDEX_NODE].size);
+	new_kmalloc_cache(INDEX_NODE, KMALLOC_NORMAL, ARCH_KMALLOC_FLAGS);
 
 	slab_state = PARTIAL_NODE;
 	setup_kmalloc_cache_index_table();
diff --git a/mm/slab.h b/mm/slab.h
index f01ac256a8f5..592590fcddae 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -255,9 +255,8 @@ gfp_t kmalloc_fix_flags(gfp_t flags);
 /* Functions provided by the slab allocators */
 int __kmem_cache_create(struct kmem_cache *, slab_flags_t flags);
 
-struct kmem_cache *create_kmalloc_cache(const char *name, unsigned int size,
-			slab_flags_t flags, unsigned int useroffset,
-			unsigned int usersize);
+void __init new_kmalloc_cache(int idx, enum kmalloc_cache_type type,
+			      slab_flags_t flags);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
 			unsigned int size, slab_flags_t flags,
 			unsigned int useroffset, unsigned int usersize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 607249785c07..7f069159aee2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -658,17 +658,16 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
 	s->refcount = -1;	/* Exempt from merging for now */
 }
 
-struct kmem_cache *__init create_kmalloc_cache(const char *name,
-		unsigned int size, slab_flags_t flags,
-		unsigned int useroffset, unsigned int usersize)
+static struct kmem_cache *__init create_kmalloc_cache(const char *name,
+						      unsigned int size,
+						      slab_flags_t flags)
 {
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
 
 	if (!s)
 		panic("Out of memory when creating slab %s\n", name);
 
-	create_boot_cache(s, name, size, flags | SLAB_KMALLOC, useroffset,
-			  usersize);
+	create_boot_cache(s, name, size, flags | SLAB_KMALLOC, 0, size);
 	list_add(&s->list, &slab_caches);
 	s->refcount = 1;
 	return s;
@@ -863,7 +862,7 @@ void __init setup_kmalloc_cache_index_table(void)
 	}
 }
 
-static void __init
+void __init
 new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
 	if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
@@ -880,8 +879,7 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 
 	kmalloc_caches[type][idx] = create_kmalloc_cache(
 					kmalloc_info[idx].name[type],
-					kmalloc_info[idx].size, flags, 0,
-					kmalloc_info[idx].size);
+					kmalloc_info[idx].size, flags);
 
 	/*
 	 * If CONFIG_MEMCG_KMEM is enabled, disable cache merging for
From patchwork Wed May 24 17:18:53 2023
From: Catalin Marinas
Subject: [PATCH v5 04/15] mm/slab: Limit kmalloc() minimum alignment to dma_get_cache_alignment()
Date: Wed, 24 May 2023 18:18:53 +0100
Message-Id: <20230524171904.3967031-5-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

Do not create kmalloc() caches which are not aligned to
dma_get_cache_alignment(). There is no functional change since for
current architectures defining ARCH_DMA_MINALIGN, ARCH_KMALLOC_MINALIGN
equals ARCH_DMA_MINALIGN (and dma_get_cache_alignment()). On
architectures without a specific ARCH_DMA_MINALIGN,
dma_get_cache_alignment() is 1, so no change to the kmalloc() caches.
Signed-off-by: Catalin Marinas
Cc: Andrew Morton
Cc: Christoph Hellwig
Cc: Robin Murphy
---
 mm/slab_common.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7f069159aee2..7c6475847fdf 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -862,9 +863,18 @@ void __init setup_kmalloc_cache_index_table(void)
 	}
 }
 
+static unsigned int __kmalloc_minalign(void)
+{
+	return dma_get_cache_alignment();
+}
+
 void __init
 new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
+	unsigned int minalign = __kmalloc_minalign();
+	unsigned int aligned_size = kmalloc_info[idx].size;
+	int aligned_idx = idx;
+
 	if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
 		flags |= SLAB_RECLAIM_ACCOUNT;
 	} else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
@@ -877,9 +887,17 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 		flags |= SLAB_CACHE_DMA;
 	}
 
-	kmalloc_caches[type][idx] = create_kmalloc_cache(
-					kmalloc_info[idx].name[type],
-					kmalloc_info[idx].size, flags);
+	if (minalign > ARCH_KMALLOC_MINALIGN) {
+		aligned_size = ALIGN(aligned_size, minalign);
+		aligned_idx = __kmalloc_index(aligned_size, false);
+	}
+
+	if (!kmalloc_caches[type][aligned_idx])
+		kmalloc_caches[type][aligned_idx] = create_kmalloc_cache(
+					kmalloc_info[aligned_idx].name[type],
+					aligned_size, flags);
+	if (idx != aligned_idx)
+		kmalloc_caches[type][idx] = kmalloc_caches[type][aligned_idx];
 
 	/*
 	 * If CONFIG_MEMCG_KMEM is enabled, disable cache merging for
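The rounding in new_kmalloc_cache() above is easiest to follow with numbers. A
small userspace sketch (plain C, not kernel code), assuming
dma_get_cache_alignment() reports 64 while ARCH_KMALLOC_MINALIGN is 8 as on
arm64 later in this series, shows which kmalloc caches become aliases:

/* Userspace sketch of the kmalloc cache aliasing introduced above; the
 * 64-byte minalign is an assumption standing in for dma_get_cache_alignment(). */
#include <stdio.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	const unsigned int sizes[] = { 8, 16, 32, 64, 96, 128, 192, 256 };
	const unsigned int minalign = 64;	/* assumed cache line size */

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int aligned = ALIGN(sizes[i], minalign);

		if (aligned == sizes[i])
			printf("kmalloc-%-3u keeps its own cache\n", sizes[i]);
		else
			printf("kmalloc-%-3u aliases kmalloc-%u\n",
			       sizes[i], aligned);
	}
	return 0;
}

With these assumptions, kmalloc-{8,16,32} share the kmalloc-64 slab and
kmalloc-96 shares kmalloc-128, while the 64-, 128- and 192-byte caches keep
their own slabs.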
From patchwork Wed May 24 17:18:54 2023
From: Catalin Marinas
Subject: [PATCH v5 05/15] drivers/base: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
Date: Wed, 24 May 2023 18:18:54 +0100
Message-Id: <20230524171904.3967031-6-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

ARCH_DMA_MINALIGN represents the minimum (static) alignment for safe DMA
operations while ARCH_KMALLOC_MINALIGN is the minimum kmalloc() objects
alignment.

Signed-off-by: Catalin Marinas
Acked-by: Greg Kroah-Hartman
Cc: "Rafael J. Wysocki"
---
 drivers/base/devres.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/base/devres.c b/drivers/base/devres.c
index 5c998cfac335..3df0025d12aa 100644
--- a/drivers/base/devres.c
+++ b/drivers/base/devres.c
@@ -29,10 +29,10 @@ struct devres {
 	 * Some archs want to perform DMA into kmalloc caches
 	 * and need a guaranteed alignment larger than
 	 * the alignment of a 64-bit integer.
-	 * Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same
-	 * buffer alignment as if it was allocated by plain kmalloc().
+	 * Thus we use ARCH_DMA_MINALIGN for data[] which will force the same
+	 * alignment for struct devres when allocated by kmalloc().
	 */
-	u8 __aligned(ARCH_KMALLOC_MINALIGN) data[];
+	u8 __aligned(ARCH_DMA_MINALIGN) data[];
 };
 
 struct devres_group {
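The same substitution is applied to several subsystems in the following
patches. The idiom being standardised is sketched below with a made-up
structure; the point is that the trailing DMA-capable buffer keeps the static
ARCH_DMA_MINALIGN annotation regardless of the (possibly smaller) kmalloc()
minimum:

#include <linux/cache.h>
#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical managed-resource structure mirroring struct devres/drmres. */
struct example_res {
	struct list_head	entry;
	/*
	 * data[] may be handed to a non-coherent device; annotating it with
	 * the static ARCH_DMA_MINALIGN keeps its offset and padding
	 * independent of what kmalloc() happens to guarantee.
	 */
	u8 __aligned(ARCH_DMA_MINALIGN) data[];
};

static void *example_res_alloc(size_t payload, gfp_t gfp)
{
	struct example_res *res;

	/* struct_size() keeps the flexible-array sizing overflow-safe. */
	res = kzalloc(struct_size(res, data, payload), gfp);

	return res ? res->data : NULL;
}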
From patchwork Wed May 24 17:18:55 2023
From: Catalin Marinas
Subject: [PATCH v5 06/15] drivers/gpu: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
Date: Wed, 24 May 2023 18:18:55 +0100
Message-Id: <20230524171904.3967031-7-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

ARCH_DMA_MINALIGN represents the minimum (static) alignment for safe DMA
operations while ARCH_KMALLOC_MINALIGN is the minimum kmalloc() objects
alignment.

Signed-off-by: Catalin Marinas
Cc: Daniel Vetter
---
 drivers/gpu/drm/drm_managed.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_managed.c b/drivers/gpu/drm/drm_managed.c
index 4cf214de50c4..3a5802f60e65 100644
--- a/drivers/gpu/drm/drm_managed.c
+++ b/drivers/gpu/drm/drm_managed.c
@@ -49,10 +49,10 @@ struct drmres {
 	 * Some archs want to perform DMA into kmalloc caches
 	 * and need a guaranteed alignment larger than
 	 * the alignment of a 64-bit integer.
-	 * Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same
-	 * buffer alignment as if it was allocated by plain kmalloc().
+	 * Thus we use ARCH_DMA_MINALIGN for data[] which will force the same
+	 * alignment for struct drmres when allocated by kmalloc().
	 */
-	u8 __aligned(ARCH_KMALLOC_MINALIGN) data[];
+	u8 __aligned(ARCH_DMA_MINALIGN) data[];
 };
 
 static void free_dr(struct drmres *dr)
From patchwork Wed May 24 17:18:56 2023
From: Catalin Marinas
Subject: [PATCH v5 07/15] drivers/usb: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
Date: Wed, 24 May 2023 18:18:56 +0100
Message-Id: <20230524171904.3967031-8-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

ARCH_DMA_MINALIGN represents the minimum (static) alignment for safe DMA
operations while ARCH_KMALLOC_MINALIGN is the minimum kmalloc() objects
alignment.

Signed-off-by: Catalin Marinas
Acked-by: Greg Kroah-Hartman
---
 drivers/usb/core/buffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/usb/core/buffer.c b/drivers/usb/core/buffer.c
index fbb087b728dc..e21d8d106977 100644
--- a/drivers/usb/core/buffer.c
+++ b/drivers/usb/core/buffer.c
@@ -34,13 +34,13 @@ void __init usb_init_pool_max(void)
 {
 	/*
 	 * The pool_max values must never be smaller than
-	 * ARCH_KMALLOC_MINALIGN.
+	 * ARCH_DMA_MINALIGN.
 	 */
-	if (ARCH_KMALLOC_MINALIGN <= 32)
+	if (ARCH_DMA_MINALIGN <= 32)
 		;			/* Original value is okay */
-	else if (ARCH_KMALLOC_MINALIGN <= 64)
+	else if (ARCH_DMA_MINALIGN <= 64)
 		pool_max[0] = 64;
-	else if (ARCH_KMALLOC_MINALIGN <= 128)
+	else if (ARCH_DMA_MINALIGN <= 128)
 		pool_max[0] = 0;	/* Don't use this pool */
 	else
 		BUILD_BUG();		/* We don't allow this */
From patchwork Wed May 24 17:18:57 2023
From: Catalin Marinas
Subject: [PATCH v5 08/15] drivers/spi: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
Date: Wed, 24 May 2023 18:18:57 +0100
Message-Id: <20230524171904.3967031-9-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

ARCH_DMA_MINALIGN represents the minimum (static) alignment for safe DMA
operations while ARCH_KMALLOC_MINALIGN is the minimum kmalloc() objects
alignment.

Signed-off-by: Catalin Marinas
Acked-by: Mark Brown
---
 drivers/spi/spidev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
index 39d94c850839..8d009275a59d 100644
--- a/drivers/spi/spidev.c
+++ b/drivers/spi/spidev.c
@@ -237,7 +237,7 @@ static int spidev_message(struct spidev_data *spidev,
 		/* Ensure that also following allocations from rx_buf/tx_buf will meet
 		 * DMA alignment requirements.
		 */
-		unsigned int len_aligned = ALIGN(u_tmp->len, ARCH_KMALLOC_MINALIGN);
+		unsigned int len_aligned = ALIGN(u_tmp->len, ARCH_DMA_MINALIGN);
 
 		k_tmp->len = u_tmp->len;
From patchwork Wed May 24 17:18:58 2023
From: Catalin Marinas
Subject: [PATCH v5 09/15] drivers/md: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
Date: Wed, 24 May 2023 18:18:58 +0100
Message-Id: <20230524171904.3967031-10-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

ARCH_DMA_MINALIGN represents the minimum (static) alignment for safe DMA
operations while ARCH_KMALLOC_MINALIGN is the minimum kmalloc() objects
alignment.

Signed-off-by: Catalin Marinas
Cc: Alasdair Kergon
Cc: Mike Snitzer
---
 drivers/md/dm-crypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 8b47b913ee83..ebbd8f7db880 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -3256,7 +3256,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	cc->per_bio_data_size = ti->per_io_data_size =
 		ALIGN(sizeof(struct dm_crypt_io) + cc->dmreq_start + additional_req_size,
-		      ARCH_KMALLOC_MINALIGN);
+		      ARCH_DMA_MINALIGN);
 
 	ret = mempool_init(&cc->page_pool, BIO_MAX_VECS, crypt_page_alloc,
 			   crypt_page_free, cc);
 	if (ret) {
From patchwork Wed May 24 17:18:59 2023
From: Catalin Marinas
Subject: [PATCH v5 10/15] arm64: Allow kmalloc() caches aligned to the smaller cache_line_size()
Date: Wed, 24 May 2023 18:18:59 +0100
Message-Id: <20230524171904.3967031-11-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

On arm64, ARCH_DMA_MINALIGN is 128, larger than the cache line size on
most of the current platforms (typically 64). Define
ARCH_KMALLOC_MINALIGN to 8 (the default for architectures without their
own ARCH_DMA_MINALIGN) and override dma_get_cache_alignment() to return
cache_line_size(), probed at run-time. The kmalloc() caches will be
limited to the cache line size. This will allow the additional
kmalloc-{64,192} caches on most arm64 platforms.

Signed-off-by: Catalin Marinas
Cc: Will Deacon
---
 arch/arm64/include/asm/cache.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index a51e6e8f3171..ceb368d33bf4 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -33,6 +33,7 @@
  * the CPU.
  */
 #define ARCH_DMA_MINALIGN	(128)
+#define ARCH_KMALLOC_MINALIGN	(8)
 
 #ifndef __ASSEMBLY__
 
@@ -90,6 +91,8 @@ static inline int cache_line_size_of_cpu(void)
 
 int cache_line_size(void);
 
+#define dma_get_cache_alignment	cache_line_size
+
 /*
  * Read the effective value of CTR_EL0.
 *
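Other architectures with a run-time cache line probe could opt in the same
way; only three definitions in the arch header are involved. A hedged sketch
of what such an opt-in might look like (the architecture, the 256-byte worst
case and the probe declaration are assumptions, not an existing port):

/* Hypothetical arch/<arch>/include/asm/cache.h fragment following the arm64
 * pattern above; names and values are purely illustrative. */
#ifndef __EXAMPLE_ASM_CACHE_H
#define __EXAMPLE_ASM_CACHE_H

#define ARCH_DMA_MINALIGN	(256)	/* largest line size across supported SoCs */
#define ARCH_KMALLOC_MINALIGN	(8)	/* fall back to the generic kmalloc() minimum */

#ifndef __ASSEMBLY__

int cache_line_size(void);		/* probed from hardware at boot */
#define dma_get_cache_alignment	cache_line_size

#endif	/* __ASSEMBLY__ */
#endif	/* __EXAMPLE_ASM_CACHE_H */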
From patchwork Wed May 24 17:19:00 2023
From: Catalin Marinas
Subject: [PATCH v5 11/15] scatterlist: Add dedicated config for DMA flags
Date: Wed, 24 May 2023 18:19:00 +0100
Message-Id: <20230524171904.3967031-12-catalin.marinas@arm.com>
In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com>

From: Robin Murphy

The DMA flags field will be useful for users beyond PCI P2P, so upgrade
to its own dedicated config option.

Signed-off-by: Robin Murphy
Signed-off-by: Catalin Marinas
Reviewed-by: Christoph Hellwig
---
 drivers/pci/Kconfig         | 1 +
 include/linux/scatterlist.h | 4 ++--
 kernel/dma/Kconfig          | 3 +++
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 9309f2469b41..3c07d8d214b3 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -168,6 +168,7 @@ config PCI_P2PDMA
 	#
 	depends on 64BIT
 	select GENERIC_ALLOCATOR
+	select NEED_SG_DMA_FLAGS
 	help
 	  Enableѕ drivers to do PCI peer-to-peer transactions to and from
 	  BARs that are exposed in other devices that are the part of
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 375a5e90d86a..87aaf8b5cdb4 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -16,7 +16,7 @@ struct scatterlist {
 #ifdef CONFIG_NEED_SG_DMA_LENGTH
 	unsigned int	dma_length;
 #endif
-#ifdef CONFIG_PCI_P2PDMA
+#ifdef CONFIG_NEED_SG_DMA_FLAGS
 	unsigned int    dma_flags;
 #endif
 };
@@ -249,7 +249,7 @@ static inline void sg_unmark_end(struct scatterlist *sg)
 }
 
 /*
- * CONFGI_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes
+ * CONFIG_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes
  * in struct scatterlist (assuming also CONFIG_NEED_SG_DMA_LENGTH is set).
  * Use this padding for DMA flags bits to indicate when a specific
  * dma address is a bus address.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 6677d0e64d27..acc6f231259c 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -24,6 +24,9 @@ config DMA_OPS_BYPASS
 config ARCH_HAS_DMA_MAP_DIRECT
 	bool
 
+config NEED_SG_DMA_FLAGS
+	bool
+
 config NEED_SG_DMA_LENGTH
 	bool
Wysocki" , linux-mm@kvack.org, iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org Subject: [PATCH v5 12/15] dma-mapping: Force bouncing if the kmalloc() size is not cache-line-aligned Date: Wed, 24 May 2023 18:19:01 +0100 Message-Id: <20230524171904.3967031-13-catalin.marinas@arm.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com> References: <20230524171904.3967031-1-catalin.marinas@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230524_102000_423843_D9712E94 X-CRM114-Status: GOOD ( 24.06 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org For direct DMA, if the size is small enough to have originated from a kmalloc() cache below ARCH_DMA_MINALIGN, check its alignment against dma_get_cache_alignment() and bounce if necessary. For larger sizes, it is the responsibility of the DMA API caller to ensure proper alignment. At this point, the kmalloc() caches are properly aligned but this will change in a subsequent patch. Architectures can opt in by selecting ARCH_WANT_KMALLOC_DMA_BOUNCE. Signed-off-by: Catalin Marinas Reviewed-by: Christoph Hellwig Cc: Robin Murphy Reviewed-by: Robin Murphy --- include/linux/dma-map-ops.h | 61 +++++++++++++++++++++++++++++++++++++ kernel/dma/Kconfig | 4 +++ kernel/dma/direct.h | 3 +- 3 files changed, 67 insertions(+), 1 deletion(-) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 31f114f486c4..9bf19b5bf755 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -8,6 +8,7 @@ #include #include +#include struct cma; @@ -277,6 +278,66 @@ static inline bool dev_is_dma_coherent(struct device *dev) } #endif /* CONFIG_ARCH_HAS_DMA_COHERENCE_H */ +/* + * Check whether potential kmalloc() buffers are safe for non-coherent DMA. + */ +static inline bool dma_kmalloc_safe(struct device *dev, + enum dma_data_direction dir) +{ + /* + * If DMA bouncing of kmalloc() buffers is disabled, the kmalloc() + * caches have already been aligned to a DMA-safe size. + */ + if (!IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC)) + return true; + + /* + * kmalloc() buffers are DMA-safe irrespective of size if the device + * is coherent or the direction is DMA_TO_DEVICE (non-desctructive + * cache maintenance and benign cache line evictions). + */ + if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE) + return true; + + return false; +} + +/* + * Check whether the given size, assuming it is for a kmalloc()'ed buffer, is + * sufficiently aligned for non-coherent DMA. + */ +static inline bool dma_kmalloc_size_aligned(size_t size) +{ + /* + * Larger kmalloc() sizes are guaranteed to be aligned to + * ARCH_DMA_MINALIGN. + */ + if (size >= 2 * ARCH_DMA_MINALIGN || + IS_ALIGNED(kmalloc_size_roundup(size), dma_get_cache_alignment())) + return true; + + return false; +} + +/* + * Check whether the given object size may have originated from a kmalloc() + * buffer with a slab alignment below the DMA-safe alignment and needs + * bouncing for non-coherent DMA. The pointer alignment is not considered and + * in-structure DMA-safe offsets are the responsibility of the caller. Such + * code should use the static ARCH_DMA_MINALIGN for compiler annotations. 
+ * + * The heuristics can have false positives, bouncing unnecessarily, though the + * buffers would be small. False negatives are theoretically possible if, for + * example, multiple small kmalloc() buffers are coalesced into a larger + * buffer that passes the alignment check. There are no such known constructs + * in the kernel. + */ +static inline bool dma_kmalloc_needs_bounce(struct device *dev, size_t size, + enum dma_data_direction dir) +{ + return !dma_kmalloc_safe(dev, dir) && !dma_kmalloc_size_aligned(size); +} + void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs); void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig index acc6f231259c..abea1823fe21 100644 --- a/kernel/dma/Kconfig +++ b/kernel/dma/Kconfig @@ -90,6 +90,10 @@ config SWIOTLB bool select NEED_DMA_MAP_STATE +config DMA_BOUNCE_UNALIGNED_KMALLOC + bool + depends on SWIOTLB + config DMA_RESTRICTED_POOL bool "DMA Restricted Pool" depends on OF && OF_RESERVED_MEM && SWIOTLB diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index e38ffc5e6bdd..97ec892ea0b5 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -94,7 +94,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, return swiotlb_map(dev, phys, size, dir, attrs); } - if (unlikely(!dma_capable(dev, dma_addr, size, true))) { + if (unlikely(!dma_capable(dev, dma_addr, size, true)) || + dma_kmalloc_needs_bounce(dev, size, dir)) { if (is_pci_p2pdma_page(page)) return DMA_MAPPING_ERROR; if (is_swiotlb_active(dev)) From patchwork Wed May 24 17:19:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Catalin Marinas X-Patchwork-Id: 13254369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5ECFBC77B73 for ; Wed, 24 May 2023 17:21:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=2JRfMynlsdcAeIF/siGVFNRBMQpVhyjKXmdQAU3mhVA=; b=faXfuWbnW25Gtu 5iccdxOKbzJGiZWEPUN8HIRbMgDhvwATXRMmUv1TN57VOz1waGzXjjD0xXgh+pEJ4H9/Ol/iV0I+B dGyp+zAdtoHV6ICXy+xipwfUZDpihFtJeEQuBeniFv6SKG2U+w3M7cEZ97ckGKNGTA86T1nkfgz27 6zqaCazdZBjTSig5RV64y/CI04LMJLhi2SMJYv92VXamEo2j0Vrr/sCK9QcN74ToiniIjiijffp3F BbTaG55vUTOrQh3ccFzow67XxQirjiYqHKdKvBpABs4DOJE0bL91bDsdF+Zr2QtUe0yATYQDSmAKf Q4Dq0tJ+dpnN0yMnc5rA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q1sAJ-00E9Zy-2Z; Wed, 24 May 2023 17:20:47 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q1s9c-00E975-20 for linux-arm-kernel@lists.infradead.org; Wed, 24 May 2023 17:20:07 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2580B63FA4; Wed, 24 May 2023 17:20:04 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ACB15C433B0; Wed, 24 May 2023 17:19:59 +0000 (UTC) From: Catalin Marinas To: Linus Torvalds , Christoph Hellwig , Robin Murphy Cc: Arnd Bergmann , Greg Kroah-Hartman , Will Deacon , Marc Zyngier , Andrew Morton , Herbert Xu , Ard Biesheuvel , Isaac Manjarres , Saravana Kannan , Alasdair Kergon , Daniel Vetter , Joerg Roedel , Mark Brown , Mike Snitzer , "Rafael J. Wysocki" , linux-mm@kvack.org, iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org Subject: [PATCH v5 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned Date: Wed, 24 May 2023 18:19:02 +0100 Message-Id: <20230524171904.3967031-14-catalin.marinas@arm.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com> References: <20230524171904.3967031-1-catalin.marinas@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230524_102004_749703_D17816D7 X-CRM114-Status: GOOD ( 23.20 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Similarly to the direct DMA, bounce small allocations as they may have originated from a kmalloc() cache not safe for DMA. Unlike the direct DMA, iommu_dma_map_sg() cannot call iommu_dma_map_sg_swiotlb() for all non-coherent devices as this would break some cases where the iova is expected to be contiguous (dmabuf). Instead, scan the scatterlist for any small sizes and only go the swiotlb path if any element of the list needs bouncing (note that iommu_dma_map_page() would still only bounce those buffers which are not DMA-aligned). To avoid scanning the scatterlist on the 'sync' operations, introduce an SG_DMA_USE_SWIOTLB flag set by iommu_dma_map_sg_swiotlb(). The dev_use_swiotlb() function together with the newly added dev_use_sg_swiotlb() now check for both untrusted devices and unaligned kmalloc() buffers (suggested by Robin Murphy). 
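To make the behaviour concrete, here is a driver-side sketch (not part of the patch; the device pointer, sizes and function name are made up, and teardown is omitted): one small kmalloc() element is enough to route the whole list through iommu_dma_map_sg_swiotlb(), while a list of cache-line-aligned sizes maps exactly as before.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int example_map_small_sg(struct device *dev)
{
	struct scatterlist sgl[2];
	void *hdr  = kmalloc(32, GFP_KERNEL);		/* may come from kmalloc-32 */
	void *data = kmalloc(PAGE_SIZE, GFP_KERNEL);	/* already a DMA-safe size */
	int nents;

	if (!hdr || !data) {
		kfree(hdr);
		kfree(data);
		return -ENOMEM;
	}

	sg_init_table(sgl, 2);
	sg_set_buf(&sgl[0], hdr, 32);
	sg_set_buf(&sgl[1], data, PAGE_SIZE);

	/*
	 * For a non-coherent device and DMA_FROM_DEVICE, dev_use_sg_swiotlb()
	 * notices the 32-byte element, so iommu_dma_map_sg() falls back to
	 * iommu_dma_map_sg_swiotlb() and sets SG_DMA_USE_SWIOTLB; the later
	 * sync/unmap calls check that flag instead of rescanning the lengths.
	 */
	nents = dma_map_sg(dev, sgl, 2, DMA_FROM_DEVICE);
	return nents ? 0 : -EIO;
}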
Signed-off-by: Catalin Marinas Cc: Joerg Roedel Cc: Christoph Hellwig Cc: Robin Murphy Reviewed-by: Robin Murphy --- drivers/iommu/Kconfig | 1 + drivers/iommu/dma-iommu.c | 50 ++++++++++++++++++++++++++++++------- include/linux/scatterlist.h | 25 +++++++++++++++++-- 3 files changed, 65 insertions(+), 11 deletions(-) diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index db98c3f86e8c..670eff7a8e11 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -152,6 +152,7 @@ config IOMMU_DMA select IOMMU_IOVA select IRQ_MSI_IOMMU select NEED_SG_DMA_LENGTH + select NEED_SG_DMA_FLAGS if SWIOTLB # Shared Virtual Addressing config IOMMU_SVA diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 7a9f0b0bddbd..24a8b8c2368c 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -520,9 +520,38 @@ static bool dev_is_untrusted(struct device *dev) return dev_is_pci(dev) && to_pci_dev(dev)->untrusted; } -static bool dev_use_swiotlb(struct device *dev) +static bool dev_use_swiotlb(struct device *dev, size_t size, + enum dma_data_direction dir) { - return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev); + return IS_ENABLED(CONFIG_SWIOTLB) && + (dev_is_untrusted(dev) || + dma_kmalloc_needs_bounce(dev, size, dir)); +} + +static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg, + int nents, enum dma_data_direction dir) +{ + struct scatterlist *s; + int i; + + if (!IS_ENABLED(CONFIG_SWIOTLB)) + return false; + + if (dev_is_untrusted(dev)) + return true; + + /* + * If kmalloc() buffers are not DMA-safe for this device and + * direction, check the individual lengths in the sg list. If any + * element is deemed unsafe, use the swiotlb for bouncing. + */ + if (!dma_kmalloc_safe(dev, dir)) { + for_each_sg(sg, s, nents, i) + if (!dma_kmalloc_size_aligned(s->length)) + return true; + } + + return false; } /** @@ -922,7 +951,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev, { phys_addr_t phys; - if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev)) + if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir)) return; phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle); @@ -938,7 +967,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev, { phys_addr_t phys; - if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev)) + if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir)) return; phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle); @@ -956,7 +985,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg; int i; - if (dev_use_swiotlb(dev)) + if (sg_is_dma_use_swiotlb(sgl)) for_each_sg(sgl, sg, nelems, i) iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg), sg->length, dir); @@ -972,7 +1001,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg; int i; - if (dev_use_swiotlb(dev)) + if (sg_is_dma_use_swiotlb(sgl)) for_each_sg(sgl, sg, nelems, i) iommu_dma_sync_single_for_device(dev, sg_dma_address(sg), @@ -998,7 +1027,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, * If both the physical buffer start address and size are * page aligned, we don't need to use a bounce page. 
*/ - if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) { + if (dev_use_swiotlb(dev, size, dir) && + iova_offset(iovad, phys | size)) { void *padding_start; size_t padding_size, aligned_size; @@ -1166,6 +1196,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg, struct scatterlist *s; int i; + sg_dma_mark_use_swiotlb(sg); + for_each_sg(sg, s, nents, i) { sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s), s->offset, s->length, dir, attrs); @@ -1210,7 +1242,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, goto out; } - if (dev_use_swiotlb(dev)) + if (dev_use_sg_swiotlb(dev, sg, nents, dir)) return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs); if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) @@ -1315,7 +1347,7 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, struct scatterlist *tmp; int i; - if (dev_use_swiotlb(dev)) { + if (sg_is_dma_use_swiotlb(sg)) { iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs); return; } diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index 87aaf8b5cdb4..330a157c5501 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -248,6 +248,29 @@ static inline void sg_unmark_end(struct scatterlist *sg) sg->page_link &= ~SG_END; } +#define SG_DMA_BUS_ADDRESS (1 << 0) +#define SG_DMA_USE_SWIOTLB (1 << 1) + +#ifdef CONFIG_SWIOTLB +static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg) +{ + return sg->dma_flags & SG_DMA_USE_SWIOTLB; +} + +static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg) +{ + sg->dma_flags |= SG_DMA_USE_SWIOTLB; +} +#else +static inline bool sg_is_dma_use_swiotlb(struct scatterlist *sg) +{ + return false; +} +static inline void sg_dma_mark_use_swiotlb(struct scatterlist *sg) +{ +} +#endif + /* * CONFIG_PCI_P2PDMA depends on CONFIG_64BIT which means there is 4 bytes * in struct scatterlist (assuming also CONFIG_NEED_SG_DMA_LENGTH is set). 
@@ -256,8 +279,6 @@ static inline void sg_unmark_end(struct scatterlist *sg) */ #ifdef CONFIG_PCI_P2PDMA -#define SG_DMA_BUS_ADDRESS (1 << 0) - /** * sg_dma_is_bus address - Return whether a given segment was marked * as a bus address From patchwork Wed May 24 17:19:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Catalin Marinas X-Patchwork-Id: 13254368 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1CFB8C77B73 for ; Wed, 24 May 2023 17:21:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=oUcwCbifiAq96KyNzJ7BwSJs8Isxydev8Thh2vsHyaI=; b=IL107rnhweRY3z wrrhyEiHnIImET5wuD+HfRw5XU/q9D91U7WTh0fXTHiEiaXopDRJyTKSB9YtTYxTPeHzvLzjbSUh5 ZDzd4lFT4FNpcYsIZT/hXsO1Bb9Y5t68vvWn9uYrg9+6m3SJgtsEg7OL40Ur6g4LWNNg4jAzDrmoP oAS2irnIyr83nHuBU/eNckxGPjoxwkKIcZ2yDIBgWyNGEJPRNAeh1ZYpYL9Hrwz+r4H38sOHX275B iCl5nqYVW5fmJb7figdYeJuOO7FzLNAHxycIwZyauYWD2lIz0SwbMGNvVBACamaawDMw9XgW6CHQb n09KYehe5udvAWMCW2bw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q1sAK-00E9ac-1R; Wed, 24 May 2023 17:20:48 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q1s9g-00E99r-25 for linux-arm-kernel@lists.infradead.org; Wed, 24 May 2023 17:20:10 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 23C8363F80; Wed, 24 May 2023 17:20:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC7BDC4339B; Wed, 24 May 2023 17:20:03 +0000 (UTC) From: Catalin Marinas To: Linus Torvalds , Christoph Hellwig , Robin Murphy Cc: Arnd Bergmann , Greg Kroah-Hartman , Will Deacon , Marc Zyngier , Andrew Morton , Herbert Xu , Ard Biesheuvel , Isaac Manjarres , Saravana Kannan , Alasdair Kergon , Daniel Vetter , Joerg Roedel , Mark Brown , Mike Snitzer , "Rafael J. 
Wysocki" , linux-mm@kvack.org, iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org Subject: [PATCH v5 14/15] mm: slab: Reduce the kmalloc() minimum alignment if DMA bouncing possible Date: Wed, 24 May 2023 18:19:03 +0100 Message-Id: <20230524171904.3967031-15-catalin.marinas@arm.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com> References: <20230524171904.3967031-1-catalin.marinas@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230524_102008_746103_C8441FAE X-CRM114-Status: GOOD ( 14.54 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If an architecture opted in to DMA bouncing of unaligned kmalloc() buffers (ARCH_WANT_KMALLOC_DMA_BOUNCE), reduce the minimum kmalloc() cache alignment below cache-line size to ARCH_KMALLOC_MINALIGN. Signed-off-by: Catalin Marinas Cc: Andrew Morton Cc: Christoph Hellwig Cc: Robin Murphy --- mm/slab_common.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/mm/slab_common.c b/mm/slab_common.c index 7c6475847fdf..fe46459a8b77 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -863,10 +864,19 @@ void __init setup_kmalloc_cache_index_table(void) } } +#ifdef CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC +static unsigned int __kmalloc_minalign(void) +{ + if (io_tlb_default_mem.nslabs) + return ARCH_KMALLOC_MINALIGN; + return dma_get_cache_alignment(); +} +#else static unsigned int __kmalloc_minalign(void) { return dma_get_cache_alignment(); } +#endif void __init new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags) From patchwork Wed May 24 17:19:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Catalin Marinas X-Patchwork-Id: 13254367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 33655C77B7C for ; Wed, 24 May 2023 17:21:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=FYenWDLa+VwUOEdIym5Fjya30t/Niht3vZ15Q0oLKXI=; b=cVuC3T9QVut8MA w7QIs+wPch7tICRMrnUuSpBaoZLciOMJxnzOhEVGCQ35JerMldTBqLHfRSQT8cVwf+SURZkdnn56i QO6V2J3n9HCKdl7sKEAMuuI7ME5B+J/7xSAihYXNrBwV2fDy8YsVxOgmDnmbHPvsGU0zadgmqhD4W apZ4ZiAi2RILe6fUen7SSeZDKTVC1OMkdiUDjZBo9n0F6o8iUqGGl5fdFz/NrnUpAnxO+I3DDSJ4K 6MpCqyBMgiWE6gxhnjevd3+CcGRo9E45/6XKnYZ6gW6hdYNXkpo/4tdrjgbMTPfZkTfZ7T9jXFTlc B4oaIvcN6MFvevafrxoA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1q1sAI-00E9Yy-1L; 
Wed, 24 May 2023 17:20:46 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1q1s9k-00E9Dl-2U for linux-arm-kernel@lists.infradead.org; Wed, 24 May 2023 17:20:14 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3CA2E638B6; Wed, 24 May 2023 17:20:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EEE70C433AC; Wed, 24 May 2023 17:20:07 +0000 (UTC) From: Catalin Marinas To: Linus Torvalds , Christoph Hellwig , Robin Murphy Cc: Arnd Bergmann , Greg Kroah-Hartman , Will Deacon , Marc Zyngier , Andrew Morton , Herbert Xu , Ard Biesheuvel , Isaac Manjarres , Saravana Kannan , Alasdair Kergon , Daniel Vetter , Joerg Roedel , Mark Brown , Mike Snitzer , "Rafael J. Wysocki" , linux-mm@kvack.org, iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org Subject: [PATCH v5 15/15] arm64: Enable ARCH_WANT_KMALLOC_DMA_BOUNCE for arm64 Date: Wed, 24 May 2023 18:19:04 +0100 Message-Id: <20230524171904.3967031-16-catalin.marinas@arm.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230524171904.3967031-1-catalin.marinas@arm.com> References: <20230524171904.3967031-1-catalin.marinas@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230524_102012_923631_1DBA91A3 X-CRM114-Status: GOOD ( 15.72 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org With the DMA bouncing of unaligned kmalloc() buffers now in place, enable it for arm64 to allow the kmalloc-{8,16,32,48,96} caches. In addition, always create the swiotlb buffer even when the end of RAM is within the 32-bit physical address range (the swiotlb buffer can still be disabled on the kernel command line). Signed-off-by: Catalin Marinas Cc: Will Deacon --- arch/arm64/Kconfig | 1 + arch/arm64/mm/init.c | 7 ++++++- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index b1201d25a8a4..af42871431c0 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -120,6 +120,7 @@ config ARM64 select CRC32 select DCACHE_WORD_ACCESS select DYNAMIC_FTRACE if FUNCTION_TRACER + select DMA_BOUNCE_UNALIGNED_KMALLOC select DMA_DIRECT_REMAP select EDAC_SUPPORT select FRAME_POINTER diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 66e70ca47680..3ac2e9d79ce4 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -442,7 +442,12 @@ void __init bootmem_init(void) */ void __init mem_init(void) { - swiotlb_init(max_pfn > PFN_DOWN(arm64_dma_phys_limit), SWIOTLB_VERBOSE); + bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit); + + if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC)) + swiotlb = true; + + swiotlb_init(swiotlb, SWIOTLB_VERBOSE); /* this will put all unused low memory onto the freelists */ memblock_free_all();
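Taken together, the series leaves driver code with a simple rule, illustrated by the hedged sketch below (the struct and function names are invented, not taken from the series): keep the static ARCH_DMA_MINALIGN annotation for DMA-accessed members inside larger structures, and let the DMA API bounce small standalone kmalloc() buffers on non-coherent devices.

#include <linux/cache.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

/*
 * A device-visible area embedded in a CPU structure keeps the static
 * ARCH_DMA_MINALIGN annotation so it never shares a cache line with the
 * CPU-only fields, whatever the runtime kmalloc() alignment ends up being.
 */
struct example_cmd {
	unsigned long cpu_flags;			/* CPU-only state */
	u8 resp[64] __aligned(ARCH_DMA_MINALIGN);	/* written by the device */
};

static void *example_alloc_small(void)
{
	/*
	 * May now come from kmalloc-16 and be aligned to less than a cache
	 * line; dma_map_single()/dma_map_sg() detect this via
	 * dma_kmalloc_needs_bounce() and bounce it through swiotlb when the
	 * device is non-coherent, so no padding is needed here.
	 */
	return kmalloc(16, GFP_KERNEL);
}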