From patchwork Wed Aug 7 06:41:09 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13755898
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, Andreas Larsson, Andrew Morton, Arnd Bergmann,
 Borislav Petkov, Catalin Marinas, Christophe Leroy, Dan Williams,
 Dave Hansen, David Hildenbrand, "David S. Miller", Davidlohr Bueso,
 Greg Kroah-Hartman, Heiko Carstens, Huacai Chen, Ingo Molnar,
 Jiaxun Yang, John Paul Adrian Glaubitz, Jonathan Cameron,
 Jonathan Corbet, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
Wysocki" , Rob Herring , Samuel Holland , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Will Deacon , Zi Yan , devicetree@vger.kernel.org, linux-acpi@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, nvdimm@lists.linux.dev, sparclinux@vger.kernel.org, x86@kernel.org, Jonathan Cameron Subject: [PATCH v4 25/26] mm: make range-to-target_node lookup facility a part of numa_memblks Date: Wed, 7 Aug 2024 09:41:09 +0300 Message-ID: <20240807064110.1003856-26-rppt@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240807064110.1003856-1-rppt@kernel.org> References: <20240807064110.1003856-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240806_234631_923026_7110E7F0 X-CRM114-Status: GOOD ( 20.17 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: "Mike Rapoport (Microsoft)" The x86 implementation of range-to-target_node lookup (i.e. phys_to_target_node() and memory_add_physaddr_to_nid()) relies on numa_memblks. Since numa_memblks are now part of the generic code, move these functions from x86 to mm/numa_memblks.c and select CONFIG_NUMA_KEEP_MEMINFO when CONFIG_NUMA_MEMBLKS=y for dax and cxl. Signed-off-by: Mike Rapoport (Microsoft) Reviewed-by: Jonathan Cameron Tested-by: Zi Yan # for x86_64 and arm64 Tested-by: Jonathan Cameron [arm64 + CXL via QEMU] Reviewed-by: Dan Williams Acked-by: David Hildenbrand --- arch/x86/include/asm/sparsemem.h | 9 -------- arch/x86/mm/numa.c | 38 -------------------------------- drivers/cxl/Kconfig | 2 +- drivers/dax/Kconfig | 2 +- include/linux/numa_memblks.h | 7 ++++++ mm/numa.c | 1 + mm/numa_memblks.c | 38 ++++++++++++++++++++++++++++++++ 7 files changed, 48 insertions(+), 49 deletions(-) diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h index 64df897c0ee3..3918c7a434f5 100644 --- a/arch/x86/include/asm/sparsemem.h +++ b/arch/x86/include/asm/sparsemem.h @@ -31,13 +31,4 @@ #endif /* CONFIG_SPARSEMEM */ -#ifndef __ASSEMBLY__ -#ifdef CONFIG_NUMA_KEEP_MEMINFO -extern int phys_to_target_node(phys_addr_t start); -#define phys_to_target_node phys_to_target_node -extern int memory_add_physaddr_to_nid(u64 start); -#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid -#endif -#endif /* __ASSEMBLY__ */ - #endif /* _ASM_X86_SPARSEMEM_H */ diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c index d23287611449..64e5cdb2460a 100644 --- a/arch/x86/mm/numa.c +++ b/arch/x86/mm/numa.c @@ -453,41 +453,3 @@ u64 __init numa_emu_dma_end(void) return PFN_PHYS(MAX_DMA32_PFN); } #endif /* CONFIG_NUMA_EMU */ - -#ifdef CONFIG_NUMA_KEEP_MEMINFO -static int meminfo_to_nid(struct numa_meminfo *mi, u64 start) -{ - int i; - - for (i = 0; i < mi->nr_blks; i++) - if (mi->blk[i].start <= start && mi->blk[i].end > start) - return mi->blk[i].nid; - return NUMA_NO_NODE; -} - -int phys_to_target_node(phys_addr_t start) -{ - int nid = meminfo_to_nid(&numa_meminfo, start); - - /* - * Prefer online nodes, but if reserved memory might be - * hot-added continue 
-	 * hot-added continue the search with reserved ranges.
-	 */
-	if (nid != NUMA_NO_NODE)
-		return nid;
-
-	return meminfo_to_nid(&numa_reserved_meminfo, start);
-}
-EXPORT_SYMBOL_GPL(phys_to_target_node);
-
-int memory_add_physaddr_to_nid(u64 start)
-{
-	int nid = meminfo_to_nid(&numa_meminfo, start);
-
-	if (nid == NUMA_NO_NODE)
-		nid = numa_meminfo.blk[0].nid;
-	return nid;
-}
-EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
-
-#endif
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 99b5c25be079..29c192f20082 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -6,7 +6,7 @@ menuconfig CXL_BUS
 	select FW_UPLOAD
 	select PCI_DOE
 	select FIRMWARE_TABLE
-	select NUMA_KEEP_MEMINFO if (NUMA && X86)
+	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
 	help
 	  CXL is a bus that is electrically compatible with PCI Express, but
 	  layers three protocols on that signalling (CXL.io, CXL.cache, and
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index a88744244149..d656e4c0eb84 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -30,7 +30,7 @@ config DEV_DAX_PMEM
 config DEV_DAX_HMEM
 	tristate "HMEM DAX: direct access to 'specific purpose' memory"
 	depends on EFI_SOFT_RESERVE
-	select NUMA_KEEP_MEMINFO if (NUMA && X86)
+	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
 	default DEV_DAX
 	help
 	  EFI 2.8 platforms, and others, may advertise 'specific purpose'
diff --git a/include/linux/numa_memblks.h b/include/linux/numa_memblks.h
index 5c6e12ad0b7a..17d4bcc34091 100644
--- a/include/linux/numa_memblks.h
+++ b/include/linux/numa_memblks.h
@@ -46,6 +46,13 @@ static inline int numa_emu_cmdline(char *str)
 }
 #endif /* CONFIG_NUMA_EMU */
 
+#ifdef CONFIG_NUMA_KEEP_MEMINFO
+extern int phys_to_target_node(phys_addr_t start);
+#define phys_to_target_node phys_to_target_node
+extern int memory_add_physaddr_to_nid(u64 start);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+#endif /* CONFIG_NUMA_KEEP_MEMINFO */
+
 #endif /* CONFIG_NUMA_MEMBLKS */
 
 #endif /* __NUMA_MEMBLKS_H */
diff --git a/mm/numa.c b/mm/numa.c
index 1f1582dcdf4a..e2eec07707d1 100644
--- a/mm/numa.c
+++ b/mm/numa.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include <linux/numa_memblks.h>
 
 struct pglist_data *node_data[MAX_NUMNODES];
 EXPORT_SYMBOL(node_data);
diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index c4037faa438b..a28507cf1e7f 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -531,3 +531,41 @@ int __init numa_fill_memblks(u64 start, u64 end)
 	}
 	return 0;
 }
+
+#ifdef CONFIG_NUMA_KEEP_MEMINFO
+static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
+{
+	int i;
+
+	for (i = 0; i < mi->nr_blks; i++)
+		if (mi->blk[i].start <= start && mi->blk[i].end > start)
+			return mi->blk[i].nid;
+	return NUMA_NO_NODE;
+}
+
+int phys_to_target_node(phys_addr_t start)
+{
+	int nid = meminfo_to_nid(&numa_meminfo, start);
+
+	/*
+	 * Prefer online nodes, but if reserved memory might be
+	 * hot-added continue the search with reserved ranges.
+	 */
+	if (nid != NUMA_NO_NODE)
+		return nid;
+
+	return meminfo_to_nid(&numa_reserved_meminfo, start);
+}
+EXPORT_SYMBOL_GPL(phys_to_target_node);
+
+int memory_add_physaddr_to_nid(u64 start)
+{
+	int nid = meminfo_to_nid(&numa_meminfo, start);
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_meminfo.blk[0].nid;
+	return nid;
+}
+EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+#endif /* CONFIG_NUMA_KEEP_MEMINFO */
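
A note for readers, not part of the patch: phys_to_target_node() and
memory_add_physaddr_to_nid() are the helpers that memory hotplug paths and
drivers such as device-dax use to map a physical address to a NUMA node.
Below is a minimal sketch of that usage; the caller name is hypothetical,
and it assumes the usual declarations from <linux/numa.h> and
<linux/memory_hotplug.h>:

/* Hypothetical caller, for illustration only -- not part of this patch. */
#include <linux/numa.h>			/* NUMA_NO_NODE, phys_to_target_node() */
#include <linux/memory_hotplug.h>	/* memory_add_physaddr_to_nid() */

static int example_resolve_nid(phys_addr_t start)
{
	/* Checks online ranges first, then reserved (hot-addable) ranges. */
	int nid = phys_to_target_node(start);

	/* memory_add_physaddr_to_nid() always returns some node. */
	if (nid == NUMA_NO_NODE)
		nid = memory_add_physaddr_to_nid(start);

	return nid;
}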