From patchwork Wed May 29 17:12:29 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679305
X-BeenThere: linux-arm-kernel@lists.infradead.org
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport,
    Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid()
Date: Wed, 29 May 2024 18:12:29 +0100
Message-ID: <20240529171236.32002-2-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

From: Dan Williams

Based heavily on Dan Williams' earlier attempt to introduce this
infrastructure for all architectures, so I have kept his authorship. [1]

arm64 stores its NUMA data in memblock. Add a memblock-generic way to
interrogate that data for memory_add_physaddr_to_nid().
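The lookup this patch adds can be sketched as a standalone userspace model. This is not the kernel code: the node table below is hypothetical (in the kernel the ranges come from memblock via get_pfn_range_for_nid()), 4 KiB pages are assumed, and the model uses half-open PFN ranges.

```c
#include <stdint.h>

#define PAGE_SHIFT 12		/* assume 4 KiB pages for the model */
#define PHYS_PFN(a) ((unsigned long)((a) >> PAGE_SHIFT))
#define NUMA_NO_NODE (-1)
#define NR_NODES 2

/* Hypothetical per-node PFN ranges, standing in for memblock data. */
static const struct { unsigned long start_pfn, end_pfn; } nodes[NR_NODES] = {
	{ 0x00000, 0x40000 },	/* node 0: 0..1 GiB */
	{ 0x40000, 0x80000 },	/* node 1: 1..2 GiB */
};

static int __memory_add_physaddr_to_nid(uint64_t addr)
{
	unsigned long pfn = PHYS_PFN(addr);
	int nid;

	for (nid = 0; nid < NR_NODES; nid++)
		if (pfn >= nodes[nid].start_pfn && pfn < nodes[nid].end_pfn)
			return nid;
	return NUMA_NO_NODE;
}

int memory_add_physaddr_to_nid(uint64_t start)
{
	int nid = __memory_add_physaddr_to_nid(start);

	/* Default to node 0: not all callers are prepared for failure. */
	return nid == NUMA_NO_NODE ? 0 : nid;
}
```

The fallback to node 0 mirrors the patch's comment that not all callers of memory_add_physaddr_to_nid() can cope with NUMA_NO_NODE.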
Cc: Mike Rapoport
Cc: Jia He
Cc: Will Deacon
Cc: David Hildenbrand
Cc: Andrew Morton
Signed-off-by: Dan Williams
Link: https://lore.kernel.org/r/159457120334.754248.12908401960465408733.stgit@dwillia2-desk3.amr.corp.intel.com [1]
Signed-off-by: Jonathan Cameron
---
 arch/arm64/include/asm/sparsemem.h |  4 ++++
 arch/arm64/mm/init.c               | 29 +++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8a8acc220371..8dd1b6a718fa 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -26,4 +26,8 @@
 #define SECTION_SIZE_BITS 27
 #endif /* CONFIG_ARM64_64K_PAGES */

+#ifndef __ASSEMBLY__
+extern int memory_add_physaddr_to_nid(u64 addr);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+#endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9b5ab6818f7f..f310cbd349ba 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -48,6 +48,35 @@
 #include
 #include

+#ifdef CONFIG_NUMA
+
+static int __memory_add_physaddr_to_nid(u64 addr)
+{
+	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
+	int nid;
+
+	for_each_online_node(nid) {
+		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+		if (pfn >= start_pfn && pfn <= end_pfn)
+			return nid;
+	}
+	return NUMA_NO_NODE;
+}
+
+int memory_add_physaddr_to_nid(u64 start)
+{
+	int nid = __memory_add_physaddr_to_nid(start);
+
+	/* Default to node0 as not all callers are prepared for this to fail */
+	if (nid == NUMA_NO_NODE)
+		return 0;
+
+	return nid;
+}
+EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+#endif /* CONFIG_NUMA */
+
 /*
  * We need to be able to catch inadvertent references to memstart_addr
  * that occur (potentially in generic code) before arm64_memblock_init()

From patchwork Wed May 29 17:12:30 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679306
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport,
    Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node()
Date: Wed, 29 May 2024 18:12:30 +0100
Message-ID: <20240529171236.32002-3-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

From: Dan Williams

Similar to how the generic memory_add_physaddr_to_nid() interrogates
memblock data for NUMA information, introduce
get_reserved_pfn_range_for_nid() to enable the same operation for
reserved memory ranges. Example memory ranges that are reserved but
still have associated NUMA info are persistent memory and Soft Reserved
(EFI_MEMORY_SP) memory.

This is Dan's patch, but with the implementation of
phys_to_target_node() made arm64 specific.
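The "online first, then reserved" lookup order can be modelled in plain C. The two range tables below are hypothetical stand-ins for memblock.memory and memblock.reserved; this is a sketch of the search order, not the kernel implementation.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12		/* assume 4 KiB pages for the model */
#define PHYS_PFN(a) ((unsigned long)((a) >> PAGE_SHIFT))
#define NUMA_NO_NODE (-1)

struct range { unsigned long start_pfn, end_pfn; int nid; };

/* Hypothetical tables: online memory vs. reserved-but-not-online
 * (e.g. Soft Reserved / EFI_MEMORY_SP) ranges. */
static const struct range online[]   = { { 0x00000, 0x40000, 0 } };
static const struct range reserved[] = { { 0x80000, 0xc0000, 1 } };

static int lookup(const struct range *tbl, size_t n, unsigned long pfn)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (pfn >= tbl[i].start_pfn && pfn < tbl[i].end_pfn)
			return tbl[i].nid;
	return NUMA_NO_NODE;
}

int phys_to_target_node(uint64_t start)
{
	unsigned long pfn = PHYS_PFN(start);
	int nid = lookup(online, sizeof(online) / sizeof(online[0]), pfn);

	if (nid != NUMA_NO_NODE)
		return nid;
	/* Not online: fall back to reserved ranges, as the patch does. */
	return lookup(reserved, sizeof(reserved) / sizeof(reserved[0]), pfn);
}
```

Unlike memory_add_physaddr_to_nid(), a miss here returns NUMA_NO_NODE rather than defaulting to node 0, matching the patch's phys_to_target_node() behaviour.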
Cc: Mike Rapoport
Cc: Jia He
Cc: Will Deacon
Cc: David Hildenbrand
Cc: Andrew Morton
Signed-off-by: Dan Williams
Link: https://lore.kernel.org/r/159457120893.754248.7783260004248722175.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Jonathan Cameron
---
 arch/arm64/include/asm/sparsemem.h |  4 ++++
 arch/arm64/mm/init.c               | 22 ++++++++++++++++++++++
 include/linux/memblock.h           |  8 ++++++++
 include/linux/mm.h                 | 14 ++++++++++++++
 mm/memblock.c                      | 22 +++++++++++++++++++---
 mm/mm_init.c                       | 29 ++++++++++++++++++++++++++++-
 6 files changed, 95 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8dd1b6a718fa..5b483ad6d501 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -27,7 +27,11 @@
 #endif /* CONFIG_ARM64_64K_PAGES */

 #ifndef __ASSEMBLY__
+
 extern int memory_add_physaddr_to_nid(u64 addr);
 #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+extern int phys_to_target_node(phys_addr_t start);
+#define phys_to_target_node phys_to_target_node
+
 #endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f310cbd349ba..6a2f21b1bb58 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -75,6 +75,28 @@ int memory_add_physaddr_to_nid(u64 start)
 }
 EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);

+int phys_to_target_node(phys_addr_t start)
+{
+	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(start);
+	int nid = __memory_add_physaddr_to_nid(start);
+
+	if (nid != NUMA_NO_NODE)
+		return nid;
+
+	/*
+	 * Search reserved memory ranges since the memory address does
+	 * not appear to be online
+	 */
+	for_each_node_state(nid, N_POSSIBLE) {
+		get_reserved_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+		if (pfn >= start_pfn && pfn <= end_pfn)
+			return nid;
+	}
+
+	return NUMA_NO_NODE;
+}
+EXPORT_SYMBOL(phys_to_target_node);
+
 #endif /* CONFIG_NUMA */

 /*
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e2082240586d..c7d518a54359 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -281,6 +281,10 @@ int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 			  unsigned long *out_end_pfn, int *out_nid);

+void __next_reserved_pfn_range(int *idx, int nid,
+			       unsigned long *out_start_pfn,
+			       unsigned long *out_end_pfn, int *out_nid);
+
 /**
  * for_each_mem_pfn_range - early memory pfn range iterator
  * @i: an integer used as loop variable
@@ -295,6 +299,10 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
 	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))

+#define for_each_reserved_pfn_range(i, nid, p_start, p_end, p_nid) \
+	for (i = -1, __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid); \
+	     i >= 0; __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid))
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
 				  unsigned long *out_spfn,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d4..0c829b2d44fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3245,9 +3245,23 @@ void free_area_init(unsigned long *max_zone_pfn);
 unsigned long node_map_pfn_alignment(void);
 extern unsigned long absent_pages_in_range(unsigned long start_pfn,
 					   unsigned long end_pfn);
+
+/*
+ * Allow archs to opt-in to keeping get_pfn_range_for_nid() available
+ * after boot.
+ */
+#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
+#define __init_or_memblock
+#else
+#define __init_or_memblock __init
+#endif
+
 extern void get_pfn_range_for_nid(unsigned int nid,
 			unsigned long *start_pfn, unsigned long *end_pfn);
+extern void get_reserved_pfn_range_for_nid(unsigned int nid,
+			unsigned long *start_pfn, unsigned long *end_pfn);
+
 #ifndef CONFIG_NUMA
 static inline int early_pfn_to_nid(unsigned long pfn)
 {
diff --git a/mm/memblock.c b/mm/memblock.c
index d09136e040d3..5498d5ea70b4 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1289,11 +1289,11 @@ void __init_memblock __next_mem_range_rev(u64 *idx, int nid,
 /*
  * Common iterator interface used to define for_each_mem_pfn_range().
  */
-void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+static void __init_memblock __next_memblock_pfn_range(int *idx, int nid,
 				unsigned long *out_start_pfn,
-				unsigned long *out_end_pfn, int *out_nid)
+				unsigned long *out_end_pfn, int *out_nid,
+				struct memblock_type *type)
 {
-	struct memblock_type *type = &memblock.memory;
 	struct memblock_region *r;
 	int r_nid;
@@ -1319,6 +1319,22 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 	*out_nid = r_nid;
 }

+void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+				unsigned long *out_start_pfn,
+				unsigned long *out_end_pfn, int *out_nid)
+{
+	__next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
+				  &memblock.memory);
+}
+
+void __init_memblock __next_reserved_pfn_range(int *idx, int nid,
+				unsigned long *out_start_pfn,
+				unsigned long *out_end_pfn, int *out_nid)
+{
+	__next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
+				  &memblock.reserved);
+}
+
 /**
  * memblock_set_node - set node ID on memblock regions
  * @base: base of area to set node ID for
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..1f6e29e60673 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1644,7 +1644,7 @@ static inline void alloc_node_mem_map(struct pglist_data *pgdat) { }
  * provided by memblock_set_node(). If called for a node
  * with no available memory, the start and end PFNs will be 0.
  */
-void __init get_pfn_range_for_nid(unsigned int nid,
+void __init_or_memblock get_pfn_range_for_nid(unsigned int nid,
 			unsigned long *start_pfn, unsigned long *end_pfn)
 {
 	unsigned long this_start_pfn, this_end_pfn;
@@ -1662,6 +1662,33 @@ void __init get_pfn_range_for_nid(unsigned int nid,
 	*start_pfn = 0;
 }

+/**
+ * get_reserved_pfn_range_for_nid - Return the start and end page frames for a node
+ * @nid: The nid to return the range for. If MAX_NUMNODES, the min and max PFN are returned.
+ * @start_pfn: Passed by reference. On return, it will have the node start_pfn.
+ * @end_pfn: Passed by reference. On return, it will have the node end_pfn.
+ *
+ * Mostly identical to get_pfn_range_for_nid() except it operates on
+ * reserved ranges rather than online memory.
+ */
+void __init_or_memblock get_reserved_pfn_range_for_nid(unsigned int nid,
+			unsigned long *start_pfn, unsigned long *end_pfn)
+{
+	unsigned long this_start_pfn, this_end_pfn;
+	int i;
+
+	*start_pfn = -1UL;
+	*end_pfn = 0;
+
+	for_each_reserved_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
+		*start_pfn = min(*start_pfn, this_start_pfn);
+		*end_pfn = max(*end_pfn, this_end_pfn);
+	}
+
+	if (*start_pfn == -1UL)
+		*start_pfn = 0;
+}
+
 static void __init free_area_init_node(int nid)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);

From patchwork Wed May 29 17:12:31 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679307
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport,
    Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved
Date: Wed, 29 May 2024 18:12:31 +0100
Message-ID: <20240529171236.32002-4-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

For CXL CFMWS regions, we need to add memblocks that may not be in the
system memory map so that their nid can be queried later. Add a
function to make this easy to do.
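The helper's role can be modelled in userspace with a fixed-size array standing in for memblock.reserved. This is a sketch only: the real memblock_add_reserved_node() delegates to memblock_add_range(), which handles merging, resizing, and flags; the names below beyond the helper itself are hypothetical.

```c
#include <stdint.h>

#define MAX_RESERVED 8
#define NUMA_NO_NODE (-1)

struct region { uint64_t base, size; int nid; };

/* Hypothetical stand-in for memblock.reserved. */
static struct region reserved[MAX_RESERVED];
static int reserved_cnt;

/* Record a range that need not be in the system memory map,
 * tagged with a nid so it can be queried later. */
int memblock_add_reserved_node(uint64_t base, uint64_t size, int nid)
{
	if (reserved_cnt == MAX_RESERVED)
		return -1;
	reserved[reserved_cnt].base = base;
	reserved[reserved_cnt].size = size;
	reserved[reserved_cnt].nid = nid;
	reserved_cnt++;
	return 0;
}

/* Later lookup: which nid was recorded for this address? */
int reserved_nid_of(uint64_t addr)
{
	int i;

	for (i = 0; i < reserved_cnt; i++)
		if (addr >= reserved[i].base &&
		    addr - reserved[i].base < reserved[i].size)
			return reserved[i].nid;
	return NUMA_NO_NODE;
}
```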
Signed-off-by: Jonathan Cameron
---
 include/linux/memblock.h |  2 ++
 mm/memblock.c            | 11 +++++++++++
 2 files changed, 13 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index c7d518a54359..9ac1ed8c3293 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -113,6 +113,8 @@ static inline void memblock_discard(void) {}
 void memblock_allow_resize(void);
 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid,
 		      enum memblock_flags flags);
+int memblock_add_reserved_node(phys_addr_t base, phys_addr_t size, int nid,
+			       enum memblock_flags flags);
 int memblock_add(phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_phys_free(phys_addr_t base, phys_addr_t size);
diff --git a/mm/memblock.c b/mm/memblock.c
index 5498d5ea70b4..8d02f75ec186 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -714,6 +714,17 @@ int __init_memblock memblock_add_node(phys_addr_t base, phys_addr_t size,
 	return memblock_add_range(&memblock.memory, base, size, nid, flags);
 }

+int __init_memblock memblock_add_reserved_node(phys_addr_t base, phys_addr_t size,
+					       int nid, enum memblock_flags flags)
+{
+	phys_addr_t end = base + size - 1;
+
+	memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
+		     &base, &end, nid, flags, (void *)_RET_IP_);
+
+	return memblock_add_range(&memblock.reserved, base, size, nid, flags);
+}
+
 /**
  * memblock_add - add new memblock region
  * @base: base address of the new region

From patchwork Wed May 29 17:12:32 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679308
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport,
    Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes
Date: Wed, 29 May 2024 18:12:32 +0100
Message-ID: <20240529171236.32002-5-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

ACPI can declare NUMA nodes for memory that will come along later, and
CXL Fixed Memory Windows may also be assigned NUMA nodes that are
initially empty. Currently the generic arch_numa handling onlines these
empty nodes. That is inconsistent both with x86 and with itself: if we
add memory and then remove it again, the node goes away.
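The skip condition the patch adds to numa_register_nodes() can be sketched as a small predicate. The struct below is a hypothetical stand-in for the node's PFN range and its node_state() bits; it is a model of the test, not kernel code.

```c
#include <stdbool.h>

/* Hypothetical per-node state mirroring what the patch consults. */
struct node {
	unsigned long start_pfn, end_pfn; /* empty if start_pfn >= end_pfn */
	bool has_cpu;                     /* node_state(nid, N_CPU) */
	bool has_generic_initiator;       /* node_state(nid, N_GENERIC_INITIATOR) */
};

/* Should this node be set up and onlined at boot? */
bool should_online(const struct node *n)
{
	/* Empty node with no CPU and no generic initiator: leave it
	 * offline so hotplug can bring it up later, matching x86. */
	if (n->start_pfn >= n->end_pfn &&
	    !n->has_cpu && !n->has_generic_initiator)
		return false;
	return true;
}
```

A memory-less node is still onlined if it hosts a CPU or a Generic Initiator, since those nodes are genuinely in use even without memory.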
Signed-off-by: Jonathan Cameron
---
 drivers/base/arch_numa.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 5b59d133b6af..0630efb696ab 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -363,6 +363,11 @@ static int __init numa_register_nodes(void)
 		unsigned long start_pfn, end_pfn;

 		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+		if (start_pfn >= end_pfn &&
+		    !node_state(nid, N_CPU) &&
+		    !node_state(nid, N_GENERIC_INITIATOR))
+			continue;
+
 		setup_node_data(nid, start_pfn, end_pfn);
 		node_set_online(nid);
 	}

From patchwork Wed May 29 17:12:33 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679309
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport,
    Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions
Date: Wed, 29 May 2024 18:12:33 +0100
Message-ID: <20240529171236.32002-6-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

Setting the reserved region entries to the appropriate node ID means
that they can be used to establish the node to which we should add
hotplugged CXL memory within a CXL Fixed Memory Window.

Signed-off-by: Jonathan Cameron
---
 drivers/base/arch_numa.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 0630efb696ab..568dbabeb636 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -208,6 +208,13 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
 			start, (end - 1), nid);
 		return ret;
 	}
+	/* Also set the nid on matching memblock.reserved regions */
+	ret = memblock_set_node(start, (end - start), &memblock.reserved, nid);
+	if (ret < 0) {
+		pr_err("memblock [0x%llx - 0x%llx] failed to add on node %d\n",
+		       start, (end - 1), nid);
+		return ret;
+	}

 	node_set(nid, numa_nodes_parsed);
 	return ret;

From patchwork Wed May 29 17:12:34 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679310
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport,
    Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match.
Date: Wed, 29 May 2024 18:12:34 +0100
Message-ID: <20240529171236.32002-7-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

CXL memory hotplug relies on additional NUMA nodes being created for any CXL fixed memory window for which system firmware has not already created a suitable one. To detect whether system firmware has created one, look for any normal memblock that overlaps the fixed memory window and has a NUMA node (nid) set. If one is found, add a region with the same nid to memblock.reserved so we can match it later when CXL memory is hotplugged. If none is found, add a region anyway, because a suitable NUMA node will be set later; for now use NUMA_NO_NODE.

Signed-off-by: Jonathan Cameron
---
 arch/arm64/mm/init.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 6a2f21b1bb58..27941f22db1c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -50,6 +50,32 @@
 
 #ifdef CONFIG_NUMA
 
+/*
+ * Scan existing memblocks and if this region overlaps with a region with
+ * a nid set, add a reserved memblock.
+ */
+int __init numa_fill_memblks(u64 start, u64 end)
+{
+	struct memblock_region *region;
+
+	for_each_mem_region(region) {
+		int nid = memblock_get_region_node(region);
+
+		if (nid == NUMA_NO_NODE)
+			continue;
+		if (!(end < region->base || start >= region->base + region->size)) {
+			memblock_add_reserved_node(start, end - start, nid,
+						   MEMBLOCK_RSRV_NOINIT);
+			return 0;
+		}
+	}
+
+	memblock_add_reserved_node(start, end - start, NUMA_NO_NODE,
+				   MEMBLOCK_RSRV_NOINIT);
+
+	return NUMA_NO_MEMBLK;
+}
+
 static int __memory_add_physaddr_to_nid(u64 addr)
 {
 	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);

From patchwork Wed May 29 17:12:35 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679311
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport, Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows.
Date: Wed, 29 May 2024 18:12:35 +0100
Message-ID: <20240529171236.32002-8-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

One reported platform uses this nonsensical entry to represent a disabled CFMWS. The acpi_cxl driver already correctly errors out on seeing this, but that leaves an additional confusing node in /sys/devices/system/node/possible and wastes some space.
Signed-off-by: Jonathan Cameron
---
 drivers/acpi/numa/srat.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
index e3f26e71637a..28c963d5c51f 100644
--- a/drivers/acpi/numa/srat.c
+++ b/drivers/acpi/numa/srat.c
@@ -329,6 +329,11 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
 	int node;
 
 	cfmws = (struct acpi_cedt_cfmws *)header;
+
+	/* At least one firmware reports disabled entries with size 0 */
+	if (cfmws->window_size == 0)
+		return 0;
+
 	start = cfmws->base_hpa;
 	end = cfmws->base_hpa + cfmws->window_size;

From patchwork Wed May 29 17:12:36 2024
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 13679316
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport, Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
Date: Wed, 29 May 2024 18:12:36 +0100
Message-ID: <20240529171236.32002-9-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>
I'm not sure what this call is balancing, but if it is necessary then the reserved memblock approach can't be used to stash NUMA node assignments: after the first add/remove cycle the entry is dropped, so it is not available if memory is re-added at the same HPA. This patch is here to hopefully spur comments on what this call is there for!

Signed-off-by: Jonathan Cameron
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 431b1f6753c0..3d8dd4749dfc 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
 	}
 
 	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
-		memblock_phys_free(start, size);
+		// memblock_phys_free(start, size);
 		memblock_remove(start, size);
 	}