From patchwork Thu Apr 20 11:25:05 2023
X-Patchwork-Submitter: Henry Wang
X-Patchwork-Id: 13218532
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu, Henry Wang
Subject: [PATCH v3 01/17] xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
Date: Thu, 20 Apr 2023 19:25:05 +0800
Message-Id: <20230420112521.3272732-2-Henry.Wang@arm.com>
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>

From: Wei Chen

A memory range described in the device tree cannot be split across multiple nodes, and if you have more than 64 nodes you are very likely to need far more than 2 regions per node. The default NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) therefore makes no sense on Arm.

So, for Arm, simply define NR_NODE_MEMBLKS as an alias of NR_MEM_BANKS. In the future NR_MEM_BANKS will be user-configurable via Kconfig, but for now leave NR_MEM_BANKS as 128 on Arm. This avoids having different ways of defining the value depending on NUMA vs non-NUMA.

Further discussion can be found here[1].

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html

Signed-off-by: Wei Chen
Signed-off-by: Henry Wang
Acked-by: Jan Beulich
---
Background discussion can be found in [1] and [2]:

[1] https://lists.xenproject.org/archives/html/xen-devel/2023-01/msg00595.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html

v2 -> v3:
1. No change.

v1 -> v2:
1. Add code comments to explain using NR_MEM_BANKS for Arm.
2. Refine commit message.
---
 xen/arch/arm/include/asm/numa.h | 19 ++++++++++++++++++-
 xen/include/xen/numa.h          |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e2bee2bd82..7d6ae36a19 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -3,9 +3,26 @@

 #include

+#include
+
 typedef u8 nodeid_t;

-#ifndef CONFIG_NUMA
+#ifdef CONFIG_NUMA
+
+/*
+ * It is very likely that if you have more than 64 nodes, you may
+ * need a lot more than 2 regions per node. So, for Arm, we would
+ * just define NR_NODE_MEMBLKS as an alias of NR_MEM_BANKS.
+ * And in the future NR_MEM_BANKS will be bumped for new platforms,
+ * but for now leave NR_MEM_BANKS as it is on Arm. This avoids
+ * having different ways of defining the value depending on NUMA
+ * vs non-NUMA.
+ *
+ * Further discussion can be found here:
+ * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
+ */
+#define NR_NODE_MEMBLKS NR_MEM_BANKS
+
+#else

 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0

diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 29b8c2df89..b86d0851fc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -13,7 +13,16 @@
 #define MAX_NUMNODES 1
 #endif

+/*
+ * Some architectures may have different considerations for the
+ * number of node memory blocks. They can define their own
+ * NR_NODE_MEMBLKS in asm/numa.h to reflect their architectural
+ * implementation. If the arch does not have a specific value,
+ * the following default NR_NODE_MEMBLKS will be used.
+ */
+#ifndef NR_NODE_MEMBLKS
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
+#endif

 #define vcpu_to_node(v) (cpu_to_node((v)->processor))