From patchwork Wed Jun 22 08:25:09 2022
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 12890306
From: "Aneesh Kumar K.V"
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Wei Xu, Huang Ying, Yang Shi, Davidlohr Bueso, Tim C Chen, Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams, "Aneesh Kumar K.V"
Subject: [PATCH v7 08/12] mm/demotion: Add pg_data_t member to track node memory tier details
Date: Wed, 22 Jun 2022 13:55:09 +0530
Message-Id: <20220622082513.467538-9-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20220622082513.467538-1-aneesh.kumar@linux.ibm.com>
References: <20220622082513.467538-1-aneesh.kumar@linux.ibm.com>
Also update the different helpers to use NODE_DATA()->memtier. Since a node's memtier can change when the NUMA node is reassigned to a different memory tier, accessing NODE_DATA()->memtier needs to happen under an RCU read lock or memory_tier_lock.
Signed-off-by: Aneesh Kumar K.V
Reported-by: kernel test robot
---
 include/linux/memory-tiers.h |  11 ++++
 include/linux/mmzone.h       |   3 +
 mm/memory-tiers.c            | 104 +++++++++++++++++++++++++----------
 3 files changed, 89 insertions(+), 29 deletions(-)

diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 453f6e5d357c..705b63ee31d5 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -6,6 +6,9 @@
 
 #ifdef CONFIG_NUMA
 
+#include 
+#include 
+
 #define MEMORY_TIER_HBM_GPU	300
 #define MEMORY_TIER_DRAM	200
 #define MEMORY_TIER_PMEM	100
@@ -13,6 +16,12 @@
 #define DEFAULT_MEMORY_TIER	MEMORY_TIER_DRAM
 #define MAX_MEMORY_TIER_ID	400
 
+struct memory_tier {
+	struct list_head list;
+	struct device dev;
+	nodemask_t nodelist;
+};
+
 extern bool numa_demotion_enabled;
 int node_create_and_set_memory_tier(int node, int tier);
 #ifdef CONFIG_MIGRATION
@@ -25,6 +34,8 @@ static inline int next_demotion_node(int node)
 
 #endif
 
 int node_get_memory_tier_id(int node);
 int node_update_memory_tier(int node, int tier);
+struct memory_tier *node_get_memory_tier(int node);
+void node_put_memory_tier(struct memory_tier *memtier);
 #else
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aab70355d64f..1f846cb723fd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -928,6 +928,9 @@ typedef struct pglist_data {
 	/* Per-node vmstats */
 	struct per_cpu_nodestat __percpu *per_cpu_nodestats;
 	atomic_long_t		vm_stat[NR_VM_NODE_STAT_ITEMS];
+#ifdef CONFIG_NUMA
+	struct memory_tier *memtier;
+#endif
 } pg_data_t;
 
 #define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index b7cb368cb9c0..6a2476faf13a 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -1,22 +1,15 @@
 // SPDX-License-Identifier: GPL-2.0
 #include 
-#include 
-#include 
 #include 
 #include 
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include "internal.h"
 
-struct memory_tier {
-	struct list_head list;
-	struct device dev;
-	nodemask_t nodelist;
-};
-
 struct demotion_nodes {
 	nodemask_t preferred;
 };
@@ -120,7 +113,7 @@ static void memory_tier_device_release(struct device *dev)
 {
 	struct memory_tier *tier = to_memory_tier(dev);
 
-	kfree(tier);
+	kfree_rcu(tier);
 }
 
 static void insert_memory_tier(struct memory_tier *memtier)
@@ -176,13 +169,18 @@ static void unregister_memory_tier(struct memory_tier *memtier)
 
 static struct memory_tier *__node_get_memory_tier(int node)
 {
-	struct memory_tier *memtier;
+	pg_data_t *pgdat;
 
-	list_for_each_entry(memtier, &memory_tiers, list) {
-		if (node_isset(node, memtier->nodelist))
-			return memtier;
-	}
-	return NULL;
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return NULL;
+	/*
+	 * Since we hold memory_tier_lock, we can avoid
+	 * RCU read locks when accessing the details. No
+	 * parallel updates are possible here.
+	 */
+	return rcu_dereference_check(pgdat->memtier,
+				     lockdep_is_held(&memory_tier_lock));
 }
 
 static struct memory_tier *__get_memory_tier_from_id(int id)
@@ -196,6 +194,33 @@ static struct memory_tier *__get_memory_tier_from_id(int id)
 	return NULL;
 }
 
+/*
+ * Called with memory_tier_lock. Hence the device references cannot
+ * be dropped during this function.
+ */
+static void memtier_node_set(int node, struct memory_tier *memtier)
+{
+	pg_data_t *pgdat;
+	struct memory_tier *current_memtier;
+
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return;
+	/*
+	 * Make sure we mark the memtier NULL before we assign the new memory tier
+	 * to the NUMA node. This make sure that anybody looking at NODE_DATA
+	 * finds a NULL memtier or the one which is still valid.
+	 */
+	current_memtier = rcu_dereference_check(pgdat->memtier,
+						lockdep_is_held(&memory_tier_lock));
+	rcu_assign_pointer(pgdat->memtier, NULL);
+	if (current_memtier)
+		node_clear(node, current_memtier->nodelist);
+	synchronize_rcu();
+	node_set(node, memtier->nodelist);
+	rcu_assign_pointer(pgdat->memtier, memtier);
+}
+
 static int __node_create_and_set_memory_tier(int node, int tier)
 {
 	int ret = 0;
@@ -209,7 +234,7 @@ static int __node_create_and_set_memory_tier(int node, int tier)
 			goto out;
 		}
 	}
-	node_set(node, memtier->nodelist);
+	memtier_node_set(node, memtier);
 out:
 	return ret;
 }
@@ -231,14 +256,7 @@ int node_create_and_set_memory_tier(int node, int tier)
 	if (current_tier->dev.id == tier)
 		goto out;
 
-	node_clear(node, current_tier->nodelist);
-
 	ret = __node_create_and_set_memory_tier(node, tier);
-	if (ret) {
-		/* reset it back to older tier */
-		node_set(node, current_tier->nodelist);
-		goto out;
-	}
 
 	if (nodes_empty(current_tier->nodelist))
 		unregister_memory_tier(current_tier);
@@ -260,7 +278,7 @@ static int __node_set_memory_tier(int node, int tier)
 		ret = -EINVAL;
 		goto out;
 	}
-	node_set(node, memtier->nodelist);
+	memtier_node_set(node, memtier);
 out:
 	return ret;
 }
@@ -316,10 +334,7 @@ int node_update_memory_tier(int node, int tier)
 	if (!current_tier || current_tier->dev.id == tier)
 		goto out;
 
-	node_clear(node, current_tier->nodelist);
-
 	ret = __node_create_and_set_memory_tier(node, tier);
-
 	if (nodes_empty(current_tier->nodelist))
 		unregister_memory_tier(current_tier);
 
@@ -330,6 +345,34 @@ int node_update_memory_tier(int node, int tier)
 	return ret;
 }
 
+/*
+ * lockless access to memory tier of a NUMA node.
+ */
+struct memory_tier *node_get_memory_tier(int node)
+{
+	pg_data_t *pgdat;
+	struct memory_tier *memtier;
+
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return NULL;
+
+	rcu_read_lock();
+	memtier = rcu_dereference(pgdat->memtier);
+	if (!memtier)
+		goto out;
+
+	get_device(&memtier->dev);
+out:
+	rcu_read_unlock();
+	return memtier;
+}
+
+void node_put_memory_tier(struct memory_tier *memtier)
+{
+	put_device(&memtier->dev);
+}
+
 #ifdef CONFIG_MIGRATION
 /**
  * next_demotion_node() - Get the next node in the demotion path
@@ -546,7 +589,7 @@ static const struct attribute_group *memory_tier_attr_groups[] = {
 
 static int __init memory_tier_init(void)
 {
-	int ret;
+	int ret, node;
 	struct memory_tier *memtier;
 
 	ret = subsys_system_register(&memory_tier_subsys, memory_tier_attr_groups);
@@ -567,7 +610,10 @@ static int __init memory_tier_init(void)
 			   __func__, PTR_ERR(memtier));
 
 	/* CPU only nodes are not part of memory tiers. */
-	memtier->nodelist = node_states[N_MEMORY];
+	for_each_node_state(node, N_MEMORY) {
+		rcu_assign_pointer(NODE_DATA(node)->memtier, memtier);
+		node_set(node, memtier->nodelist);
+	}
 	mutex_unlock(&memory_tier_lock);
 	migrate_on_reclaim_init();