From patchwork Mon Jul 4 07:06:08 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 12904773
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Wei Xu, Huang Ying, Yang Shi, Davidlohr Bueso, Tim C Chen,
    Michal Hocko, Linux Kernel Mailing List, Hesham Almatary,
    Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams,
    Johannes Weiner, jvgediya.oss@gmail.com,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v8 08/12] mm/demotion: Add pg_data_t member to track node memory tier details
Date: Mon, 4 Jul 2022 12:36:08 +0530
Message-Id: <20220704070612.299585-9-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220704070612.299585-1-aneesh.kumar@linux.ibm.com>
References: <20220704070612.299585-1-aneesh.kumar@linux.ibm.com>
MIME-Version: 1.0

Also update different helpers to use NODE_DATA()->memtier. Since a
node's memtier can change when the NUMA node is reassigned to a
different memory tier, accessing NODE_DATA()->memtier needs to happen
under an rcu read lock or memory_tier_lock.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 include/linux/memory-tiers.h |  11 ++++
 include/linux/mmzone.h       |   3 +
 mm/memory-tiers.c            | 104 +++++++++++++++++++++++++----------
 3 files changed, 89 insertions(+), 29 deletions(-)

diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 453f6e5d357c..705b63ee31d5 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -6,6 +6,9 @@
 
 #ifdef CONFIG_NUMA
 
+#include
+#include
+
 #define MEMORY_TIER_HBM_GPU	300
 #define MEMORY_TIER_DRAM	200
 #define MEMORY_TIER_PMEM	100
@@ -13,6 +16,12 @@
 #define DEFAULT_MEMORY_TIER	MEMORY_TIER_DRAM
 #define MAX_MEMORY_TIER_ID	400
 
+struct memory_tier {
+	struct list_head list;
+	struct device dev;
+	nodemask_t nodelist;
+};
+
 extern bool numa_demotion_enabled;
 int node_create_and_set_memory_tier(int node, int tier);
 #ifdef CONFIG_MIGRATION
@@ -25,6 +34,8 @@ static inline int next_demotion_node(int node)
 #endif
 
 int node_get_memory_tier_id(int node);
 int node_update_memory_tier(int node, int tier);
+struct memory_tier *node_get_memory_tier(int node);
+void node_put_memory_tier(struct memory_tier *memtier);
 
 #else
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aab70355d64f..353812495a70 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -928,6 +928,9 @@ typedef struct pglist_data {
 	/* Per-node vmstats */
 	struct per_cpu_nodestat __percpu *per_cpu_nodestats;
 	atomic_long_t		vm_stat[NR_VM_NODE_STAT_ITEMS];
+#ifdef CONFIG_NUMA
+	struct memory_tier __rcu *memtier;
+#endif
 } pg_data_t;
 
 #define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index b7cb368cb9c0..6a2476faf13a 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -1,22 +1,15 @@
 // SPDX-License-Identifier: GPL-2.0
 #include
-#include
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 
 #include "internal.h"
 
-struct memory_tier {
-	struct list_head list;
-	struct device dev;
-	nodemask_t nodelist;
-};
-
 struct demotion_nodes {
 	nodemask_t preferred;
 };
@@ -120,7 +113,7 @@ static void memory_tier_device_release(struct device *dev)
 {
 	struct memory_tier *tier = to_memory_tier(dev);
 
-	kfree(tier);
+	kfree_rcu(tier);
 }
 
 static void insert_memory_tier(struct memory_tier *memtier)
@@ -176,13 +169,18 @@ static void unregister_memory_tier(struct memory_tier *memtier)
 
 static struct memory_tier *__node_get_memory_tier(int node)
 {
-	struct memory_tier *memtier;
+	pg_data_t *pgdat;
 
-	list_for_each_entry(memtier, &memory_tiers, list) {
-		if (node_isset(node, memtier->nodelist))
-			return memtier;
-	}
-	return NULL;
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return NULL;
+	/*
+	 * Since we hold memory_tier_lock, we can avoid
+	 * RCU read locks when accessing the details. No
+	 * parallel updates are possible here.
+	 */
+	return rcu_dereference_check(pgdat->memtier,
+				     lockdep_is_held(&memory_tier_lock));
 }
 
 static struct memory_tier *__get_memory_tier_from_id(int id)
@@ -196,6 +194,33 @@
 	return NULL;
 }
 
+/*
+ * Called with memory_tier_lock. Hence the device references cannot
+ * be dropped during this function.
+ */
+static void memtier_node_set(int node, struct memory_tier *memtier)
+{
+	pg_data_t *pgdat;
+	struct memory_tier *current_memtier;
+
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return;
+	/*
+	 * Make sure we mark the memtier NULL before we assign the new memory tier
+	 * to the NUMA node. This makes sure that anybody looking at NODE_DATA
+	 * finds a NULL memtier or the one which is still valid.
+	 */
+	current_memtier = rcu_dereference_check(pgdat->memtier,
+						lockdep_is_held(&memory_tier_lock));
+	rcu_assign_pointer(pgdat->memtier, NULL);
+	if (current_memtier)
+		node_clear(node, current_memtier->nodelist);
+	synchronize_rcu();
+	node_set(node, memtier->nodelist);
+	rcu_assign_pointer(pgdat->memtier, memtier);
+}
+
 static int __node_create_and_set_memory_tier(int node, int tier)
 {
 	int ret = 0;
@@ -209,7 +234,7 @@ static int __node_create_and_set_memory_tier(int node, int tier)
 			goto out;
 		}
 	}
-	node_set(node, memtier->nodelist);
+	memtier_node_set(node, memtier);
 out:
 	return ret;
 }
@@ -231,14 +256,7 @@ int node_create_and_set_memory_tier(int node, int tier)
 	if (current_tier->dev.id == tier)
 		goto out;
 
-	node_clear(node, current_tier->nodelist);
-
 	ret = __node_create_and_set_memory_tier(node, tier);
-	if (ret) {
-		/* reset it back to older tier */
-		node_set(node, current_tier->nodelist);
-		goto out;
-	}
 
 	if (nodes_empty(current_tier->nodelist))
 		unregister_memory_tier(current_tier);
@@ -260,7 +278,7 @@ static int __node_set_memory_tier(int node, int tier)
 		ret = -EINVAL;
 		goto out;
 	}
-	node_set(node, memtier->nodelist);
+	memtier_node_set(node, memtier);
 out:
 	return ret;
 }
@@ -316,10 +334,7 @@ int node_update_memory_tier(int node, int tier)
 	if (!current_tier || current_tier->dev.id == tier)
 		goto out;
 
-	node_clear(node, current_tier->nodelist);
-
 	ret = __node_create_and_set_memory_tier(node, tier);
-
 	if (nodes_empty(current_tier->nodelist))
 		unregister_memory_tier(current_tier);
 
@@ -330,6 +345,34 @@ int node_update_memory_tier(int node, int tier)
 	return ret;
 }
 
+/*
+ * lockless access to memory tier of a NUMA node.
+ */
+struct memory_tier *node_get_memory_tier(int node)
+{
+	pg_data_t *pgdat;
+	struct memory_tier *memtier;
+
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return NULL;
+
+	rcu_read_lock();
+	memtier = rcu_dereference(pgdat->memtier);
+	if (!memtier)
+		goto out;
+
+	get_device(&memtier->dev);
+out:
+	rcu_read_unlock();
+	return memtier;
+}
+
+void node_put_memory_tier(struct memory_tier *memtier)
+{
+	put_device(&memtier->dev);
+}
+
 #ifdef CONFIG_MIGRATION
 /**
  * next_demotion_node() - Get the next node in the demotion path
@@ -546,7 +589,7 @@ static const struct attribute_group *memory_tier_attr_groups[] = {
 
 static int __init memory_tier_init(void)
 {
-	int ret;
+	int ret, node;
 	struct memory_tier *memtier;
 
 	ret = subsys_system_register(&memory_tier_subsys, memory_tier_attr_groups);
@@ -567,7 +610,10 @@ static int __init memory_tier_init(void)
 		      __func__, PTR_ERR(memtier));
 
 	/* CPU only nodes are not part of memory tiers. */
-	memtier->nodelist = node_states[N_MEMORY];
+	for_each_node_state(node, N_MEMORY) {
+		rcu_assign_pointer(NODE_DATA(node)->memtier, memtier);
+		node_set(node, memtier->nodelist);
+	}
 	mutex_unlock(&memory_tier_lock);
 
 	migrate_on_reclaim_init();
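
To illustrate the locking rule from the changelog: a reader that cannot
take memory_tier_lock is expected to go through the new
node_get_memory_tier()/node_put_memory_tier() helpers, which take the
RCU read lock and a device reference internally. A minimal usage sketch
(illustration only, not part of the patch; the caller name is made up):

	/* Hypothetical caller, for illustration only. */
	static void example_inspect_node_tier(int node)
	{
		struct memory_tier *memtier;

		memtier = node_get_memory_tier(node);	/* rcu_read_lock() + get_device() inside */
		if (!memtier)
			return;		/* no pgdat, or node not assigned to any tier */

		pr_debug("node %d is in memory tier %d\n", node, memtier->dev.id);

		node_put_memory_tier(memtier);	/* put_device(); tier may later be freed via kfree_rcu() */
	}

Writers, by contrast, keep relying on memory_tier_lock and go through
memtier_node_set(), as in the hunks above.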