From patchwork Tue Oct 27 06:32:12 2020
From: Huang Ying
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman, Ingo Molnar, Dave Hansen, Dan Williams
Subject: [RFC -V4 1/6] autonuma: Optimize page placement for memory tiering system
Date: Tue, 27 Oct 2020 14:32:12 +0800
Message-Id: <20201027063217.211096-2-ying.huang@intel.com>
In-Reply-To: <20201027063217.211096-1-ying.huang@intel.com>
References: <20201027063217.211096-1-ying.huang@intel.com>

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of such a machine can be called a memory tiering
system, because the performance of the different memory types usually
differs.

In such a system, because of changes in the memory access pattern
etc., some pages in the slow memory may become globally hot.  So in
this patch, the AutoNUMA mechanism is enhanced to dynamically optimize
the page placement among the different memory types according to
hot/cold.

In a typical memory tiering system, there are CPUs, fast memory, and
slow memory in each physical NUMA node.  The CPUs and the fast memory
are put in one logical node (called the fast memory node), while the
slow memory is put in another (fake) logical node (called the slow
memory node).  That is, the fast memory is regarded as local while the
slow memory is regarded as remote.  So it's possible for recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing AutoNUMA mechanism.

The original AutoNUMA mechanism stops migrating pages if the free
memory of the target node would drop below the high watermark.  This
is a reasonable policy if there's only one memory type.  But it makes
the original AutoNUMA mechanism almost useless for optimizing page
placement among different memory types.  Details are as follows.

It is common for the working-set size of the workload to be larger
than the size of the fast memory nodes; otherwise it would be
unnecessary to use the slow memory at all.  So in the common case
there are almost never enough free pages in the fast memory nodes, and
the globally hot pages in the slow memory node cannot be promoted to
the fast memory node.  To solve this issue, we have two choices as
follows,

a. Ignore the free pages watermark check when promoting hot pages
   from the slow memory node to the fast memory node.  This will
   create some memory pressure in the fast memory node and thus
   trigger memory reclaim, so that the cold pages in the fast memory
   node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little more (about 10MB) than the high watermark.
   Then, if the free pages of the fast memory node reach the high
   watermark and some hot pages need to be promoted, kswapd of the
   fast memory node will be woken up to demote some cold pages from
   the fast memory node to the slow memory node.  This frees some
   extra space in the fast memory node, so the hot pages in the slow
   memory node can be promoted to the fast memory node.

Choice "a" will create memory pressure in the fast memory node.
If the memory pressure of the workload is high, the memory pressure may become so high that the memory allocation latency of the workload is influenced, e.g. the direct reclaiming may be triggered. The choice "b" works much better at this aspect. If the memory pressure of the workload is high, the hot pages promotion will stop earlier because its allocation watermark is higher than that of the normal memory allocation. So in this patch, choice "b" is implemented. In addition to the original page placement optimization among sockets, the AutoNUMA mechanism is extended to be used to optimize page placement according to hot/cold among different memory types. So the sysctl user space interface (numa_balancing) is extended in a backward compatible way as follow, so that the users can enable/disable these functionality individually. The sysctl is converted from a Boolean value to a bits field. The definition of the flags is, - 0x0: NUMA_BALANCING_DISABLED - 0x1: NUMA_BALANCING_NORMAL - 0x2: NUMA_BALANCING_MEMORY_TIERING TODO: - Update ABI document: Documentation/sysctl/kernel.txt Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Michal Hocko Cc: Rik van Riel Cc: Mel Gorman Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Dave Hansen Cc: Dan Williams Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- include/linux/sched/sysctl.h | 5 +++++ kernel/sched/core.c | 9 +++------ kernel/sysctl.c | 7 ++++--- mm/migrate.c | 30 +++++++++++++++++++++++++++--- mm/vmscan.c | 15 +++++++++++++++ 5 files changed, 54 insertions(+), 12 deletions(-) diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h index 3c31ba88aca5..9d85450bc30a 100644 --- a/include/linux/sched/sysctl.h +++ b/include/linux/sched/sysctl.h @@ -39,6 +39,11 @@ enum sched_tunable_scaling { }; extern enum sched_tunable_scaling sysctl_sched_tunable_scaling; +#define NUMA_BALANCING_DISABLED 0x0 +#define NUMA_BALANCING_NORMAL 0x1 +#define NUMA_BALANCING_MEMORY_TIERING 0x2 + +extern int sysctl_numa_balancing_mode; extern unsigned int sysctl_numa_balancing_scan_delay; extern unsigned int sysctl_numa_balancing_scan_period_min; extern unsigned int sysctl_numa_balancing_scan_period_max; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 2d95dc3f4644..bccf8eacc9b7 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3106,6 +3106,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p) } DEFINE_STATIC_KEY_FALSE(sched_numa_balancing); +int sysctl_numa_balancing_mode; #ifdef CONFIG_NUMA_BALANCING @@ -3121,20 +3122,16 @@ void set_numabalancing_state(bool enabled) int sysctl_numa_balancing(struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { - struct ctl_table t; int err; - int state = static_branch_likely(&sched_numa_balancing); if (write && !capable(CAP_SYS_ADMIN)) return -EPERM; - t = *table; - t.data = &state; - err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos); + err = proc_dointvec_minmax(table, write, buffer, lenp, ppos); if (err < 0) return err; if (write) - set_numabalancing_state(state); + set_numabalancing_state(*(int *)table->data); return err; } #endif diff --git a/kernel/sysctl.c b/kernel/sysctl.c index afad085960b8..7d5f12d86489 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -113,6 +113,7 @@ static int sixty = 60; static int __maybe_unused neg_one = -1; static int __maybe_unused two = 2; +static int __maybe_unused three = 3; static int __maybe_unused four = 4; static unsigned long zero_ul; static unsigned long one_ul = 1; @@ -1755,12 +1756,12 @@ 
static struct ctl_table kern_table[] = { }, { .procname = "numa_balancing", - .data = NULL, /* filled in by handler */ - .maxlen = sizeof(unsigned int), + .data = &sysctl_numa_balancing_mode, + .maxlen = sizeof(int), .mode = 0644, .proc_handler = sysctl_numa_balancing, .extra1 = SYSCTL_ZERO, - .extra2 = SYSCTL_ONE, + .extra2 = &three, }, #endif /* CONFIG_NUMA_BALANCING */ #endif /* CONFIG_SCHED_DEBUG */ diff --git a/mm/migrate.c b/mm/migrate.c index fc6ee1b183d8..2079a69b40ed 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -50,6 +50,7 @@ #include #include #include +#include #include @@ -2038,13 +2039,36 @@ static struct page *alloc_misplaced_dst_page(struct page *page, static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page) { - int page_lru; + int page_lru, nr = compound_nr(page), order = compound_order(page); - VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page); + VM_BUG_ON_PAGE(order && !PageTransHuge(page), page); /* Avoid migrating to a node that is nearly full */ - if (!migrate_balanced_pgdat(pgdat, compound_nr(page))) + if (!migrate_balanced_pgdat(pgdat, nr)) { + int migration_node, z; + pg_data_t *migration_pgdat; + + if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) || + !(node_reclaim_mode & RECLAIM_MIGRATE)) + return 0; + /* + * The slow memory node need to have enough + * free pages to demote the cold pages in the + * fast memory node to it. + */ + migration_node = next_demotion_node(pgdat->node_id); + if (migration_node == NUMA_NO_NODE) + return 0; + migration_pgdat = NODE_DATA(migration_node); + if (!migrate_balanced_pgdat(migration_pgdat, nr)) + return 0; + for (z = pgdat->nr_zones - 1; z >= 0; z--) { + if (populated_zone(pgdat->node_zones + z)) + break; + } + wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE); return 0; + } if (isolate_lru_page(page)) return 0; diff --git a/mm/vmscan.c b/mm/vmscan.c index 4a4c9941c969..0a54ad7e74cb 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -58,6 +58,7 @@ #include #include +#include #include "internal.h" @@ -3542,6 +3543,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx) return false; } +/* + * Keep the free pages on fast memory node a little more than the high + * watermark to accommodate the promoted pages. 
+ */ +#define NUMA_BALANCING_ADDON_WATERMARK (10UL * 1024 * 1024 >> PAGE_SHIFT) + /* * Returns true if there is an eligible zone balanced for the request order * and highest_zoneidx @@ -3563,6 +3570,14 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx) continue; mark = high_wmark_pages(zone); + if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && + next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) { + unsigned long addon_mark; + + addon_mark = min(NUMA_BALANCING_ADDON_WATERMARK, + pgdat->node_present_pages >> 6); + mark += addon_mark; + } if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx)) return true; } From patchwork Tue Oct 27 06:32:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Ying" X-Patchwork-Id: 11859471 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8E314921 for ; Tue, 27 Oct 2020 06:33:05 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 54A042080A for ; Tue, 27 Oct 2020 06:33:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 54A042080A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 399136B0062; Tue, 27 Oct 2020 02:33:04 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 321C26B006C; Tue, 27 Oct 2020 02:33:04 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1C0316B006E; Tue, 27 Oct 2020 02:33:04 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0147.hostedemail.com [216.40.44.147]) by kanga.kvack.org (Postfix) with ESMTP id DE2C66B0062 for ; Tue, 27 Oct 2020 02:33:03 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 6C8E61EE6 for ; Tue, 27 Oct 2020 06:33:03 +0000 (UTC) X-FDA: 77416737846.09.copy42_59019f22727a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 50D2E180AD807 for ; Tue, 27 Oct 2020 06:33:03 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ying.huang@intel.com,,RULES_HIT:30034:30054:30064:30070,0,RBL:134.134.136.100:@intel.com:.lbl8.mailshell.net-62.18.0.100 64.95.201.95;04y8p7eby94cnf3ae69gczi4xta99ocbfw9t9aamigegsdzrz1ypnxgnxxt4af4.zgyehohfo7csbrumihhe1bqe7ohkbjuwe7sj5jpf5azbif16jc7kimpx387mac3.a-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: copy42_59019f22727a X-Filterd-Recvd-Size: 6154 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by imf07.hostedemail.com (Postfix) with ESMTP for ; Tue, 27 Oct 2020 06:33:02 +0000 (UTC) IronPort-SDR: d8ZqoJNpqBA+4D6OWdp0R4014l2dIiYxyhsUHGMDPKoE1ze+2kqpcC2wCi4OHPvUb0DikaRYIX xOqH4DSRx7Ww== X-IronPort-AV: E=McAfee;i="6000,8403,9786"; a="232221166" X-IronPort-AV: E=Sophos;i="5.77,422,1596524400"; d="scan'208";a="232221166" X-Amp-Result: 
SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 26 Oct 2020 23:33:00 -0700 IronPort-SDR: FIfBr9SXYyX/GFx/a6+3sga3IVJNnepT8d8YUGlJ9WHk7AOwhn+HQWHBYZ20K/j91IH57IBz3l Yzrrj+JsKP1w== X-IronPort-AV: E=Sophos;i="5.77,422,1596524400"; d="scan'208";a="535666696" Received: from lzhengha-mobl.ccr.corp.intel.com (HELO yhuang-mobile.ccr.corp.intel.com) ([10.254.213.46]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 26 Oct 2020 23:32:57 -0700 From: Huang Ying To: Peter Zijlstra Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying , Dave Hansen , Andrew Morton , Michal Hocko , Rik van Riel , Mel Gorman , Ingo Molnar , Dan Williams Subject: [RFC -V4 2/6] autonuma, memory tiering: Skip to scan fast memory Date: Tue, 27 Oct 2020 14:32:13 +0800 Message-Id: <20201027063217.211096-3-ying.huang@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201027063217.211096-1-ying.huang@intel.com> References: <20201027063217.211096-1-ying.huang@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If the AutoNUMA isn't used to optimize the page placement among sockets but only among memory types, the hot pages in the fast memory node couldn't be migrated (promoted) to anywhere. So it's unnecessary to scan the pages in the fast memory node via changing their PTE/PMD mapping to be PROT_NONE. So that the page faults could be avoided too. In the test, if only the memory tiering AutoNUMA mode is enabled, the number of the AutoNUMA hint faults for the DRAM node is reduced to almost 0 with the patch. While the benchmark score doesn't change visibly. Signed-off-by: "Huang, Ying" Suggested-by: Dave Hansen Cc: Andrew Morton Cc: Michal Hocko Cc: Rik van Riel Cc: Mel Gorman Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Dan Williams Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- include/linux/node.h | 5 +++++ mm/huge_memory.c | 30 +++++++++++++++++++++--------- mm/mprotect.c | 13 ++++++++++++- 3 files changed, 38 insertions(+), 10 deletions(-) diff --git a/include/linux/node.h b/include/linux/node.h index f7a539390c81..ac0e7a45edff 100644 --- a/include/linux/node.h +++ b/include/linux/node.h @@ -189,4 +189,9 @@ static inline int next_demotion_node(int node) } #endif +static inline bool node_is_toptier(int node) +{ + return node_state(node, N_CPU); +} + #endif /* _LINUX_NODE_H_ */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index faadc449cca5..a8c2ddd0cc4a 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -1795,17 +1796,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, } #endif - /* - * Avoid trapping faults against the zero page. The read-only - * data is likely to be read-cached on the local CPU and - * local/remote hits to the zero page are not interesting. - */ - if (prot_numa && is_huge_zero_pmd(*pmd)) - goto unlock; + if (prot_numa) { + struct page *page; + /* + * Avoid trapping faults against the zero page. The read-only + * data is likely to be read-cached on the local CPU and + * local/remote hits to the zero page are not interesting. 
+ */ + if (is_huge_zero_pmd(*pmd)) + goto unlock; - if (prot_numa && pmd_protnone(*pmd)) - goto unlock; + if (pmd_protnone(*pmd)) + goto unlock; + page = pmd_page(*pmd); + /* + * Skip scanning top tier node if normal numa + * balancing is disabled + */ + if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) && + node_is_toptier(page_to_nid(page))) + goto unlock; + } /* * In case prot_numa, we are under mmap_read_lock(mm). It's critical * to not clear pmd intermittently to avoid race with MADV_DONTNEED diff --git a/mm/mprotect.c b/mm/mprotect.c index ce8b8a5eacbb..8abec0c267fa 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include @@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, */ if (prot_numa) { struct page *page; + int nid; /* Avoid TLB flush if possible */ if (pte_protnone(oldpte)) @@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, * Don't mess with PTEs if page is already on the node * a single-threaded process is running on. */ - if (target_node == page_to_nid(page)) + nid = page_to_nid(page); + if (target_node == nid) + continue; + + /* + * Skip scanning top tier node if normal numa + * balancing is disabled + */ + if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) && + node_is_toptier(nid)) continue; } From patchwork Tue Oct 27 06:32:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Ying" X-Patchwork-Id: 11859473 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 92ACF921 for ; Tue, 27 Oct 2020 06:33:08 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3964B2084C for ; Tue, 27 Oct 2020 06:33:08 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3964B2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C22BC6B006C; Tue, 27 Oct 2020 02:33:06 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id BACAE6B006E; Tue, 27 Oct 2020 02:33:06 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A4F066B0070; Tue, 27 Oct 2020 02:33:06 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0247.hostedemail.com [216.40.44.247]) by kanga.kvack.org (Postfix) with ESMTP id 6B2F06B006C for ; Tue, 27 Oct 2020 02:33:06 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id F34658249980 for ; Tue, 27 Oct 2020 06:33:05 +0000 (UTC) X-FDA: 77416737930.01.kiss00_5c048072727a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin01.hostedemail.com (Postfix) with ESMTP id CEE8810047E00 for ; Tue, 27 Oct 2020 06:33:05 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ying.huang@intel.com,,RULES_HIT:30003:30034:30036:30054:30055:30064:30070:30075:30090,0,RBL:134.134.136.100:@intel.com:.lbl8.mailshell.net-64.95.201.95 
From: Huang Ying
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman, Ingo Molnar, Dave Hansen, Dan Williams
Subject: [RFC -V4 3/6] autonuma, memory tiering: Hot page selection with hint page fault latency
Date: Tue, 27 Oct 2020 14:32:14 +0800
Message-Id: <20201027063217.211096-4-ying.huang@intel.com>
In-Reply-To: <20201027063217.211096-1-ying.huang@intel.com>
References: <20201027063217.211096-1-ying.huang@intel.com>

To optimize page placement in a memory tiering system with AutoNUMA,
the hot pages in the slow memory node need to be identified.
Essentially, the original AutoNUMA implementation selects the most
recently accessed (MRU) pages as the hot pages, but this isn't a very
good algorithm for identifying them.  So this patch implements a
better hot page selection algorithm, based on the AutoNUMA page table
scanning and hint page faults as follows,

- When the page tables of the processes are scanned to change PTE/PMD
  to be PROT_NONE, the current time is recorded in struct page as the
  scan time.
- When the page is accessed, a hint page fault occurs.  The scan time
  is read back from struct page, and the hint page fault latency is
  defined as

    hint page fault time - scan time

The shorter the hint page fault latency of a page, the higher the
probability that its access frequency is high.  So the hint page fault
latency is a good estimate of whether a page is hot or cold.

But it's hard to find extra space in struct page to hold the scan
time.  Fortunately, we can reuse some bits used by the original
AutoNUMA.  AutoNUMA uses some bits in struct page to store the
accessing CPU and PID (see page_cpupid_xchg_last()), which are used by
the multi-stage node selection algorithm to avoid migrating pages
shared by several NUMA nodes back and forth.
But for pages in the slow memory node, even if they are shared by
multiple NUMA nodes, as long as the pages are hot they need to be
promoted to the fast memory node.  So the accessing CPU and PID
information are unnecessary for the slow memory pages, and we can
reuse these bits in struct page to record the scan time for them.  For
the fast memory pages, these bits are used as before.

The remaining problem is how to determine the hot threshold.  It's not
easy to do automatically.  So we provide a sysctl knob:
kernel.numa_balancing_hot_threshold_ms.  All pages with a hint page
fault latency below the threshold are considered hot.  The system
administrator can determine the hot threshold from various
information, such as the PMEM bandwidth limit, the average number of
pages that pass the hot threshold, etc.  The default hot threshold is
1 second, which works well in our performance test.

The patch improves the score of the pmbench memory accessing benchmark
with an 80:20 read/write ratio and normal access address distribution
by 11.1%, with 40.4% fewer pages promoted (that is, less overhead), on
a 2-socket Intel server with Optane DC Persistent Memory.

The downside of the patch is that the response time to a change of the
workload hot spot may be much longer.  For example,

- A previously cold memory area becomes hot.
- A hint page fault is triggered, but the hint page fault latency
  isn't shorter than the hot threshold, so the pages are not promoted.
- When the memory area is scanned again, maybe after a scan period,
  the measured hint page fault latency will be shorter than the hot
  threshold and the pages will be promoted.

To mitigate this,

- If there is enough free space in the fast memory node, the hot
  threshold is not used; all pages are promoted upon the hint page
  fault for fast response.
- If fast response is more important for system performance, the
  administrator can set a higher hot threshold.
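The same decision can be sketched outside the kernel.  The stand-alone
C program below is only an illustration of the scheme described above
(the kernel implementation is xchg_page_access_time() /
numa_hint_fault_latency() in the diff that follows); the 16-bit time
field, the helper names, and the fixed 1000 ms threshold are
assumptions made for the example.

/*
 * Illustrative sketch only: models the hot-page test described above
 * in user space.  The kernel stores the scan time in the bits normally
 * used for the last CPU/PID; here a 16-bit field plays that role.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define TIME_MASK        0xffffu  /* assumed width of the stored time */
#define HOT_THRESHOLD_MS 1000u    /* default threshold from this series */

struct page_meta {
        uint16_t last_scan_ms;    /* truncated time of the last PTE scan */
};

static uint16_t now_ms16(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint16_t)((ts.tv_sec * 1000 + ts.tv_nsec / 1000000) & TIME_MASK);
}

/* Called when the page table scanner makes the page PROT_NONE. */
static void record_scan(struct page_meta *pm)
{
        pm->last_scan_ms = now_ms16();
}

/* Called on the hint page fault: a short latency means a hot page. */
static int is_hot(struct page_meta *pm)
{
        uint16_t latency = (now_ms16() - pm->last_scan_ms) & TIME_MASK;

        return latency < HOT_THRESHOLD_MS;
}

int main(void)
{
        struct page_meta pm;

        record_scan(&pm);
        /* A fault arriving shortly after the scan marks the page hot. */
        printf("hot after immediate fault: %d\n", is_hot(&pm));
        return 0;
}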
Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Michal Hocko Cc: Rik van Riel Cc: Mel Gorman Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Dave Hansen Cc: Dan Williams Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- include/linux/mm.h | 29 ++++++++++++++++ include/linux/sched/sysctl.h | 1 + kernel/sched/fair.c | 67 ++++++++++++++++++++++++++++++++++++ kernel/sysctl.c | 7 ++++ mm/huge_memory.c | 13 +++++-- mm/memory.c | 11 +++++- mm/migrate.c | 12 +++++++ mm/mmzone.c | 17 +++++++++ mm/mprotect.c | 8 ++++- 9 files changed, 160 insertions(+), 5 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index b2f370f0b420..5dde5d08ccdd 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1296,6 +1296,18 @@ static inline int page_to_nid(const struct page *page) #endif #ifdef CONFIG_NUMA_BALANCING +/* page access time bits needs to hold at least 4 seconds */ +#define PAGE_ACCESS_TIME_MIN_BITS 12 +#if LAST_CPUPID_SHIFT < PAGE_ACCESS_TIME_MIN_BITS +#define PAGE_ACCESS_TIME_BUCKETS \ + (PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT) +#else +#define PAGE_ACCESS_TIME_BUCKETS 0 +#endif + +#define PAGE_ACCESS_TIME_MASK \ + (LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS) + static inline int cpu_pid_to_cpupid(int cpu, int pid) { return ((cpu & LAST__CPU_MASK) << LAST__PID_SHIFT) | (pid & LAST__PID_MASK); @@ -1338,6 +1350,16 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid) return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK); } +static inline unsigned int xchg_page_access_time(struct page *page, + unsigned int time) +{ + unsigned int last_time; + + last_time = xchg(&page->_last_cpupid, + (time >> PAGE_ACCESS_TIME_BUCKETS) & LAST_CPUPID_MASK); + return last_time << PAGE_ACCESS_TIME_BUCKETS; +} + static inline int page_cpupid_last(struct page *page) { return page->_last_cpupid; @@ -1353,6 +1375,7 @@ static inline int page_cpupid_last(struct page *page) } extern int page_cpupid_xchg_last(struct page *page, int cpupid); +extern unsigned int xchg_page_access_time(struct page *page, unsigned int time); static inline void page_cpupid_reset_last(struct page *page) { @@ -1365,6 +1388,12 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid) return page_to_nid(page); /* XXX */ } +static inline unsigned int xchg_page_access_time(struct page *page, + unsigned int time) +{ + return 0; +} + static inline int page_cpupid_last(struct page *page) { return page_to_nid(page); /* XXX */ diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h index 9d85450bc30a..574d25d6f051 100644 --- a/include/linux/sched/sysctl.h +++ b/include/linux/sched/sysctl.h @@ -48,6 +48,7 @@ extern unsigned int sysctl_numa_balancing_scan_delay; extern unsigned int sysctl_numa_balancing_scan_period_min; extern unsigned int sysctl_numa_balancing_scan_period_max; extern unsigned int sysctl_numa_balancing_scan_size; +extern unsigned int sysctl_numa_balancing_hot_threshold; #ifdef CONFIG_SCHED_DEBUG extern __read_mostly unsigned int sysctl_sched_migration_cost; diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 1a68a0536add..fafb312c1b24 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1074,6 +1074,9 @@ unsigned int sysctl_numa_balancing_scan_size = 256; /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */ unsigned int sysctl_numa_balancing_scan_delay = 1000; +/* The page with hint page fault latency < threshold in ms is considered hot */ +unsigned int sysctl_numa_balancing_hot_threshold = 1000; + struct numa_group { 
refcount_t refcount; @@ -1414,6 +1417,37 @@ static inline unsigned long group_weight(struct task_struct *p, int nid, return 1000 * faults / total_faults; } +static bool pgdat_free_space_enough(struct pglist_data *pgdat) +{ + int z; + unsigned long enough_mark; + + enough_mark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT, + pgdat->node_present_pages >> 4); + for (z = pgdat->nr_zones - 1; z >= 0; z--) { + struct zone *zone = pgdat->node_zones + z; + + if (!populated_zone(zone)) + continue; + + if (zone_watermark_ok(zone, 0, + high_wmark_pages(zone) + enough_mark, + ZONE_MOVABLE, 0)) + return true; + } + return false; +} + +static int numa_hint_fault_latency(struct page *page) +{ + unsigned int last_time, time; + + time = jiffies_to_msecs(jiffies); + last_time = xchg_page_access_time(page, time); + + return (time - last_time) & PAGE_ACCESS_TIME_MASK; +} + bool should_numa_migrate_memory(struct task_struct *p, struct page * page, int src_nid, int dst_cpu) { @@ -1421,6 +1455,27 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page, int dst_nid = cpu_to_node(dst_cpu); int last_cpupid, this_cpupid; + /* + * The pages in slow memory node should be migrated according + * to hot/cold instead of accessing CPU node. + */ + if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && + !node_is_toptier(src_nid)) { + struct pglist_data *pgdat; + unsigned long latency, th; + + pgdat = NODE_DATA(dst_nid); + if (pgdat_free_space_enough(pgdat)) + return true; + + th = sysctl_numa_balancing_hot_threshold; + latency = numa_hint_fault_latency(page); + if (latency > th) + return false; + + return true; + } + this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid); last_cpupid = page_cpupid_xchg_last(page, this_cpupid); @@ -2634,6 +2689,11 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags) if (!p->mm) return; + /* Numa faults statistics are unnecessary for the slow memory node */ + if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && + !node_is_toptier(mem_node)) + return; + /* Allocate buffer to track faults on a per-node basis */ if (unlikely(!p->numa_faults)) { int size = sizeof(*p->numa_faults) * @@ -2653,6 +2713,13 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags) */ if (unlikely(last_cpupid == (-1 & LAST_CPUPID_MASK))) { priv = 1; + } else if (unlikely(!cpu_online(cpupid_to_cpu(last_cpupid)))) { + /* + * In memory tiering mode, cpupid of slow memory page is + * used to record page access time, so its value may be + * invalid during numa balancing mode transition. 
+ */ + return; } else { priv = cpupid_match_pid(p, last_cpupid); if (!priv && !(flags & TNF_NO_GROUP)) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 7d5f12d86489..1efbb0ac73c7 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1754,6 +1754,13 @@ static struct ctl_table kern_table[] = { .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ONE, }, + { + .procname = "numa_balancing_hot_threshold_ms", + .data = &sysctl_numa_balancing_hot_threshold, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = proc_dointvec, + }, { .procname = "numa_balancing", .data = &sysctl_numa_balancing_mode, diff --git a/mm/huge_memory.c b/mm/huge_memory.c index a8c2ddd0cc4a..4e46d838257d 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1379,7 +1379,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd) struct page *page; unsigned long haddr = vmf->address & HPAGE_PMD_MASK; int page_nid = NUMA_NO_NODE, this_nid = numa_node_id(); - int target_nid, last_cpupid = -1; + int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK); bool page_locked; bool migrated = false; bool was_writable; @@ -1406,7 +1406,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd) page = pmd_page(pmd); BUG_ON(is_huge_zero_page(page)); page_nid = page_to_nid(page); - last_cpupid = page_cpupid_last(page); + if (node_is_toptier(page_nid)) + last_cpupid = page_cpupid_last(page); count_vm_numa_event(NUMA_HINT_FAULTS); if (page_nid == this_nid) { count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL); @@ -1798,6 +1799,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, if (prot_numa) { struct page *page; + bool toptier; /* * Avoid trapping faults against the zero page. The read-only * data is likely to be read-cached on the local CPU and @@ -1810,13 +1812,18 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, goto unlock; page = pmd_page(*pmd); + toptier = node_is_toptier(page_to_nid(page)); /* * Skip scanning top tier node if normal numa * balancing is disabled */ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) && - node_is_toptier(page_to_nid(page))) + toptier) goto unlock; + + if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && + !toptier) + xchg_page_access_time(page, jiffies_to_msecs(jiffies)); } /* * In case prot_numa, we are under mmap_read_lock(mm). It's critical diff --git a/mm/memory.c b/mm/memory.c index 469af373ae76..84c5383fae51 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -74,6 +74,7 @@ #include #include #include +#include #include @@ -4063,8 +4064,16 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED)) flags |= TNF_SHARED; - last_cpupid = page_cpupid_last(page); page_nid = page_to_nid(page); + /* + * In memory tiering mode, cpupid of slow memory page is used + * to record page access time. So use default value. + */ + if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) && + !node_is_toptier(page_nid)) + last_cpupid = (-1 & LAST_CPUPID_MASK); + else + last_cpupid = page_cpupid_last(page); target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid, &flags); pte_unmap_unlock(vmf->pte, vmf->ptl); diff --git a/mm/migrate.c b/mm/migrate.c index 2079a69b40ed..d8d80b111a6a 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -641,6 +641,18 @@ void migrate_page_states(struct page *newpage, struct page *page) * future migrations of this same page. 
*/ cpupid = page_cpupid_xchg_last(page, -1); + /* + * If migrate between slow and fast memory node, reset cpupid, + * because that is used to record page access time in slow + * memory node + */ + if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) { + bool f_toptier = node_is_toptier(page_to_nid(page)); + bool t_toptier = node_is_toptier(page_to_nid(newpage)); + + if (f_toptier != t_toptier) + cpupid = -1; + } page_cpupid_xchg_last(newpage, cpupid); ksm_migrate_page(newpage, page); diff --git a/mm/mmzone.c b/mm/mmzone.c index 4686fdc23bb9..aa94ec7176ed 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -112,4 +112,21 @@ int page_cpupid_xchg_last(struct page *page, int cpupid) return last_cpupid; } + +unsigned int xchg_page_access_time(struct page *page, unsigned int time) +{ + unsigned long old_flags, flags; + unsigned int last_time; + + time >>= PAGE_ACCESS_TIME_BUCKETS; + do { + old_flags = flags = page->flags; + last_time = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK; + + flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT); + flags |= (time & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT; + } while (unlikely(cmpxchg(&page->flags, old_flags, flags) != old_flags)); + + return last_time << PAGE_ACCESS_TIME_BUCKETS; +} #endif diff --git a/mm/mprotect.c b/mm/mprotect.c index 8abec0c267fa..7c617bed45ee 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -85,6 +85,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, if (prot_numa) { struct page *page; int nid; + bool toptier; /* Avoid TLB flush if possible */ if (pte_protnone(oldpte)) @@ -114,14 +115,19 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, nid = page_to_nid(page); if (target_node == nid) continue; + toptier = node_is_toptier(nid); /* * Skip scanning top tier node if normal numa * balancing is disabled */ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) && - node_is_toptier(nid)) + toptier) continue; + if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && + !toptier) + xchg_page_access_time(page, + jiffies_to_msecs(jiffies)); } oldpte = ptep_modify_prot_start(vma, addr, pte); From patchwork Tue Oct 27 06:32:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Ying" X-Patchwork-Id: 11859475 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EA48B921 for ; Tue, 27 Oct 2020 06:33:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AC2F920B1F for ; Tue, 27 Oct 2020 06:33:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AC2F920B1F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 373196B006E; Tue, 27 Oct 2020 02:33:09 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 2F9B16B0070; Tue, 27 Oct 2020 02:33:09 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 125786B0071; Tue, 27 Oct 2020 02:33:09 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com 
From: Huang Ying
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman, Ingo Molnar, Dave Hansen, Dan Williams
Subject: [RFC -V4 4/6] autonuma, memory tiering: Rate limit NUMA migration throughput
Date: Tue, 27 Oct 2020 14:32:15 +0800
Message-Id: <20201027063217.211096-5-ying.huang@intel.com>
In-Reply-To: <20201027063217.211096-1-ying.huang@intel.com>
References: <20201027063217.211096-1-ying.huang@intel.com>

In AutoNUMA memory tiering mode, the hot slow memory pages can be
promoted to the fast memory node via AutoNUMA.  But this incurs some
overhead too, so the workload performance may sometimes be hurt.  To
avoid disturbing the workload too much in such situations, we should
make it possible to rate limit the promotion throughput.

So, in this patch, we implement a simple rate limit algorithm as
follows.  The number of candidate pages to be promoted to the fast
memory node via AutoNUMA is counted; if the count exceeds the limit
specified by the user, AutoNUMA promotion is stopped until the next
second.
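A stand-alone sketch of this rate limit is shown below.  It is not the
kernel code (which counts PGPROMOTE_CANDIDATE per node and snapshots
the counter at most once per second with cmpxchg, see
numa_migration_check_rate_limit() in the diff); the single-threaded
window handling and the example limit of 1GB/s worth of 4KB pages are
assumptions for illustration.

/*
 * Illustrative sketch of the promotion rate limit described above:
 * count candidate pages and refuse further promotion once the count
 * in the current one-second window exceeds the configured limit.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static unsigned long rate_limit_pages = 256 * 1024; /* ~1GB/s of 4KB pages */
static unsigned long nr_candidate;           /* total candidates ever seen */
static unsigned long window_start_candidate; /* snapshot at window start */
static time_t window_start;

static bool promotion_allowed(unsigned long nr_pages)
{
        time_t now = time(NULL);

        nr_candidate += nr_pages;
        if (now != window_start) {
                /* New window: remember where this second started. */
                window_start = now;
                window_start_candidate = nr_candidate;
        }
        /* Deny once this window's candidates exceed the limit. */
        return nr_candidate - window_start_candidate <= rate_limit_pages;
}

int main(void)
{
        unsigned long i, allowed = 0;

        for (i = 0; i < 300000; i++)
                if (promotion_allowed(1))
                        allowed++;
        printf("allowed %lu of 300000 candidate pages\n", allowed);
        return 0;
}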
Test the patch with the pmbench memory accessing benchmark with 80:20 read/write ratio and normal access address distribution on a 2 socket Intel server with Optane DC Persistent Memory Model. In the test, the page promotion throughput decreases 50.0% (from 211.7 MB/s to 105.9 MB/s) with the patch, while the benchmark score decreases only 3.3%. A new sysctl knob kernel.numa_balancing_rate_limit_mbps is added for the users to specify the limit. TODO: Add ABI document for new sysctl knob. Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Michal Hocko Cc: Rik van Riel Cc: Mel Gorman Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Dave Hansen Cc: Dan Williams Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- include/linux/mmzone.h | 7 +++++++ include/linux/sched/sysctl.h | 6 ++++++ kernel/sched/fair.c | 29 +++++++++++++++++++++++++++-- kernel/sysctl.c | 8 ++++++++ mm/vmstat.c | 3 +++ 5 files changed, 51 insertions(+), 2 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 8379432f4f2f..a4fbd711d5e9 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -206,6 +206,9 @@ enum node_stat_item { NR_KERNEL_STACK_KB, /* measured in KiB */ #if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) NR_KERNEL_SCS_KB, /* measured in KiB */ +#endif +#ifdef CONFIG_NUMA_BALANCING + PGPROMOTE_CANDIDATE, /* candidate pages to promote */ #endif NR_VM_NODE_STAT_ITEMS }; @@ -771,6 +774,10 @@ typedef struct pglist_data { struct deferred_split deferred_split_queue; #endif +#ifdef CONFIG_NUMA_BALANCING + unsigned long numa_ts; + unsigned long numa_nr_candidate; +#endif /* Fields commonly accessed by the page reclaim scanner */ /* diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h index 574d25d6f051..c0cae68e5da0 100644 --- a/include/linux/sched/sysctl.h +++ b/include/linux/sched/sysctl.h @@ -50,6 +50,12 @@ extern unsigned int sysctl_numa_balancing_scan_period_max; extern unsigned int sysctl_numa_balancing_scan_size; extern unsigned int sysctl_numa_balancing_hot_threshold; +#ifdef CONFIG_NUMA_BALANCING +extern unsigned int sysctl_numa_balancing_rate_limit; +#else +#define sysctl_numa_balancing_rate_limit 0 +#endif + #ifdef CONFIG_SCHED_DEBUG extern __read_mostly unsigned int sysctl_sched_migration_cost; extern __read_mostly unsigned int sysctl_sched_nr_migrate; diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fafb312c1b24..e84ae9db7e3c 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1076,6 +1076,11 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000; /* The page with hint page fault latency < threshold in ms is considered hot */ unsigned int sysctl_numa_balancing_hot_threshold = 1000; +/* + * Restrict the NUMA migration per second in MB for each target node + * if no enough free space in target node + */ +unsigned int sysctl_numa_balancing_rate_limit = 65536; struct numa_group { refcount_t refcount; @@ -1448,6 +1453,23 @@ static int numa_hint_fault_latency(struct page *page) return (time - last_time) & PAGE_ACCESS_TIME_MASK; } +static bool numa_migration_check_rate_limit(struct pglist_data *pgdat, + unsigned long rate_limit, int nr) +{ + unsigned long nr_candidate; + unsigned long now = jiffies, last_ts; + + mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr); + nr_candidate = node_page_state(pgdat, PGPROMOTE_CANDIDATE); + last_ts = pgdat->numa_ts; + if (now > last_ts + HZ && + cmpxchg(&pgdat->numa_ts, last_ts, now) == last_ts) + pgdat->numa_nr_candidate = nr_candidate; + if (nr_candidate - pgdat->numa_nr_candidate > rate_limit) + return 
false; + return true; +} + bool should_numa_migrate_memory(struct task_struct *p, struct page * page, int src_nid, int dst_cpu) { @@ -1462,7 +1484,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page, if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && !node_is_toptier(src_nid)) { struct pglist_data *pgdat; - unsigned long latency, th; + unsigned long rate_limit, latency, th; pgdat = NODE_DATA(dst_nid); if (pgdat_free_space_enough(pgdat)) @@ -1473,7 +1495,10 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page, if (latency > th) return false; - return true; + rate_limit = + sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT); + return numa_migration_check_rate_limit(pgdat, rate_limit, + thp_nr_pages(page)); } this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid); diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 1efbb0ac73c7..4b25fab849ae 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1761,6 +1761,14 @@ static struct ctl_table kern_table[] = { .mode = 0644, .proc_handler = proc_dointvec, }, + { + .procname = "numa_balancing_rate_limit_mbps", + .data = &sysctl_numa_balancing_rate_limit, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + }, { .procname = "numa_balancing", .data = &sysctl_numa_balancing_mode, diff --git a/mm/vmstat.c b/mm/vmstat.c index f427d2d87742..2e60cc0232ef 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1215,6 +1215,9 @@ const char * const vmstat_text[] = { #if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) "nr_shadow_call_stack", #endif +#ifdef CONFIG_NUMA_BALANCING + "pgpromote_candidate", +#endif /* enum writeback_stat_item counters */ "nr_dirty_threshold", From patchwork Tue Oct 27 06:32:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Ying" X-Patchwork-Id: 11859477 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BB368921 for ; Tue, 27 Oct 2020 06:33:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7EB842084C for ; Tue, 27 Oct 2020 06:33:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7EB842084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 349676B0070; Tue, 27 Oct 2020 02:33:12 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 2D2676B0071; Tue, 27 Oct 2020 02:33:12 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 151756B0072; Tue, 27 Oct 2020 02:33:12 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0138.hostedemail.com [216.40.44.138]) by kanga.kvack.org (Postfix) with ESMTP id D29436B0070 for ; Tue, 27 Oct 2020 02:33:11 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 71C17180AD806 for ; Tue, 27 Oct 2020 06:33:11 +0000 (UTC) X-FDA: 77416738182.30.hook73_0111f842727a Received: from filter.hostedemail.com 
From: Huang Ying
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman, Ingo Molnar, Dave Hansen, Dan Williams
Subject: [RFC -V4 5/6] autonuma, memory tiering: Adjust hot threshold automatically
Date: Tue, 27 Oct 2020 14:32:16 +0800
Message-Id: <20201027063217.211096-6-ying.huang@intel.com>
In-Reply-To: <20201027063217.211096-1-ying.huang@intel.com>
References: <20201027063217.211096-1-ying.huang@intel.com>

It isn't easy for the administrator to determine the hot threshold.
So in this patch a method to adjust the hot threshold automatically is
implemented.  The basic idea is to control the number of candidate
promotion pages so that it matches the promotion rate limit.  If the
hint page fault latency of a page is less than the hot threshold, we
will try to promote the page; such a page is called a candidate
promotion page.  If the number of candidate promotion pages in a
statistics interval is much larger than what the promotion rate limit
allows, the hot threshold is decreased to reduce the number of
candidate promotion pages; otherwise, it is increased.  A simplified
sketch of this feedback loop is shown below.

To make the above method work, the total number of pages to check
(those on which the hint page faults occur) and the hot/cold
distribution need to be stable in each statistics interval.  Because
the page tables are scanned linearly in AutoNUMA, but the hot/cold
distribution isn't uniform along the address space, the statistics
interval should be larger than the AutoNUMA scan period.
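The sketch below illustrates this feedback loop in stand-alone C.  The
10% tolerance band and the 1/16 adjustment step match the kernel patch
below (NUMA_MIGRATION_ADJUST_STEPS and the 11/10 / 9/10 checks); the
rest of the scaffolding, including the example numbers in main(), is
assumed for illustration only.

/*
 * Illustrative sketch of the hot threshold adjustment described above:
 * once per statistics interval, compare the number of new promotion
 * candidates with what the rate limit allows and nudge the threshold.
 */
#include <stdio.h>

#define ADJUST_STEPS    16      /* threshold is moved in 1/16 steps */

static unsigned long hot_threshold_ms = 1000;   /* start at the sysctl value */
static unsigned long max_threshold_ms = 1000;   /* sysctl is also the ceiling */
static unsigned long last_nr_candidate;

/*
 * rate_limit_pages: pages allowed per second
 * interval_ms:      statistics interval (the max scan period)
 * nr_candidate:     total candidate pages counted so far
 */
static void adjust_threshold(unsigned long rate_limit_pages,
                             unsigned long interval_ms,
                             unsigned long nr_candidate)
{
        unsigned long ref = rate_limit_pages * interval_ms / 1000;
        unsigned long diff = nr_candidate - last_nr_candidate;
        unsigned long step = max_threshold_ms / ADJUST_STEPS;
        unsigned long th = hot_threshold_ms;

        if (diff > ref * 11 / 10)       /* too many candidates: be stricter */
                th = th > 2 * step ? th - step : step;
        else if (diff < ref * 9 / 10)   /* too few: relax the threshold */
                th = th + step < max_threshold_ms ? th + step : max_threshold_ms;

        hot_threshold_ms = th;
        last_nr_candidate = nr_candidate;
}

int main(void)
{
        /* Pretend 3 intervals of 60s each saw far too many candidates. */
        unsigned long total = 0;

        for (int i = 0; i < 3; i++) {
                total += 30UL * 1000 * 1000;    /* candidates in this interval */
                adjust_threshold(16384, 60000, total);
                printf("interval %d: hot threshold now %lums\n", i,
                       hot_threshold_ms);
        }
        return 0;
}

As described next, the kernel patch uses the maximum AutoNUMA scan
period as the statistics interval.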
So in the patch, the max scan period is used as statistics interval and it works well in our tests. The sysctl knob kernel.numa_balancing_hot_threshold_ms becomes the initial value and max value of the hot threshold. The patch improves the score of pmbench memory accessing benchmark with 80:20 read/write ratio and normal access address distribution by 4.4% with 30.8% fewer NUMA page migrations on a 2 socket Intel server with Optance DC Persistent Memory. Because it improves the accuracy of the hot page selection. Signed-off-by: "Huang, Ying" Cc: Andrew Morton Cc: Michal Hocko Cc: Rik van Riel Cc: Mel Gorman Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Dave Hansen Cc: Dan Williams Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- include/linux/mmzone.h | 3 +++ kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++++++++++---- 2 files changed, 39 insertions(+), 4 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index a4fbd711d5e9..c10efdc92b17 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -777,6 +777,9 @@ typedef struct pglist_data { #ifdef CONFIG_NUMA_BALANCING unsigned long numa_ts; unsigned long numa_nr_candidate; + unsigned long numa_threshold_ts; + unsigned long numa_threshold_nr_candidate; + unsigned long numa_threshold; #endif /* Fields commonly accessed by the page reclaim scanner */ diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e84ae9db7e3c..227007f2b9d7 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -1470,6 +1470,35 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat, return true; } +#define NUMA_MIGRATION_ADJUST_STEPS 16 + +static void numa_migration_adjust_threshold(struct pglist_data *pgdat, + unsigned long rate_limit, + unsigned long ref_th) +{ + unsigned long now = jiffies, last_th_ts, th_period; + unsigned long unit_th, th; + unsigned long nr_cand, ref_cand, diff_cand; + + th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max); + last_th_ts = pgdat->numa_threshold_ts; + if (now > last_th_ts + th_period && + cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) { + ref_cand = rate_limit * + sysctl_numa_balancing_scan_period_max / 1000; + nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE); + diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate; + unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS; + th = pgdat->numa_threshold ? : ref_th; + if (diff_cand > ref_cand * 11 / 10) + th = max(th - unit_th, unit_th); + else if (diff_cand < ref_cand * 9 / 10) + th = min(th + unit_th, ref_th); + pgdat->numa_threshold_nr_candidate = nr_cand; + pgdat->numa_threshold = th; + } +} + bool should_numa_migrate_memory(struct task_struct *p, struct page * page, int src_nid, int dst_cpu) { @@ -1484,19 +1513,22 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page, if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING && !node_is_toptier(src_nid)) { struct pglist_data *pgdat; - unsigned long rate_limit, latency, th; + unsigned long rate_limit, latency, th, def_th; pgdat = NODE_DATA(dst_nid); if (pgdat_free_space_enough(pgdat)) return true; - th = sysctl_numa_balancing_hot_threshold; + def_th = sysctl_numa_balancing_hot_threshold; + rate_limit = + sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT); + numa_migration_adjust_threshold(pgdat, rate_limit, def_th); + + th = pgdat->numa_threshold ? 
From patchwork Tue Oct 27 06:32:17 2020
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 11859479
From: Huang Ying
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying
Subject: [RFC -V4 6/6] autonuma, memory tiering: Add page promotion counter
Date: Tue, 27 Oct 2020 14:32:17 +0800
Message-Id: <20201027063217.211096-7-ying.huang@intel.com>
In-Reply-To: <20201027063217.211096-1-ying.huang@intel.com>
References: <20201027063217.211096-1-ying.huang@intel.com>

Add a counter to distinguish the number of page promotions from the
number of the original inter-socket NUMA balancing migrations.  The
counter is per-node (counted on the target node), which helps to
identify imbalance among the NUMA nodes.

Signed-off-by: "Huang, Ying"
---
 include/linux/mmzone.h |  1 +
 mm/migrate.c           | 10 +++++++++-
 mm/vmstat.c            |  1 +
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c10efdc92b17..b384489897b7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -209,6 +209,7 @@ enum node_stat_item {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
+	PGPROMOTE_SUCCESS,	/* promote successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/mm/migrate.c b/mm/migrate.c
index d8d80b111a6a..0f4106b6c4fe 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2161,8 +2161,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
+	} else {
 		count_vm_numa_event(NUMA_PAGE_MIGRATE);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 
 	BUG_ON(!list_empty(&migratepages));
 	return isolated;
@@ -2287,6 +2292,9 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	mod_node_page_state(page_pgdat(page),
 			NR_ISOLATED_ANON + page_lru,
 			-HPAGE_PMD_NR);
+	if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+		mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+				    HPAGE_PMD_NR);
 	return isolated;
 
 out_fail:
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 2e60cc0232ef..ba5f6f987711 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1217,6 +1217,7 @@ const char * const vmstat_text[] = {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	"pgpromote_candidate",
+	"pgpromote_success",
 #endif
 
 	/* enum writeback_stat_item counters */
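For example, the new counter can be watched together with pgpromote_candidate
from user space.  This is a minimal sketch, assuming the counters are exported
through /proc/vmstat (and the per-node files under
/sys/devices/system/node/nodeN/vmstat) as the vmstat.c hunk above adds them to
vmstat_text.

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		FILE *f = fopen("/proc/vmstat", "r");
		char name[64];
		unsigned long long val;

		if (!f) {
			perror("/proc/vmstat");
			return 1;
		}
		/* Print only the promotion-related counters. */
		while (fscanf(f, "%63s %llu", name, &val) == 2) {
			if (!strcmp(name, "pgpromote_candidate") ||
			    !strcmp(name, "pgpromote_success"))
				printf("%s %llu\n", name, val);
		}
		fclose(f);
		return 0;
	}

Comparing the two values over time gives a rough idea of how many candidate
promotions actually completed.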