From patchwork Fri Jan 28 08:27:49 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12728087
From: Huang Ying
To: Peter Zijlstra, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang,
    Huang Ying, Yang Shi, Baolin Wang, Andrew Morton, Michal Hocko,
    Rik van Riel, Mel Gorman, Dave Hansen, Zi Yan, Wei Xu, osalvador,
    Shakeel Butt, zhongjiang-ali
Subject: [PATCH -V11 1/3] NUMA Balancing: add page promotion counter
Date: Fri, 28 Jan 2022 16:27:49 +0800
Message-Id: <20220128082751.593478-2-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220128082751.593478-1-ying.huang@intel.com>
References: <20220128082751.593478-1-ying.huang@intel.com>
In a system with multiple memory types, e.g. DRAM and PMEM, the CPUs and
DRAM in one socket are put in one NUMA node as before, while the PMEM is
put in another NUMA node, as described in commit c221c0b0308f ("device-dax:
"Hotplug" persistent memory for use like normal RAM").  As a result, the
NUMA balancing mechanism identifies all PMEM accesses as remote accesses
and tries to promote the PMEM pages to DRAM.

To distinguish the number of inter-type promoted pages from that of
inter-socket migrated pages, a new vmstat counter is added.  The counter
is per-node (counted in the target node), so it can also be used to
identify promotion imbalance among the NUMA nodes.

Signed-off-by: "Huang, Ying"
Reviewed-by: Yang Shi
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: zhongjiang-ali
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Acked-by: Johannes Weiner
---
 include/linux/mmzone.h |  3 +++
 include/linux/node.h   |  5 +++++
 mm/migrate.c           | 13 ++++++++++---
 mm/vmstat.c            |  3 +++
 4 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aed44e9b5d89..44bd054ca12b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -210,6 +210,9 @@ enum node_stat_item {
 	NR_PAGETABLE,		/* used for pagetables */
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+	PGPROMOTE_SUCCESS,	/* promote successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/include/linux/node.h b/include/linux/node.h
index bb21fd631b16..81bbf1c0afd3 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
 
 #define to_node(device) container_of(device, struct node, dev)
 
+static inline bool node_is_toptier(int node)
+{
+	return node_state(node, N_CPU);
+}
+
 #endif /* _LINUX_NODE_H_ */
diff --git a/mm/migrate.c b/mm/migrate.c
index 665dbe8cad72..a5971e9f6e6a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2072,6 +2072,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	int nr_succeeded;
 	LIST_HEAD(migratepages);
 	new_page_t *new;
 	bool compound;
@@ -2110,7 +2111,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
-				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
+				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
+				     &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
@@ -2119,8 +2121,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
-		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
+	}
+	if (nr_succeeded) {
+		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+		if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 	BUG_ON(!list_empty(&migratepages));
 	return isolated;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4057372745d0..846b670dd346 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1242,6 +1242,9 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_SWAP
 	"nr_swapcached",
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	"pgpromote_success",
+#endif
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
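
With the patch applied, the new counter appears as "pgpromote_success" in
/proc/vmstat; since it is a per-node stat item, a per-node breakdown should
also show up in /sys/devices/system/node/node<N>/vmstat.  A minimal,
illustrative user-space sketch (not part of the patch) for watching the
promotion rate:

    /* Illustrative only: poll the pgpromote_success counter added by this
     * patch.  Assumes a kernel with CONFIG_NUMA_BALANCING and this series
     * applied; otherwise the counter is simply absent.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static long read_pgpromote_success(void)
    {
            char key[64];
            long val;
            FILE *fp = fopen("/proc/vmstat", "r");

            if (!fp)
                    return -1;
            while (fscanf(fp, "%63s %ld", key, &val) == 2) {
                    if (!strcmp(key, "pgpromote_success")) {
                            fclose(fp);
                            return val;
                    }
            }
            fclose(fp);
            return -1;      /* counter not present: patch or config missing */
    }

    int main(void)
    {
            long prev = read_pgpromote_success();

            for (;;) {
                    sleep(10);
                    long cur = read_pgpromote_success();

                    if (prev >= 0 && cur >= 0)
                            printf("promoted %ld pages in the last 10s\n", cur - prev);
                    prev = cur;
            }
            return 0;
    }
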
From patchwork Fri Jan 28 08:27:50 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12728088
From: Huang Ying
To: Peter Zijlstra, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang,
    Huang Ying, Baolin Wang, Andrew Morton, Michal Hocko, Rik van Riel,
    Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador,
    Shakeel Butt, zhongjiang-ali
Subject: [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory
 tiering system
Date: Fri, 28 Jan 2022 16:27:50 +0800
Message-Id: <20220128082751.593478-3-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220128082751.593478-1-ying.huang@intel.com>
References: <20220128082751.593478-1-ying.huang@intel.com>

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory
usually differs.

In such a system, because of changes in the memory access pattern etc.,
some pages in the slow memory may become hot globally.  So in this
patch, the NUMA balancing mechanism is enhanced to optimize page
placement among the different memory types according to hot/cold state
dynamically.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node.  The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (fake) logical node (called slow
memory node).  That is, the fast memory is regarded as local while the
slow memory is regarded as remote.  So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism stops migrating pages once the
free memory of the target node falls below the high watermark.  This is
a reasonable policy if there's only one memory type, but it makes the
original NUMA balancing mechanism almost useless for optimizing page
placement among different memory types.  Details are as follows.

It's common for the working-set size of the workload to be larger than
the size of the fast memory nodes; otherwise it would be unnecessary to
use the slow memory at all.  So there are almost never enough free
pages in the fast memory nodes, and the globally hot pages in the slow
memory node cannot be promoted to the fast memory node.  To solve the
issue, we have two choices:

a. Ignore the free-pages watermark check when promoting hot pages from
   the slow memory node to the fast memory node.  This creates some
   memory pressure in the fast memory node and thus triggers memory
   reclaim, so that the cold pages in the fast memory node are demoted
   to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little (for example, high_watermark / 4) above the high
   watermark.  Then, when the free pages of the fast memory node fall
   to the high watermark and some hot pages need to be promoted, kswapd
   of the fast memory node is woken up to demote more cold pages from
   the fast memory node to the slow memory node.  This frees some extra
   space in the fast memory node, so the hot pages in the slow memory
   node can be promoted to the fast memory node.
Choice "a" may create high memory pressure in the fast memory node.  If
the memory pressure of the workload is already high, the combined
pressure may become so high that the memory allocation latency of the
workload is affected, e.g. direct reclaim may be triggered.  Choice "b"
works much better in this respect: if the memory pressure of the
workload is high, hot page promotion stops earlier, because its
allocation watermark is higher than that of normal memory allocation.
So choice "b" is implemented in this patch.

In addition to the original page placement optimization among sockets,
the NUMA balancing mechanism is extended to optimize page placement
according to hot/cold state among different memory types.  So the
sysctl user space interface (numa_balancing) is extended in a backward
compatible way as follows, so that users can enable/disable these
functionalities individually.  The sysctl is converted from a Boolean
value to a bit field.  The definition of the flags is:

- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING

We have tested the patch with the pmbench memory accessing benchmark
with an 80:20 read/write ratio and a Gauss access address distribution
on a 2-socket Intel server with Optane DC Persistent Memory.  The test
results show that the pmbench score can improve by up to 95.9%.

Signed-off-by: "Huang, Ying"
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: zhongjiang-ali
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
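
As a worked illustration of the promotion watermark in choice "b"
(editor's sketch, not part of the patch; the real logic is the
pgdat_balanced() hunk in mm/vmscan.c further down): with a 4KB page
size, a zone whose high watermark is 128MB (32768 pages) gets a promote
margin of max(10MB, 128MB / 4) = 32MB (8192 pages), so kswapd keeps
reclaiming until roughly 160MB is free on the fast memory node.

    /* Illustrative only: mirrors the promote-watermark arithmetic that the
     * mm/vmscan.c hunk below adds to pgdat_balanced().  Values are in
     * pages; a 4KB page size is assumed for the example numbers above.
     * The helper name is made up for illustration.
     */
    #define PROMOTE_WATERMARK_DIV   4
    #define PROMOTE_WATERMARK_MIN   (10UL * 1024 * 1024 / 4096)    /* 10MB */

    static unsigned long promote_watermark(unsigned long high_wmark)
    {
            unsigned long promote = high_wmark / PROMOTE_WATERMARK_DIV;

            if (promote < PROMOTE_WATERMARK_MIN)
                    promote = PROMOTE_WATERMARK_MIN;
            /* kswapd reclaims until free pages exceed this raised mark */
            return high_wmark + promote;
    }
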
 Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
 include/linux/sched/sysctl.h                | 10 +++++++
 kernel/sched/core.c                         | 21 ++++++++++++---
 kernel/sysctl.c                             |  2 +-
 mm/migrate.c                                | 19 ++++++++++++--
 mm/vmscan.c                                 | 17 ++++++++++++
 6 files changed, 82 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index d359bcfadd39..ea32ba0c5d3c 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
 numa_balancing
 ==============
 
-Enables/disables automatic page fault based NUMA memory
-balancing. Memory is moved automatically to nodes
-that access it often.
+Enables/disables and configure automatic page fault based NUMA memory
+balancing. Memory is moved automatically to nodes that access it
+often. The value to set can be the result to OR the following,
 
-Enables/disables automatic NUMA memory balancing. On NUMA machines, there
-is a performance penalty if remote memory is accessed by a CPU. When this
-feature is enabled the kernel samples what task thread is accessing memory
-by periodically unmapping pages and later trapping a page fault. At the
-time of the page fault, it is determined if the data being accessed should
-be migrated to a local memory node.
+= =================================
+0x0 NUMA_BALANCING_DISABLED
+0x1 NUMA_BALANCING_NORMAL
+0x2 NUMA_BALANCING_MEMORY_TIERING
+= =================================
+
+Or NUMA_BALANCING_NORMAL to optimize page placement among different
+NUMA nodes to reduce remote accessing.  On NUMA machines, there is a
+performance penalty if remote memory is accessed by a CPU. When this
+feature is enabled the kernel samples what task thread is accessing
+memory by periodically unmapping pages and later trapping a page
+fault. At the time of the page fault, it is determined if the data
+being accessed should be migrated to a local memory node.
 
 The unmapping of pages and trapping faults incur additional overhead that
 ideally is offset by improved memory locality but there is no universal
@@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
 numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
 numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
 
+Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
+different types of memory (represented as different NUMA nodes) to
+place the hot pages in the fast memory.  This is implemented based on
+unmapping and page fault too.
+
 numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
 ===============================================================================================================================
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index c19dd5a2c05c..b5eec8854c5a 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -23,6 +23,16 @@ enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_END,
 };
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode	0
+#endif
+
 /*
  * control realtime throttling:
  *
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 848eaa0efe0e..b8b8e5feb8ef 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4279,7 +4279,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
 
 #ifdef CONFIG_NUMA_BALANCING
 
-void set_numabalancing_state(bool enabled)
+int sysctl_numa_balancing_mode;
+
+static void __set_numabalancing_state(bool enabled)
 {
 	if (enabled)
 		static_branch_enable(&sched_numa_balancing);
@@ -4287,13 +4289,22 @@ void set_numabalancing_state(bool enabled)
 		static_branch_disable(&sched_numa_balancing);
 }
 
+void set_numabalancing_state(bool enabled)
+{
+	if (enabled)
+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
+	else
+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
+	__set_numabalancing_state(enabled);
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
+	int state = sysctl_numa_balancing_mode;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -4303,8 +4314,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
-	if (write)
-		set_numabalancing_state(state);
+	if (write) {
+		sysctl_numa_balancing_mode = state;
+		__set_numabalancing_state(state);
+	}
 	return err;
 }
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 5ae443b2882e..c90a564af720 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= SYSCTL_FOUR,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
diff --git a/mm/migrate.c b/mm/migrate.c
index a5971e9f6e6a..61f7e82e6708 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,6 +51,7 @@
 #include
 #include
 #include
+#include
 
 #include
@@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
+	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages))
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+		    !numa_demotion_enabled)
+			return 0;
+		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 08ab556c7678..bc6fbbc8bedd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 #include
 
 #include
+#include
 
 #include "internal.h"
 
@@ -3966,6 +3967,13 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
 	return false;
 }
 
+/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_PROMOTE_WATERMARK_DIV	4
+#define NUMA_BALANCING_PROMOTE_WATERMARK_MIN	(10UL * 1024 * 1024 >> PAGE_SHIFT)
+
 /*
  * Returns true if there is an eligible zone balanced for the request order
  * and highest_zoneidx
@@ -3987,6 +3995,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 			continue;
 
 		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    numa_demotion_enabled &&
+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+			unsigned long promote_mark;
+
+			promote_mark = max(NUMA_BALANCING_PROMOTE_WATERMARK_MIN,
+					   mark / NUMA_BALANCING_PROMOTE_WATERMARK_DIV);
+			mark += promote_mark;
+		}
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}
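
With the series applied, the numa_balancing sysctl accepts the bit-field
values documented above; writing it still requires CAP_SYS_ADMIN per the
handler in kernel/sched/core.c.  A minimal sketch of selecting the mode
from user space (illustrative only; the same thing can be done with
"echo 3 > /proc/sys/kernel/numa_balancing"):

    /* Illustrative only: select the NUMA balancing mode via the extended
     * sysctl.  0 = disabled, 1 = normal socket balancing, 2 = memory
     * tiering promotion, 3 = both.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int set_numa_balancing_mode(int mode)
    {
            char buf[8];
            int len, ret;
            int fd = open("/proc/sys/kernel/numa_balancing", O_WRONLY);

            if (fd < 0)
                    return -1;
            len = snprintf(buf, sizeof(buf), "%d\n", mode);
            ret = (write(fd, buf, len) == len) ? 0 : -1;
            close(fd);
            return ret;
    }

    int main(void)
    {
            /* NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING */
            if (set_numa_balancing_mode(0x1 | 0x2))
                    perror("numa_balancing");
            return 0;
    }

Because the values 0 and 1 keep their old meaning, existing scripts that
only ever write 0 or 1 keep working, which is the backward compatibility
mentioned in the changelog.
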
From patchwork Fri Jan 28 08:27:51 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12728089
From: Huang Ying
To: Peter Zijlstra, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang,
    Huang Ying, Dave Hansen, Baolin Wang, Andrew Morton, Michal Hocko,
    Rik van Riel, Mel Gorman, Yang Shi, Zi Yan, Wei Xu, osalvador,
    Shakeel Butt, zhongjiang-ali
Subject: [PATCH -V11 3/3] memory tiering: skip to scan fast memory
Date: Fri, 28 Jan 2022 16:27:51 +0800
Message-Id: <20220128082751.593478-4-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220128082751.593478-1-ying.huang@intel.com>
References: <20220128082751.593478-1-ying.huang@intel.com>

If NUMA balancing isn't used to optimize page placement among sockets
but only among memory types, the hot pages in the fast memory node
can't be migrated (promoted) anywhere.  So it's unnecessary to scan the
pages in the fast memory node by changing their PTE/PMD mappings to be
PROT_NONE; the corresponding page faults can be avoided too.

In the test, if only the memory tiering NUMA balancing mode is enabled,
the number of NUMA balancing hint faults for the DRAM node is reduced
to almost 0 with the patch, while the benchmark score doesn't change
visibly.
Signed-off-by: "Huang, Ying"
Suggested-by: Dave Hansen
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: zhongjiang-ali
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/huge_memory.c | 30 +++++++++++++++++++++---------
 mm/mprotect.c    | 13 ++++++++++++-
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 406a3c28c026..9ce126cb0cfd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 #endif
 
-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;
 
-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;
 
+		page = pmd_page(*pmd);
+		/*
+		 * Skip scanning top tier node if normal numa
+		 * balancing is disabled
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    node_is_toptier(page_to_nid(page)))
+			goto unlock;
+	}
 
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0138dfcdb1d8..2fe03e695c81 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			 */
 			if (prot_numa) {
 				struct page *page;
+				int nid;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				if (target_node == page_to_nid(page))
+				nid = page_to_nid(page);
+				if (target_node == nid)
+					continue;
+
+				/*
+				 * Skip scanning top tier node if normal numa
+				 * balancing is disabled
+				 */
+				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+				    node_is_toptier(nid))
 					continue;
 			}
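
To reproduce the observation in the changelog (DRAM-node hint faults
dropping to almost 0 when only the tiering mode is enabled), the
long-standing NUMA balancing counters in /proc/vmstat can be sampled
before and after a test run.  A small illustrative sketch, assuming
CONFIG_NUMA_BALANCING; none of these counter names are added by this
series:

    /* Illustrative only: dump the NUMA hint-fault and migration counters
     * so the effect of skipping fast-memory (top tier) nodes can be seen.
     */
    #include <stdio.h>
    #include <string.h>

    static long vmstat_value(const char *name)
    {
            char key[64];
            long val;
            FILE *fp = fopen("/proc/vmstat", "r");

            if (!fp)
                    return -1;
            while (fscanf(fp, "%63s %ld", key, &val) == 2) {
                    if (!strcmp(key, name)) {
                            fclose(fp);
                            return val;
                    }
            }
            fclose(fp);
            return -1;
    }

    int main(void)
    {
            printf("numa_hint_faults:       %ld\n", vmstat_value("numa_hint_faults"));
            printf("numa_hint_faults_local: %ld\n", vmstat_value("numa_hint_faults_local"));
            printf("numa_pages_migrated:    %ld\n", vmstat_value("numa_pages_migrated"));
            return 0;
    }
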