From patchwork Tue Jan 26 00:34:13 2021
Subject: [RFC][PATCH 01/13] mm/vmscan: restore zone_reclaim_mode ABI
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, ben.widawsky@intel.com, rientjes@google.com, cl@linux.com, alex.shi@linux.alibaba.com, dwagner@suse.de, tobin@kernel.org, akpm@linux-foundation.org, ying.huang@intel.com, dan.j.williams@intel.com, cai@lca.pw, osalvador@suse.de, stable@vger.kernel.org
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:13 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003412.59594AA9@viggo.jf.intel.com>

From: Dave Hansen

I went to add a new RECLAIM_* mode for the zone_reclaim_mode sysctl.
Like a good kernel developer, I also went to update the documentation.
I noticed that the bits in the documentation didn't match the bits in
the #defines.

The VM never explicitly checks the RECLAIM_ZONE bit.  The bit is,
however, implicitly checked when checking 'node_reclaim_mode==0'.
The RECLAIM_ZONE #define was removed in a cleanup.  That, by itself,
is fine.

But, when the bit was removed (bit 0), the _other_ bit locations also
got changed.  That's not OK because the bit values are documented to
mean one specific thing and users surely rely on them meaning that one
thing and not changing from kernel to kernel.  The end result is that
if someone had a script that did:

	sysctl vm.zone_reclaim_mode=1

this script would have gone from enabling node reclaim for clean
unmapped pages to writing out pages during node reclaim after the
commit in question.  That's not great.

Put the bits back the way they were and add a comment so something
like this is a bit harder to do again.  Update the documentation to
make it clear that the first bit is ignored.

Signed-off-by: Dave Hansen
Fixes: 648b5cf368e0 ("mm/vmscan: remove unused RECLAIM_OFF/RECLAIM_ZONE")
Reviewed-by: Ben Widawsky
Acked-by: David Rientjes
Acked-by: Christoph Lameter
Cc: Alex Shi
Cc: Daniel Wagner
Cc: "Tobin C. Harding"
Cc: Christoph Lameter
Cc: Andrew Morton
Cc: Huang Ying
Cc: Dan Williams
Cc: Qian Cai
Cc: Daniel Wagner
Cc: osalvador
Cc: stable@vger.kernel.org
Reviewed-by: Oscar Salvador

---

Changes from v2:
 * Update description to indicate that bit0 was used for clean
   unmapped page node reclaim.

---

 b/Documentation/admin-guide/sysctl/vm.rst |   10 +++++-----
 b/mm/vmscan.c                             |    9 +++++++--
 2 files changed, 12 insertions(+), 7 deletions(-)

diff -puN Documentation/admin-guide/sysctl/vm.rst~mm-vmscan-restore-old-zone_reclaim_mode-abi Documentation/admin-guide/sysctl/vm.rst
--- a/Documentation/admin-guide/sysctl/vm.rst~mm-vmscan-restore-old-zone_reclaim_mode-abi	2021-01-25 16:23:06.048866718 -0800
+++ b/Documentation/admin-guide/sysctl/vm.rst	2021-01-25 16:23:06.056866718 -0800
@@ -978,11 +978,11 @@ that benefit from having their data cach
 left disabled as the caching effect is likely to be more important than
 data locality.
 
-zone_reclaim may be enabled if it's known that the workload is partitioned
-such that each partition fits within a NUMA node and that accessing remote
-memory would cause a measurable performance reduction.  The page allocator
-will then reclaim easily reusable pages (those page cache pages that are
-currently not used) before allocating off node pages.
+Consider enabling one or more zone_reclaim mode bits if it's known that the
+workload is partitioned such that each partition fits within a NUMA node
+and that accessing remote memory would cause a measurable performance
+reduction.  The page allocator will take additional actions before
+allocating off node pages.
 
 Allowing zone reclaim to write out pages stops processes that are
 writing large amounts of data from dirtying pages on other nodes. Zone

diff -puN mm/vmscan.c~mm-vmscan-restore-old-zone_reclaim_mode-abi mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-restore-old-zone_reclaim_mode-abi	2021-01-25 16:23:06.052866718 -0800
+++ b/mm/vmscan.c	2021-01-25 16:23:06.057866718 -0800
@@ -4086,8 +4086,13 @@ module_init(kswapd_init)
  */
 int node_reclaim_mode __read_mostly;
 
-#define RECLAIM_WRITE (1<<0)	/* Writeout pages during reclaim */
-#define RECLAIM_UNMAP (1<<1)	/* Unmap pages during reclaim */
+/*
+ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
+ * ABI.  New bits are OK, but existing bits can never change.
+ */
+#define RECLAIM_ZONE  (1<<0)	/* Run shrink_inactive_list on the zone */
+#define RECLAIM_WRITE (1<<1)	/* Writeout pages during reclaim */
+#define RECLAIM_UNMAP (1<<2)	/* Unmap pages during reclaim */
 
 /*
  * Priority for NODE_RECLAIM. This determines the fraction of pages
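With the restored layout, the sysctl value is simply a sum of the bits
above.  A quick illustration (not part of the patch; the values follow
directly from the #defines):

	sysctl vm.zone_reclaim_mode=1	# RECLAIM_ZONE: reclaim clean unmapped pages
	sysctl vm.zone_reclaim_mode=3	# ...plus RECLAIM_WRITE: write out dirty pages
	sysctl vm.zone_reclaim_mode=7	# ...plus RECLAIM_UNMAP: unmap pages, too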
From patchwork Tue Jan 26 00:34:15 2021
Subject: [RFC][PATCH 02/13] mm/vmscan: move RECLAIM* bits to uapi header
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, ben.widawsky@intel.com, rientjes@google.com, cl@linux.com, alex.shi@linux.alibaba.com, dwagner@suse.de, tobin@kernel.org, akpm@linux-foundation.org, ying.huang@intel.com, dan.j.williams@intel.com, cai@lca.pw, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:15 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003415.1171FE94@viggo.jf.intel.com>

From: Dave Hansen

It is currently not obvious that the RECLAIM_* bits are part of the
uapi since they are defined in vmscan.c.  Move them to a uapi header
to make it obvious.

This should have no functional impact.

Signed-off-by: Dave Hansen
Reviewed-by: Ben Widawsky
Acked-by: David Rientjes
Acked-by: Christoph Lameter
Cc: Alex Shi
Cc: Daniel Wagner
Cc: "Tobin C. Harding"
Cc: Christoph Lameter
Cc: Andrew Morton
Cc: Huang Ying
Cc: Dan Williams
Cc: Qian Cai
Cc: Daniel Wagner
Cc: osalvador
Reviewed-by: Oscar Salvador

---

Note: This is not cc'd to stable.  It does not fix any bugs.

---

 b/include/uapi/linux/mempolicy.h |    7 +++++++
 b/mm/vmscan.c                    |    8 --------
 2 files changed, 7 insertions(+), 8 deletions(-)

diff -puN include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi include/uapi/linux/mempolicy.h
--- a/include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi	2021-01-25 16:23:07.197866715 -0800
+++ b/include/uapi/linux/mempolicy.h	2021-01-25 16:23:07.203866715 -0800
@@ -62,5 +62,12 @@ enum {
 #define MPOL_F_MOF	(1 << 3) /* this policy wants migrate on fault */
 #define MPOL_F_MORON	(1 << 4) /* Migrate On protnone Reference On Node */
 
+/*
+ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
+ * ABI.  New bits are OK, but existing bits can never change.
+ */
+#define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
+#define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
+#define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
 
 #endif /* _UAPI_LINUX_MEMPOLICY_H */

diff -puN mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi	2021-01-25 16:23:07.199866715 -0800
+++ b/mm/vmscan.c	2021-01-25 16:23:07.204866715 -0800
@@ -4087,14 +4087,6 @@ module_init(kswapd_init)
 int node_reclaim_mode __read_mostly;
 
 /*
- * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
- * ABI.  New bits are OK, but existing bits can never change.
- */
-#define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
-#define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
-#define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
-
-/*
  * Priority for NODE_RECLAIM. This determines the fraction of pages
  * of a node considered for each zone_reclaim. 4 scans 1/16th of
  * a zone.
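With the bits in a uapi header, userspace can decode the sysctl
symbolically instead of hard-coding magic numbers.  A minimal,
hypothetical userspace sketch (not part of the patch; assumes the
updated header is installed):

	#include <stdio.h>
	#include <linux/mempolicy.h>

	int main(void)
	{
		unsigned int mode;
		FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "r");

		if (!f || fscanf(f, "%u", &mode) != 1)
			return 1;
		fclose(f);

		/* Decode the vm.zone_reclaim_mode bits by name */
		printf("zone: %d write: %d unmap: %d\n",
		       !!(mode & RECLAIM_ZONE),
		       !!(mode & RECLAIM_WRITE),
		       !!(mode & RECLAIM_UNMAP));
		return 0;
	}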
From patchwork Tue Jan 26 00:34:17 2021
Subject: [RFC][PATCH 03/13] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, ben.widawsky@intel.com, cl@linux.com, alex.shi@linux.alibaba.com, tobin@kernel.org, akpm@linux-foundation.org, ying.huang@intel.com, dan.j.williams@intel.com, cai@lca.pw, dwagner@suse.de, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:17 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003417.72B4BCFB@viggo.jf.intel.com>

From: Dave Hansen

RECLAIM_ZONE was assumed to be unused because it was never explicitly
used in the kernel.  However, there were a number of places where it
was checked implicitly by checking 'node_reclaim_mode' for a zero
value.

These zero checks are not great because it is not obvious what a zero
mode *means* in the code.  Replace them with a helper which makes it
more obvious: node_reclaim_enabled().

This helper also provides a handy place to explicitly check the
RECLAIM_ZONE bit itself.  Check it explicitly there to make it more
obvious where the bit can affect behavior.

This should have no functional impact.

Signed-off-by: Dave Hansen
Reviewed-by: Ben Widawsky
Acked-by: Christoph Lameter
Cc: Alex Shi
Cc: "Tobin C. Harding"
Cc: Christoph Lameter
Cc: Andrew Morton
Cc: Huang Ying
Cc: Dan Williams
Cc: Qian Cai
Cc: Daniel Wagner
Cc: osalvador
Acked-by: David Rientjes
Reviewed-by: Oscar Salvador

---

Note: This is not cc'd to stable.  It does not fix any bugs.

---

 b/include/linux/swap.h |    7 +++++++
 b/mm/khugepaged.c      |    2 +-
 b/mm/page_alloc.c      |    2 +-
 3 files changed, 9 insertions(+), 2 deletions(-)

diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
--- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper	2021-01-25 16:23:08.330866712 -0800
+++ b/include/linux/swap.h	2021-01-25 16:23:08.339866712 -0800
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include <linux/mempolicy.h>
 #include 
 
 struct notifier_block;
@@ -380,6 +381,12 @@ extern int sysctl_min_slab_ratio;
 #define node_reclaim_mode 0
 #endif
 
+static inline bool node_reclaim_enabled(void)
+{
+	/* Is any node_reclaim_mode bit set? */
+	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+}
+
 extern void check_move_unevictable_pages(struct pagevec *pvec);
 
 extern int kswapd_run(int nid);

diff -puN mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper mm/khugepaged.c
--- a/mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper	2021-01-25 16:23:08.332866712 -0800
+++ b/mm/khugepaged.c	2021-01-25 16:23:08.340866712 -0800
@@ -797,7 +797,7 @@ static bool khugepaged_scan_abort(int ni
 	 * If node_reclaim_mode is disabled, then no extra effort is made to
 	 * allocate memory locally.
 	 */
-	if (!node_reclaim_mode)
+	if (!node_reclaim_enabled())
 		return false;
 
 	/* If there is a count for this node already, it must be acceptable */

diff -puN mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper mm/page_alloc.c
--- a/mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper	2021-01-25 16:23:08.335866712 -0800
+++ b/mm/page_alloc.c	2021-01-25 16:23:08.342866712 -0800
@@ -3875,7 +3875,7 @@ retry:
 		if (alloc_flags & ALLOC_NO_WATERMARKS)
 			goto try_this_zone;
 
-		if (node_reclaim_mode == 0 ||
+		if (!node_reclaim_enabled() ||
 		    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
 			continue;
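The helper preserves the old "mode != 0" behavior for every currently
defined bit while making the RECLAIM_ZONE dependency visible.  A
standalone sketch of the equivalence (illustration only, not kernel
code):

	#include <stdbool.h>
	#include <stdio.h>

	#define RECLAIM_ZONE  (1<<0)
	#define RECLAIM_WRITE (1<<1)
	#define RECLAIM_UNMAP (1<<2)

	static bool node_reclaim_enabled(unsigned int mode)
	{
		/* Is any node_reclaim_mode bit set? */
		return mode & (RECLAIM_ZONE | RECLAIM_WRITE | RECLAIM_UNMAP);
	}

	int main(void)
	{
		for (unsigned int mode = 0; mode <= 7; mode++)
			printf("vm.zone_reclaim_mode=%u -> %s\n", mode,
			       node_reclaim_enabled(mode) ? "enabled" : "disabled");
		return 0;
	}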
From patchwork Tue Jan 26 00:34:19 2021
Subject: [RFC][PATCH 04/13] mm/numa: node demotion data structure and lookup
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:19 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003419.43281680@viggo.jf.intel.com>

From: Dave Hansen

Prepare for the kernel to auto-migrate pages to other memory nodes
with a user defined node migration table.  This allows creating a
single migration target for each NUMA node to enable the kernel to do
NUMA page migrations instead of simply reclaiming colder pages.  A
node with no target is a "terminal node", so reclaim acts normally
there.  The migration target does not fundamentally _need_ to be a
single node, but this implementation starts there to limit complexity.

If you consider the migration path as a graph, cycles (loops) in the
graph are disallowed.  This avoids wasting resources by constantly
migrating (A->B, B->A, A->B ...).  The expectation is that cycles will
never be allowed.

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

---

changes in July 2020:
 - Remove loop from next_demotion_node() and get_online_mems().
   This means that the node returned by next_demotion_node()
   might now be offline, but the worst case is that the
   allocation fails.  That's fine since it is transient.

---

 b/mm/migrate.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff -puN mm/migrate.c~0006-node-Define-and-export-memory-migration-path mm/migrate.c
--- a/mm/migrate.c~0006-node-Define-and-export-memory-migration-path	2021-01-25 16:23:09.553866709 -0800
+++ b/mm/migrate.c	2021-01-25 16:23:09.558866709 -0800
@@ -1161,6 +1161,22 @@ out:
 	return rc;
 }
 
+static int node_demotion[MAX_NUMNODES] = {[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
+
+/**
+ * next_demotion_node() - Get the next node in the demotion path
+ * @node: The starting node to lookup the next node
+ *
+ * @returns: node id for next memory node in the demotion path hierarchy
+ * from @node; NUMA_NO_NODE if @node is terminal.  This does not keep
+ * @node online or guarantee that it *continues* to be the next demotion
+ * target.
+ */
+int next_demotion_node(int node)
+{
+	return node_demotion[node];
+}
+
 /*
  * Obtain the lock on page, remove all ptes and migrate the page
  * to the newly allocated page in newpage.
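As an illustration of the data structure (hypothetical two-tier
topology, not from the patch): with DRAM on node 0 and PMEM on node 2,
the table could hold node_demotion[0] == 2 and node_demotion[2] ==
NUMA_NO_NODE, making node 2 terminal.  A caller could walk such a
chain with a sketch like:

	/* Illustration only: print the demotion chain starting at 'nid'. */
	static void dump_demotion_chain(int nid)
	{
		int target = next_demotion_node(nid);

		while (target != NUMA_NO_NODE) {
			pr_info("node %d demotes to node %d\n", nid, target);
			nid = target;
			target = next_demotion_node(nid);
		}
	}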
From patchwork Tue Jan 26 00:34:21 2021
Subject: [RFC][PATCH 05/13] mm/numa: automatically generate node migration order
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:21 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003421.45897BF4@viggo.jf.intel.com>

From: Dave Hansen

When memory fills up on a node, memory contents can be automatically
migrated to another node.  The biggest problems are knowing when to
migrate and to where the migration should be targeted.

The most straightforward way to generate the "to where" list would be
to follow the page allocator fallback lists.  Those lists already tell
us where to look next if memory is full.  It would also be logical to
move memory in that order.

But, the allocator fallback lists have a fatal flaw: most nodes appear
in all the lists.  This would potentially lead to migration cycles
(A->B, B->A, A->B, ...).

Instead of using the allocator fallback lists directly, keep a
separate node migration ordering.  But, reuse the same data used to
generate page allocator fallback in the first place:
find_next_best_node().

This means that the firmware data used to populate node distances
essentially dictates the ordering for now.  It should also be
architecture-neutral since all NUMA architectures have a working
find_next_best_node().

The protocol for node_demotion[] access and writing is not standard.
It has no specific locking and is intended to be read locklessly.
Readers must take care to avoid observing changes that appear
incoherent.  This was done so that node_demotion[] locking has no
chance of becoming a bottleneck on large systems with lots of CPUs in
direct reclaim.

This code is unused for now.  It will be called later in the series.

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

---

 b/mm/internal.h   |    5 +
 b/mm/migrate.c    |  137 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 b/mm/page_alloc.c |    2 
 3 files changed, 142 insertions(+), 2 deletions(-)

diff -puN mm/internal.h~auto-setup-default-migration-path-from-firmware mm/internal.h
--- a/mm/internal.h~auto-setup-default-migration-path-from-firmware	2021-01-25 16:23:10.607866706 -0800
+++ b/mm/internal.h	2021-01-25 16:23:10.616866706 -0800
@@ -515,12 +515,17 @@ static inline void mminit_validate_memmo
 
 #ifdef CONFIG_NUMA
 extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int);
+extern int find_next_best_node(int node, nodemask_t *used_node_mask);
 #else
 static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
 				unsigned int order)
 {
 	return NODE_RECLAIM_NOSCAN;
 }
+static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
+{
+	return NUMA_NO_NODE;
+}
 #endif
 
 extern int hwpoison_filter(struct page *p);

diff -puN mm/migrate.c~auto-setup-default-migration-path-from-firmware mm/migrate.c
--- a/mm/migrate.c~auto-setup-default-migration-path-from-firmware	2021-01-25 16:23:10.609866706 -0800
+++ b/mm/migrate.c	2021-01-25 16:23:10.617866706 -0800
@@ -1161,6 +1161,10 @@ out:
 	return rc;
 }
 
+/*
+ * Writes to this array occur without locking.  READ_ONCE()
+ * is recommended for readers to ensure consistent reads.
+ */
 static int node_demotion[MAX_NUMNODES] = {[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
 
 /**
@@ -1174,7 +1178,13 @@ static int node_demotion[MAX_NUMNODES] =
  */
 int next_demotion_node(int node)
 {
-	return node_demotion[node];
+	/*
+	 * node_demotion[] is updated without excluding
+	 * this function from running.  READ_ONCE() avoids
+	 * reading multiple, inconsistent 'node' values
+	 * during an update.
+	 */
+	return READ_ONCE(node_demotion[node]);
 }
 
 /*

@@ -3124,3 +3134,128 @@ void migrate_vma_finalize(struct migrate
 }
 EXPORT_SYMBOL(migrate_vma_finalize);
 #endif /* CONFIG_DEVICE_PRIVATE */
+
+/* Disable reclaim-based migration. */
+static void disable_all_migrate_targets(void)
+{
+	int node;
+
+	for_each_online_node(node)
+		node_demotion[node] = NUMA_NO_NODE;
+}
+
+/*
+ * Find an automatic demotion target for 'node'.
+ * Failing here is OK.  It might just indicate
+ * being at the end of a chain.
+ */
+static int establish_migrate_target(int node, nodemask_t *used)
+{
+	int migration_target;
+
+	/*
+	 * Cannot set a migration target on a
+	 * node with it already set.
+	 *
+	 * No need for READ_ONCE() here since this
+	 * is in the write path for node_demotion[].
+	 * This should be the only thread writing.
+	 */
+	if (node_demotion[node] != NUMA_NO_NODE)
+		return NUMA_NO_NODE;
+
+	migration_target = find_next_best_node(node, used);
+	if (migration_target == NUMA_NO_NODE)
+		return NUMA_NO_NODE;
+
+	node_demotion[node] = migration_target;
+
+	return migration_target;
+}
+
+/*
+ * When memory fills up on a node, memory contents can be
+ * automatically migrated to another node instead of
+ * discarded at reclaim.
+ *
+ * Establish a "migration path" which will start at nodes
+ * with CPUs and will follow the priorities used to build the
+ * page allocator zonelists.
+ *
+ * The difference here is that cycles must be avoided.  If
+ * node0 migrates to node1, then neither node1, nor anything
+ * node1 migrates to can migrate to node0.
+ *
+ * This function can run simultaneously with readers of
+ * node_demotion[].  However, it can not run simultaneously
+ * with itself.  Exclusion is provided by memory hotplug events
+ * being single-threaded.
+ */
+void __set_migration_target_nodes(void)
+{
+	nodemask_t next_pass	= NODE_MASK_NONE;
+	nodemask_t this_pass	= NODE_MASK_NONE;
+	nodemask_t used_targets = NODE_MASK_NONE;
+	int node;
+
+	/*
+	 * Avoid any oddities like cycles that could occur
+	 * from changes in the topology.  This will leave
+	 * a momentary gap when migration is disabled.
+	 */
+	disable_all_migrate_targets();
+
+	/*
+	 * Ensure that the "disable" is visible across the system.
+	 * Readers will see either a combination of before+disable
+	 * state or disable+after.  They will never see before and
+	 * after state together.
+	 *
+	 * The before+after state together might have cycles and
+	 * could cause readers to do things like loop until this
+	 * function finishes.  This ensures they can only see a
+	 * single "bad" read and would, for instance, only loop
+	 * once.
+	 */
+	smp_wmb();
+
+	/*
+	 * Allocations go close to CPUs, first.  Assume that
+	 * the migration path starts at the nodes with CPUs.
+	 */
+	next_pass = node_states[N_CPU];
+again:
+	this_pass = next_pass;
+	next_pass = NODE_MASK_NONE;
+	/*
+	 * To avoid cycles in the migration "graph", ensure
+	 * that migration sources are not future targets by
+	 * setting them in 'used_targets'.  Do this only
+	 * once per pass so that multiple source nodes can
+	 * share a target node.
+	 *
+	 * 'used_targets' will become unavailable in future
+	 * passes.  This limits some opportunities for
+	 * multiple source nodes to share a destination.
+	 */
+	nodes_or(used_targets, used_targets, this_pass);
+	for_each_node_mask(node, this_pass) {
+		int target_node = establish_migrate_target(node, &used_targets);
+
+		if (target_node == NUMA_NO_NODE)
+			continue;
+
+		/* Visit targets from this pass in the next pass: */
+		node_set(target_node, next_pass);
+	}
+	/* Is another pass necessary? */
+	if (!nodes_empty(next_pass))
+		goto again;
+}
+
+void set_migration_target_nodes(void)
+{
+	get_online_mems();
+	__set_migration_target_nodes();
+	put_online_mems();
+}

diff -puN mm/page_alloc.c~auto-setup-default-migration-path-from-firmware mm/page_alloc.c
--- a/mm/page_alloc.c~auto-setup-default-migration-path-from-firmware	2021-01-25 16:23:10.612866706 -0800
+++ b/mm/page_alloc.c	2021-01-25 16:23:10.619866706 -0800
@@ -5704,7 +5704,7 @@ static int node_load[MAX_NUMNODES];
  *
  * Return: node id of the found node or %NUMA_NO_NODE if no node is found.
  */
-static int find_next_best_node(int node, nodemask_t *used_node_mask)
+int find_next_best_node(int node, nodemask_t *used_node_mask)
 {
 	int n, val;
 	int min_val = INT_MAX;
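For intuition, here is how the passes might play out on a hypothetical
four-node box (nodes 0-1 with CPUs and DRAM, nodes 2-3 CPU-less PMEM;
assumes find_next_best_node() hands out the nearest unused node):

	pass 1: this_pass = {0,1}, used_targets = {0,1}
	        node 0 -> node 2, node 1 -> node 3, next_pass = {2,3}
	pass 2: this_pass = {2,3}, used_targets = {0,1,2,3}
	        no unused targets remain; nodes 2 and 3 keep
	        NUMA_NO_NODE and become terminal nodes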
From patchwork Tue Jan 26 00:34:23 2021
Subject: [RFC][PATCH 06/13] mm/migrate: update migration order during hotplug events
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:23 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003423.8D2B5637@viggo.jf.intel.com>

From: Dave Hansen

Reclaim-based migration is attempting to optimize data placement in
memory based on the system topology.  If the system changes, so must
the migration ordering.

The implementation here is pretty simple and entirely unoptimized.  On
any memory or CPU hotplug event, assume that a node was added or
removed and recalculate all migration targets.  This ensures that the
node_demotion[] array is always ready to be used in case the new
reclaim mode is enabled.

This recalculation is far from optimal, most glaringly that it does
not even attempt to figure out if nodes are actually coming or going.
But, given the expected paucity of hotplug events, this should be
fine.

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

---

 b/mm/migrate.c |   97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 95 insertions(+), 2 deletions(-)

diff -puN mm/migrate.c~enable-numa-demotion mm/migrate.c
--- a/mm/migrate.c~enable-numa-demotion	2021-01-25 16:23:11.850866703 -0800
+++ b/mm/migrate.c	2021-01-25 16:23:11.855866703 -0800
@@ -49,6 +49,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
@@ -3135,6 +3136,7 @@ void migrate_vma_finalize(struct migrate
 EXPORT_SYMBOL(migrate_vma_finalize);
 #endif /* CONFIG_DEVICE_PRIVATE */
 
+#if defined(CONFIG_MEMORY_HOTPLUG)
 /* Disable reclaim-based migration. */
 static void disable_all_migrate_targets(void)
 {
@@ -3191,7 +3193,7 @@ static int establish_migrate_target(int
  * with itself.  Exclusion is provided by memory hotplug events
  * being single-threaded.
  */
-void __set_migration_target_nodes(void)
+static void __set_migration_target_nodes(void)
 {
 	nodemask_t next_pass	= NODE_MASK_NONE;
 	nodemask_t this_pass	= NODE_MASK_NONE;
@@ -3253,9 +3255,100 @@ again:
 		goto again;
 }
 
-void set_migration_target_nodes(void)
+/*
+ * For callers that do not hold get_online_mems() already.
+ */
+static void set_migration_target_nodes(void)
 {
 	get_online_mems();
 	__set_migration_target_nodes();
 	put_online_mems();
 }
+
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs.
+ * That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
+/*
+ * This leaves migrate-on-reclaim transiently disabled
+ * between the MEM_GOING_OFFLINE and MEM_OFFLINE events.
+ * This runs whether reclaim-based migration is enabled
+ * or not.  This ensures that the user can turn reclaim-based
+ * migration on or off at any time without needing to
+ * recalculate migration targets.
+ *
+ * These callbacks already hold get_online_mems().  That
+ * is why __set_migration_target_nodes() can be used as
+ * opposed to set_migration_target_nodes().
+ */
+static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
+						 unsigned long action, void *arg)
+{
+	switch (action) {
+	case MEM_GOING_OFFLINE:
+		/*
+		 * Make sure there are not transient states where
+		 * an offline node is a migration target.  This
+		 * will leave migration disabled until the offline
+		 * completes and the MEM_OFFLINE case below runs.
+		 */
+		disable_all_migrate_targets();
+		break;
+	case MEM_OFFLINE:
+	case MEM_ONLINE:
+		/*
+		 * Recalculate the target nodes once the node
+		 * reaches its final state (online or offline).
+		 */
+		__set_migration_target_nodes();
+		break;
+	case MEM_CANCEL_OFFLINE:
+		/*
+		 * MEM_GOING_OFFLINE disabled all the migration
+		 * targets.  Reenable them.
+		 */
+		__set_migration_target_nodes();
+		break;
+	case MEM_GOING_ONLINE:
+	case MEM_CANCEL_ONLINE:
+		break;
+	}
+
+	return notifier_from_errno(0);
+}
+
+static int __init migrate_on_reclaim_init(void)
+{
+	int ret;
+
+	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
+				migration_online_cpu,
+				migration_offline_cpu);
+	/*
+	 * In the unlikely case that this fails, the automatic
+	 * migration targets may become suboptimal for nodes
+	 * where N_CPU changes.  With such a small impact in a
+	 * rare case, do not bother trying to do anything special.
+	 */
+	WARN_ON(ret < 0);
+	hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+	return 0;
+}
+late_initcall(migrate_on_reclaim_init);
+#endif /* CONFIG_MEMORY_HOTPLUG */
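A sketch of the sequence for a memory offline (hypothetical sysfs
trigger; the notifier cases themselves are from the patch above):

	# echo 0 > /sys/devices/system/memory/memoryN/online
	->  MEM_GOING_OFFLINE:  disable_all_migrate_targets()
	->  MEM_OFFLINE:        __set_migration_target_nodes()  (rebuild order)

	# or, if the offline is aborted:
	->  MEM_CANCEL_OFFLINE: __set_migration_target_nodes()  (re-enable)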
From patchwork Tue Jan 26 00:34:25 2021
Subject: [RFC][PATCH 07/13] mm/migrate: make migrate_pages() return nr_succeeded
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:25 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003425.038B4812@viggo.jf.intel.com>

From: Yang Shi

migrate_pages() returns the number of pages that were not migrated,
or an error code.  When returning an error code, there is no way to
know how many pages were migrated or not migrated.

In the following patch, migrate_pages() is used to demote pages to a
PMEM node, and we need to account for how many pages are reclaimed
(demoted) since page reclaim behavior depends on this.  Add an
*nr_succeeded parameter to make migrate_pages() return how many pages
were demoted successfully in all cases.

Signed-off-by: Yang Shi
Signed-off-by: Dave Hansen
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

---

 b/include/linux/migrate.h |    5 +++--
 b/mm/compaction.c         |    3 ++-
 b/mm/gup.c                |    4 +++-
 b/mm/memory-failure.c     |    4 +++-
 b/mm/memory_hotplug.c     |    4 +++-
 b/mm/mempolicy.c          |    8 ++++++--
 b/mm/migrate.c            |   17 ++++++++++-------
 b/mm/page_alloc.c         |    9 ++++++---
 8 files changed, 36 insertions(+), 18 deletions(-)

diff -puN include/linux/migrate.h~migrate_pages-add-success-return include/linux/migrate.h
--- a/include/linux/migrate.h~migrate_pages-add-success-return	2021-01-25 16:23:12.931866701 -0800
+++ b/include/linux/migrate.h	2021-01-25 16:23:12.954866701 -0800
@@ -40,7 +40,8 @@ extern int migrate_page(struct address_s
 		struct page *newpage, struct page *page,
 		enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
-		unsigned long private, enum migrate_mode mode, int reason);
+		unsigned long private, enum migrate_mode mode, int reason,
+		unsigned int *nr_succeeded);
 extern struct page *alloc_migration_target(struct page *page, unsigned long private);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
@@ -58,7 +59,7 @@ extern int migrate_page_move_mapping(str
 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
-		int reason)
+		int reason, unsigned int *nr_succeeded)
 	{ return -ENOSYS; }
 static inline struct page *alloc_migration_target(struct page *page,
 		unsigned long private)

diff -puN mm/compaction.c~migrate_pages-add-success-return mm/compaction.c
--- a/mm/compaction.c~migrate_pages-add-success-return	2021-01-25 16:23:12.933866701 -0800
+++ b/mm/compaction.c	2021-01-25 16:23:12.956866701 -0800
@@ -2199,6 +2199,7 @@ compact_zone(struct compact_control *cc,
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;
 	bool update_cached;
+	unsigned int nr_succeeded = 0;
 
 	/*
 	 * These counters track activities during zone compaction.
	 * Initialize
@@ -2317,7 +2318,7 @@ compact_zone(struct compact_control *cc,
 
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
 				compaction_free, (unsigned long)cc, cc->mode,
-				MR_COMPACTION);
+				MR_COMPACTION, &nr_succeeded);
 
 		trace_mm_compaction_migratepages(cc->nr_migratepages, err,
 							&cc->migratepages);

diff -puN mm/gup.c~migrate_pages-add-success-return mm/gup.c
--- a/mm/gup.c~migrate_pages-add-success-return	2021-01-25 16:23:12.935866701 -0800
+++ b/mm/gup.c	2021-01-25 16:23:12.957866701 -0800
@@ -1599,6 +1599,7 @@ static long check_and_migrate_cma_pages(
 	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
@@ -1654,7 +1655,8 @@ check_again:
 				put_page(pages[i]);
 
 		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE,
+			&nr_succeeded)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
 			 * without migration.

diff -puN mm/memory-failure.c~migrate_pages-add-success-return mm/memory-failure.c
--- a/mm/memory-failure.c~migrate_pages-add-success-return	2021-01-25 16:23:12.939866701 -0800
+++ b/mm/memory-failure.c	2021-01-25 16:23:12.959866701 -0800
@@ -1783,6 +1783,7 @@ static int __soft_offline_page(struct pa
 	unsigned long pfn = page_to_pfn(page);
 	struct page *hpage = compound_head(page);
 	char const *msg_page[] = {"page", "hugepage"};
+	unsigned int nr_succeeded = 0;
 	bool huge = PageHuge(page);
 	LIST_HEAD(pagelist);
 	struct migration_target_control mtc = {
@@ -1826,7 +1827,8 @@ static int __soft_offline_page(struct pa
 
 	if (isolate_page(hpage, &pagelist)) {
 		ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE);
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE,
+			&nr_succeeded);
 		if (!ret) {
 			bool release = !huge;

diff -puN mm/memory_hotplug.c~migrate_pages-add-success-return mm/memory_hotplug.c
--- a/mm/memory_hotplug.c~migrate_pages-add-success-return	2021-01-25 16:23:12.941866701 -0800
+++ b/mm/memory_hotplug.c	2021-01-25 16:23:12.959866701 -0800
@@ -1278,6 +1278,7 @@ do_migrate_range(unsigned long start_pfn
 	unsigned long pfn;
 	struct page *page, *head;
 	int ret = 0;
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(source);
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
@@ -1352,7 +1353,8 @@ do_migrate_range(unsigned long start_pfn
 		if (nodes_empty(nmask))
 			node_set(mtc.nid, nmask);
 		ret = migrate_pages(&source, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG,
+			&nr_succeeded);
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
 				pr_warn("migrating pfn %lx failed ret:%d ",

diff -puN mm/mempolicy.c~migrate_pages-add-success-return mm/mempolicy.c
--- a/mm/mempolicy.c~migrate_pages-add-success-return	2021-01-25 16:23:12.944866701 -0800
+++ b/mm/mempolicy.c	2021-01-25 16:23:12.960866701 -0800
@@ -1071,6 +1071,7 @@ static int migrate_page_add(struct page
 static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 			   int flags)
 {
+	unsigned int nr_succeeded = 0;
 	nodemask_t nmask;
 	LIST_HEAD(pagelist);
 	int err = 0;
@@ -1093,7 +1094,8 @@ static int migrate_to_node(struct mm_str
 
 	if (!list_empty(&pagelist)) {
 		err = migrate_pages(&pagelist, alloc_migration_target, NULL,
-				(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+				(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL,
+				&nr_succeeded);
 		if (err)
 			putback_movable_pages(&pagelist);
 	}
 
@@ -1270,6 +1272,7 @@ static long do_mbind(unsigned long start
 		     nodemask_t *nmask, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
+	unsigned int nr_succeeded = 0;
 	struct mempolicy *new;
 	unsigned long end;
 	int err;
@@ -1349,7 +1352,8 @@ static long do_mbind(unsigned long start
 		if (!list_empty(&pagelist)) {
 			WARN_ON_ONCE(flags & MPOL_MF_LAZY);
 			nr_failed = migrate_pages(&pagelist, new_page, NULL,
-				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
+				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND,
+				&nr_succeeded);
 			if (nr_failed)
 				putback_movable_pages(&pagelist);
 		}

diff -puN mm/migrate.c~migrate_pages-add-success-return mm/migrate.c
--- a/mm/migrate.c~migrate_pages-add-success-return	2021-01-25 16:23:12.947866701 -0800
+++ b/mm/migrate.c	2021-01-25 16:23:12.964866701 -0800
@@ -1432,6 +1432,7 @@ out:
  * @mode:		The migration mode that specifies the constraints for
  *			page migration, if any.
  * @reason:		The reason for page migration.
+ * @nr_succeeded:	The number of pages migrated successfully.
  *
  * The function returns after 10 attempts or if no pages are movable any more
  * because the list has become empty or no retryable pages exist any more.
@@ -1442,12 +1443,11 @@ out:
  */
 int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
-		enum migrate_mode mode, int reason)
+		enum migrate_mode mode, int reason, unsigned int *nr_succeeded)
 {
 	int retry = 1;
 	int thp_retry = 1;
 	int nr_failed = 0;
-	int nr_succeeded = 0;
 	int nr_thp_succeeded = 0;
 	int nr_thp_failed = 0;
 	int nr_thp_split = 0;
@@ -1527,7 +1527,7 @@ retry:
 				nr_succeeded += nr_subpages;
 				break;
 			}
-			nr_succeeded++;
+			(*nr_succeeded)++;
 			break;
 		default:
 			/*
@@ -1550,12 +1550,12 @@ retry:
 	nr_thp_failed += thp_retry;
 	rc = nr_failed;
 out:
-	count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
+	count_vm_events(PGMIGRATE_SUCCESS, *nr_succeeded);
 	count_vm_events(PGMIGRATE_FAIL, nr_failed);
 	count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
 	count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
 	count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
-	trace_mm_migrate_pages(nr_succeeded, nr_failed, nr_thp_succeeded,
+	trace_mm_migrate_pages(*nr_succeeded, nr_failed, nr_thp_succeeded,
 			       nr_thp_failed, nr_thp_split, mode, reason);
 
 	if (!swapwrite)
@@ -1623,6 +1623,7 @@ static int store_status(int __user *stat
 static int do_move_pages_to_node(struct mm_struct *mm,
 		struct list_head *pagelist, int node)
 {
+	unsigned int nr_succeeded = 0;
 	int err;
 	struct migration_target_control mtc = {
 		.nid = node,
@@ -1630,7 +1631,8 @@ static int do_move_pages_to_node(struct
 	};
 
 	err = migrate_pages(pagelist, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL,
+			&nr_succeeded);
 	if (err)
 		putback_movable_pages(pagelist);
 	return err;
@@ -2103,6 +2105,7 @@ int migrate_misplaced_page(struct page *
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(migratepages);
 
 	/*
@@ -2127,7 +2130,7 @@ int migrate_misplaced_page(struct page *
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
 				     NULL, node, MIGRATE_ASYNC,
-				     MR_NUMA_MISPLACED);
+				     MR_NUMA_MISPLACED, &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);

diff -puN mm/page_alloc.c~migrate_pages-add-success-return mm/page_alloc.c
--- a/mm/page_alloc.c~migrate_pages-add-success-return	2021-01-25 16:23:12.950866701 -0800
b/mm/page_alloc.c 2021-01-25 16:23:12.968866701 -0800 @@ -8401,7 +8401,8 @@ static unsigned long pfn_max_align_up(un /* [start, end) must belong to a single zone. */ static int __alloc_contig_migrate_range(struct compact_control *cc, - unsigned long start, unsigned long end) + unsigned long start, unsigned long end, + unsigned int *nr_succeeded) { /* This function is based on compact_zone() from compaction.c. */ unsigned int nr_reclaimed; @@ -8439,7 +8440,8 @@ static int __alloc_contig_migrate_range( cc->nr_migratepages -= nr_reclaimed; ret = migrate_pages(&cc->migratepages, alloc_migration_target, - NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE); + NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE, + nr_succeeded); } if (ret < 0) { putback_movable_pages(&cc->migratepages); @@ -8475,6 +8477,7 @@ int alloc_contig_range(unsigned long sta unsigned long outer_start, outer_end; unsigned int order; int ret = 0; + unsigned int nr_succeeded = 0; struct compact_control cc = { .nr_migratepages = 0, @@ -8527,7 +8530,7 @@ int alloc_contig_range(unsigned long sta * allocated. So, if we fall through be sure to clear ret so that * -EBUSY is not accidentally used or returned to caller. */ - ret = __alloc_contig_migrate_range(&cc, start, end); + ret = __alloc_contig_migrate_range(&cc, start, end, &nr_succeeded); if (ret && ret != -EBUSY) goto done; ret =0; From patchwork Tue Jan 26 00:34:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Hansen X-Patchwork-Id: 12045007 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EBAD2C433DB for ; Tue, 26 Jan 2021 00:41:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A0D9C21D93 for ; Tue, 26 Jan 2021 00:41:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A0D9C21D93 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 35A008D006A; Mon, 25 Jan 2021 19:41:50 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 2E0818D0065; Mon, 25 Jan 2021 19:41:50 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1AC6A8D006A; Mon, 25 Jan 2021 19:41:50 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0236.hostedemail.com [216.40.44.236]) by kanga.kvack.org (Postfix) with ESMTP id EE7F88D0065 for ; Mon, 25 Jan 2021 19:41:49 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id B1D13181AF5C4 for ; Tue, 26 Jan 2021 00:41:49 +0000 (UTC) X-FDA: 77746073538.08.sun03_551250a2758a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin08.hostedemail.com (Postfix) with ESMTP id 8F51C1819E772 for ; Tue, 26 Jan 2021 00:41:49 +0000 (UTC) X-HE-Tag: sun03_551250a2758a 
X-Filterd-Recvd-Size: 8860 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by imf18.hostedemail.com (Postfix) with ESMTP for ; Tue, 26 Jan 2021 00:41:48 +0000 (UTC) IronPort-SDR: HCwN1PX0T+kRg/vCl4pIhdscO64jytq6NBYRce/X1jOHAMnOS/bKtxzjobR7E5L0uhsH7xvW7+ HfrUAzYmOaOg== X-IronPort-AV: E=McAfee;i="6000,8403,9875"; a="179909827" X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="179909827" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 16:41:47 -0800 IronPort-SDR: pDTFMx1YQamPTXHrxd1t8PwyxWHCu4WdYQwYL1A9hKcF13wplmrfxaaX9avXrLOTNwL3pnfvti lpOYisCtb7Gg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="356532776" Received: from viggo.jf.intel.com (HELO localhost.localdomain) ([10.54.77.144]) by orsmga006.jf.intel.com with ESMTP; 25 Jan 2021 16:41:47 -0800 Subject: [RFC][PATCH 08/13] mm/migrate: demote pages during reclaim To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org,Dave Hansen ,yang.shi@linux.alibaba.com,rientjes@google.com,ying.huang@intel.com,dan.j.williams@intel.com,osalvador@suse.de From: Dave Hansen Date: Mon, 25 Jan 2021 16:34:27 -0800 References: <20210126003411.2AC51464@viggo.jf.intel.com> In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com> Message-Id: <20210126003427.73DFDD34@viggo.jf.intel.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Dave Hansen This is mostly derived from a patch from Yang Shi: https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/ Add code to the reclaim path (shrink_page_list()) to "demote" data to another NUMA node instead of discarding the data. This always avoids the cost of I/O needed to read the page back in and sometimes avoids the writeout cost when the page is dirty. A second pass through shrink_page_list() will be made if any demotions fail. This essentially falls back to normal reclaim behavior in the case that demotions fail. Previous versions of this patch may have simply failed to reclaim pages which were eligible for demotion but were unable to be demoted in practice. Note: This just adds the start of infrastructure for migration. It is actually disabled next to the FIXME in migrate_demote_page_ok(). Signed-off-by: Dave Hansen Cc: Yang Shi Cc: David Rientjes Cc: Huang Ying Cc: Dan Williams Cc: osalvador --- changes from 202010: * add MR_NUMA_MISPLACED to trace MIGRATE_REASON define * make migrate_demote_page_ok() static, remove 'sc' arg until later patch * remove unnecessary alloc_demote_page() hugetlb warning * Simplify alloc_demote_page() gfp mask. Depend on __GFP_NORETRY to make it lightweight instead of fancier stuff like leaving out __GFP_IO/FS. * Allocate migration page with alloc_migration_target() instead of allocating directly. changes from 20200730: * Add another pass through shrink_page_list() when demotion fails.
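As a reading aid, the two-pass structure this adds to shrink_page_list() can be sketched as standalone C. Everything below is a toy stand-in: the list is a plain singly-linked list, demote_page_list() fakes migration by succeeding on even page ids and failing on odd ones, and none of these are the kernel's types or APIs; only the shape of the first pass / demote / splice-back / second pass mirrors the patch.

#include <stdbool.h>
#include <stdio.h>

struct page {
        int id;
        bool can_demote;
        struct page *next;
};

/* Fake migrate_pages(): "demote" even ids, fail odd ids so the
 * retry path below actually runs.  Failures stay on the list. */
static unsigned int demote_page_list(struct page **demote_pages)
{
        unsigned int nr_succeeded = 0;
        struct page **pp = demote_pages;

        while (*pp) {
                if ((*pp)->id % 2 == 0) {
                        *pp = (*pp)->next;      /* "migrated" away */
                        nr_succeeded++;
                } else {
                        pp = &(*pp)->next;      /* demotion failed */
                }
        }
        return nr_succeeded;
}

static unsigned int shrink_page_list(struct page **page_list)
{
        struct page *demote_pages = NULL;
        unsigned int nr_reclaimed = 0;
        bool do_demote_pass = true;

retry:
        while (*page_list) {
                struct page *page = *page_list;

                *page_list = page->next;
                /* First pass: divert candidates instead of reclaiming. */
                if (do_demote_pass && page->can_demote) {
                        page->next = demote_pages;
                        demote_pages = page;
                        continue;
                }
                nr_reclaimed++;         /* normal reclaim path */
        }

        nr_reclaimed += demote_page_list(&demote_pages);
        if (demote_pages) {
                /* Splice failures back and retry without demotion. */
                *page_list = demote_pages;
                demote_pages = NULL;
                do_demote_pass = false;
                goto retry;
        }
        return nr_reclaimed;
}

int main(void)
{
        struct page pages[4];
        struct page *list = NULL;

        for (int i = 0; i < 4; i++) {
                pages[i] = (struct page){ .id = i, .can_demote = true };
                pages[i].next = list;
                list = &pages[i];
        }
        printf("reclaimed or demoted: %u\n", shrink_page_list(&list));
        return 0;
}

The point of this shape is that a failed demotion costs one extra pass over the leftover pages rather than a lost reclaim opportunity.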
--- b/include/linux/migrate.h | 9 ++++ b/include/trace/events/migrate.h | 3 - b/mm/vmscan.c | 81 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 92 insertions(+), 1 deletion(-) diff -puN include/linux/migrate.h~demote-with-migrate_pages include/linux/migrate.h --- a/include/linux/migrate.h~demote-with-migrate_pages 2021-01-25 16:23:14.591866696 -0800 +++ b/include/linux/migrate.h 2021-01-25 16:23:14.599866696 -0800 @@ -27,6 +27,7 @@ enum migrate_reason { MR_MEMPOLICY_MBIND, MR_NUMA_MISPLACED, MR_CONTIG_RANGE, + MR_DEMOTION, MR_TYPES }; @@ -196,6 +197,14 @@ struct migrate_vma { int migrate_vma_setup(struct migrate_vma *args); void migrate_vma_pages(struct migrate_vma *migrate); void migrate_vma_finalize(struct migrate_vma *migrate); +int next_demotion_node(int node); + +#else /* CONFIG_MIGRATION disabled: */ + +static inline int next_demotion_node(int node) +{ + return NUMA_NO_NODE; +} #endif /* CONFIG_MIGRATION */ diff -puN include/trace/events/migrate.h~demote-with-migrate_pages include/trace/events/migrate.h --- a/include/trace/events/migrate.h~demote-with-migrate_pages 2021-01-25 16:23:14.593866696 -0800 +++ b/include/trace/events/migrate.h 2021-01-25 16:23:14.599866696 -0800 @@ -20,7 +20,8 @@ EM( MR_SYSCALL, "syscall_or_cpuset") \ EM( MR_MEMPOLICY_MBIND, "mempolicy_mbind") \ EM( MR_NUMA_MISPLACED, "numa_misplaced") \ - EMe(MR_CONTIG_RANGE, "contig_range") + EM( MR_CONTIG_RANGE, "contig_range") \ + EMe(MR_DEMOTION, "demotion") /* * First define the enums in the above macros to be exported to userspace diff -puN mm/vmscan.c~demote-with-migrate_pages mm/vmscan.c --- a/mm/vmscan.c~demote-with-migrate_pages 2021-01-25 16:23:14.595866696 -0800 +++ b/mm/vmscan.c 2021-01-25 16:23:14.601866696 -0800 @@ -43,6 +43,7 @@ #include #include #include +#include #include #include #include @@ -1036,6 +1037,24 @@ static enum page_references page_check_r return PAGEREF_RECLAIM; } +static bool migrate_demote_page_ok(struct page *page) +{ + int next_nid = next_demotion_node(page_to_nid(page)); + + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE(PageHuge(page), page); + VM_BUG_ON_PAGE(PageLRU(page), page); + + if (next_nid == NUMA_NO_NODE) + return false; + if (PageTransHuge(page) && !thp_migration_supported()) + return false; + + // FIXME: actually enable this later in the series + return false; +} + + /* Check if a page is dirty or under writeback */ static void page_check_dirty_writeback(struct page *page, bool *dirty, bool *writeback) @@ -1066,6 +1085,44 @@ static void page_check_dirty_writeback(s mapping->a_ops->is_dirty_writeback(page, dirty, writeback); } +static struct page *alloc_demote_page(struct page *page, unsigned long node) +{ + struct migration_target_control mtc = { + /* + * Fail quickly and quietly. Page will likely + * just be discarded instead of migrated. + */ + .gfp_mask = GFP_HIGHUSER | __GFP_NORETRY | __GFP_NOWARN, + .nid = node + }; + + return alloc_migration_target(page, (unsigned long)&mtc); +} + +/* + * Take pages on @demote_list and attempt to demote them to + * another node. Pages which are not demoted are left on + * @demote_pages. 
+ */ +static unsigned int demote_page_list(struct list_head *demote_pages, + struct pglist_data *pgdat, + struct scan_control *sc) +{ + int target_nid = next_demotion_node(pgdat->node_id); + unsigned int nr_succeeded = 0; + int err; + + if (list_empty(demote_pages)) + return 0; + + /* Demotion ignores all cpuset and mempolicy settings */ + err = migrate_pages(demote_pages, alloc_demote_page, NULL, + target_nid, MIGRATE_ASYNC, MR_DEMOTION, + &nr_succeeded); + + return nr_succeeded; +} + /* * shrink_page_list() returns the number of reclaimed pages */ @@ -1078,12 +1135,15 @@ static unsigned int shrink_page_list(str { LIST_HEAD(ret_pages); LIST_HEAD(free_pages); + LIST_HEAD(demote_pages); unsigned int nr_reclaimed = 0; unsigned int pgactivate = 0; + bool do_demote_pass = true; memset(stat, 0, sizeof(*stat)); cond_resched(); +retry: while (!list_empty(page_list)) { struct address_space *mapping; struct page *page; @@ -1233,6 +1293,16 @@ static unsigned int shrink_page_list(str } /* + * Before reclaiming the page, try to relocate + * its contents to another node. + */ + if (do_demote_pass && migrate_demote_page_ok(page)) { + list_add(&page->lru, &demote_pages); + unlock_page(page); + continue; + } + + /* * Anonymous process memory has backing store? * Try to allocate it some swap space here. * Lazyfree page could be freed directly @@ -1479,6 +1549,17 @@ keep: list_add(&page->lru, &ret_pages); VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page); } + /* 'page_list' is always empty here */ + + /* Migrate pages selected for demotion */ + nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc); + /* Pages that could not be demoted are still in @demote_pages */ + if (!list_empty(&demote_pages)) { + /* Pages which failed to be demoted go back on @page_list for retry: */ + list_splice_init(&demote_pages, page_list); + do_demote_pass = false; + goto retry; + } pgactivate = stat->nr_activate[0] + stat->nr_activate[1]; From patchwork Tue Jan 26 00:34:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Hansen X-Patchwork-Id: 12045009 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1D1C2C433E0 for ; Tue, 26 Jan 2021 00:41:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B7FB421D93 for ; Tue, 26 Jan 2021 00:41:54 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B7FB421D93 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CF4628D006B; Mon, 25 Jan 2021 19:41:51 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C80E08D0065; Mon, 25 Jan 2021 19:41:51 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B9EE18D006B; Mon, 25 Jan 2021 19:41:51 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0228.hostedemail.com
[216.40.44.228]) by kanga.kvack.org (Postfix) with ESMTP id A3FA08D0065 for ; Mon, 25 Jan 2021 19:41:51 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 70D37180AD838 for ; Tue, 26 Jan 2021 00:41:51 +0000 (UTC) X-FDA: 77746073622.22.toes49_041659f2758a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id 5141D18038E67 for ; Tue, 26 Jan 2021 00:41:51 +0000 (UTC) X-HE-Tag: toes49_041659f2758a X-Filterd-Recvd-Size: 3908 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by imf40.hostedemail.com (Postfix) with ESMTP for ; Tue, 26 Jan 2021 00:41:50 +0000 (UTC) IronPort-SDR: 1irHkxnjtUc6m99LCFlp9R1N7aSOMnxGcmi7GXQhOYveXsftJylpmeNc4Z3z8sKCvEmiUq45yq F5uTw5m3+bJQ== X-IronPort-AV: E=McAfee;i="6000,8403,9875"; a="177259364" X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="177259364" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 16:41:49 -0800 IronPort-SDR: pQHygxC4qUnWqto+5NjrUq+YoSMkI/NVIU3ZhKR3nJ13hLODU6oenDR2jFFvRNRzK1wvpElQQg 4+SVD8xUt38Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="577639920" Received: from viggo.jf.intel.com (HELO localhost.localdomain) ([10.54.77.144]) by fmsmga005.fm.intel.com with ESMTP; 25 Jan 2021 16:41:48 -0800 Subject: [RFC][PATCH 09/13] mm/vmscan: add page demotion counter To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org,Dave Hansen ,yang.shi@linux.alibaba.com,rientjes@google.com,ying.huang@intel.com,dan.j.williams@intel.com,david@redhat.com,osalvador@suse.de From: Dave Hansen Date: Mon, 25 Jan 2021 16:34:29 -0800 References: <20210126003411.2AC51464@viggo.jf.intel.com> In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com> Message-Id: <20210126003429.1045A904@viggo.jf.intel.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Yang Shi Account the number of demoted pages into reclaim_state->nr_demoted. Add pgdemote_kswapd and pgdemote_direct VM counters shown in /proc/vmstat.
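Once applied, the counters can be watched from userspace with nothing more than a /proc/vmstat parser. A minimal C sketch, assuming only that the two counters added below appear as "name value" lines in /proc/vmstat:

#include <stdio.h>
#include <string.h>

int main(void)
{
        char name[64];
        unsigned long long val;
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f) {
                perror("/proc/vmstat");
                return 1;
        }
        /* Each /proc/vmstat line is "<counter> <value>". */
        while (fscanf(f, "%63s %llu", name, &val) == 2) {
                if (strncmp(name, "pgdemote_", 9) == 0)
                        printf("%s = %llu\n", name, val);
        }
        fclose(f);
        return 0;
}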
[ daveh: - __count_vm_events() a bit, and made them look at the THP size directly rather than getting data from migrate_pages() ] Signed-off-by: Yang Shi Signed-off-by: Dave Hansen Cc: David Rientjes Cc: Huang Ying Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- Changes since 202010: * remove unused scan-control 'demoted' field --- b/include/linux/vm_event_item.h | 2 ++ b/mm/vmscan.c | 5 +++++ b/mm/vmstat.c | 2 ++ 3 files changed, 9 insertions(+) diff -puN include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter include/linux/vm_event_item.h --- a/include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter 2021-01-25 16:23:15.821866693 -0800 +++ b/include/linux/vm_event_item.h 2021-01-25 16:23:15.831866693 -0800 @@ -33,6 +33,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS PGREUSE, PGSTEAL_KSWAPD, PGSTEAL_DIRECT, + PGDEMOTE_KSWAPD, + PGDEMOTE_DIRECT, PGSCAN_KSWAPD, PGSCAN_DIRECT, PGSCAN_DIRECT_THROTTLE, diff -puN mm/vmscan.c~mm-vmscan-add-page-demotion-counter mm/vmscan.c --- a/mm/vmscan.c~mm-vmscan-add-page-demotion-counter 2021-01-25 16:23:15.823866693 -0800 +++ b/mm/vmscan.c 2021-01-25 16:23:15.835866693 -0800 @@ -1120,6 +1120,11 @@ static unsigned int demote_page_list(str target_nid, MIGRATE_ASYNC, MR_DEMOTION, &nr_succeeded); + if (current_is_kswapd()) + __count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded); + else + __count_vm_events(PGDEMOTE_DIRECT, nr_succeeded); + return nr_succeeded; } diff -puN mm/vmstat.c~mm-vmscan-add-page-demotion-counter mm/vmstat.c --- a/mm/vmstat.c~mm-vmscan-add-page-demotion-counter 2021-01-25 16:23:15.825866693 -0800 +++ b/mm/vmstat.c 2021-01-25 16:23:15.838866693 -0800 @@ -1244,6 +1244,8 @@ const char * const vmstat_text[] = { "pgreuse", "pgsteal_kswapd", "pgsteal_direct", + "pgdemote_kswapd", + "pgdemote_direct", "pgscan_kswapd", "pgscan_direct", "pgscan_direct_throttle", From patchwork Tue Jan 26 00:34:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Hansen X-Patchwork-Id: 12045011 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B58CC433DB for ; Tue, 26 Jan 2021 00:41:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E978F21D93 for ; Tue, 26 Jan 2021 00:41:56 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E978F21D93 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 903608D006C; Mon, 25 Jan 2021 19:41:52 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 810D48D0065; Mon, 25 Jan 2021 19:41:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 578B98D006C; Mon, 25 Jan 2021 19:41:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0219.hostedemail.com [216.40.44.219]) by kanga.kvack.org (Postfix) with ESMTP id 40DED8D0065 for ; Mon, 25 Jan 2021 
19:41:52 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 0FB181EE6 for ; Tue, 26 Jan 2021 00:41:52 +0000 (UTC) X-FDA: 77746073664.22.queen04_1900d1e2758a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id DFC1918038E67 for ; Tue, 26 Jan 2021 00:41:51 +0000 (UTC) X-HE-Tag: queen04_1900d1e2758a X-Filterd-Recvd-Size: 4200 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by imf06.hostedemail.com (Postfix) with ESMTP for ; Tue, 26 Jan 2021 00:41:50 +0000 (UTC) IronPort-SDR: Tw5fJd3A/lK5Y3NziMBkqosL/uUiq9BdFZySh+eaC5Fi29lxK/2EB7aE6Mt2Ry6Y4fC4thQQOp 7l5CW6Oa5Kvw== X-IronPort-AV: E=McAfee;i="6000,8403,9875"; a="167500502" X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="167500502" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 16:41:50 -0800 IronPort-SDR: SqQbFya8pBJyjCISCrzIQsncIzU0SYhJtTH0YCI0M9R0reNoVgJoG4XNXagfqux3P5ECijZwyv F3190jrHZ2Wg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="429496854" Received: from viggo.jf.intel.com (HELO localhost.localdomain) ([10.54.77.144]) by orsmga001.jf.intel.com with ESMTP; 25 Jan 2021 16:41:50 -0800 Subject: [RFC][PATCH 10/13] mm/vmscan: add helper for querying ability to age anonymous pages To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org,Dave Hansen ,rientjes@google.com,ying.huang@intel.com,dan.j.williams@intel.com,david@redhat.com,osalvador@suse.de From: Dave Hansen Date: Mon, 25 Jan 2021 16:34:31 -0800 References: <20210126003411.2AC51464@viggo.jf.intel.com> In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com> Message-Id: <20210126003431.19BDC239@viggo.jf.intel.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Dave Hansen Anonymous pages are kept on their own LRU(s). These lists could theoretically always be scanned and maintained. But, without swap, there is currently nothing the kernel can *do* with the results of a scanned, sorted LRU for anonymous pages. A check for '!total_swap_pages' currently serves as a valid check as to whether anonymous LRUs should be maintained. However, another method will be added shortly: page demotion. Abstract out the 'total_swap_pages' checks into a helper, give it a logically significant name, and check for the possibility of page demotion. Signed-off-by: Dave Hansen Cc: David Rientjes Cc: Huang Ying Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- b/mm/vmscan.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff -puN mm/vmscan.c~mm-vmscan-anon-can-be-aged mm/vmscan.c --- a/mm/vmscan.c~mm-vmscan-anon-can-be-aged 2021-01-25 16:23:17.044866690 -0800 +++ b/mm/vmscan.c 2021-01-25 16:23:17.053866690 -0800 @@ -2508,6 +2508,26 @@ out: } } +/* + * Anonymous LRU management is a waste if there is + * ultimately no way to reclaim the memory. + */ +bool anon_should_be_aged(struct lruvec *lruvec) +{ + struct pglist_data *pgdat = lruvec_pgdat(lruvec); + + /* Aging the anon LRU is valuable if swap is present: */ + if (total_swap_pages > 0) + return true; + + /* Also valuable if anon pages can be demoted: */ + if (next_demotion_node(pgdat->node_id) >= 0) + return true; + + /* No way to reclaim anon pages. 
Should not age anon LRUs: */ + return false; +} + static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) { unsigned long nr[NR_LRU_LISTS]; @@ -2617,7 +2637,8 @@ static void shrink_lruvec(struct lruvec * Even if we did not try to evict anon pages at all, we want to * rebalance the anon lru active/inactive ratio. */ - if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON)) + if (anon_should_be_aged(lruvec) && + inactive_is_low(lruvec, LRU_INACTIVE_ANON)) shrink_active_list(SWAP_CLUSTER_MAX, lruvec, sc, LRU_ACTIVE_ANON); } @@ -3446,10 +3467,11 @@ static void age_active_anon(struct pglis struct mem_cgroup *memcg; struct lruvec *lruvec; - if (!total_swap_pages) + lruvec = mem_cgroup_lruvec(NULL, pgdat); + + if (!anon_should_be_aged(lruvec)) return; - lruvec = mem_cgroup_lruvec(NULL, pgdat); if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON)) return; From patchwork Tue Jan 26 00:34:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Hansen X-Patchwork-Id: 12045017 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5E5FBC433DB for ; Tue, 26 Jan 2021 00:42:05 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0562B21D93 for ; Tue, 26 Jan 2021 00:42:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0562B21D93 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 971598D006F; Mon, 25 Jan 2021 19:42:04 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 923C28D0065; Mon, 25 Jan 2021 19:42:04 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 836C78D006F; Mon, 25 Jan 2021 19:42:04 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0226.hostedemail.com [216.40.44.226]) by kanga.kvack.org (Postfix) with ESMTP id 6EFBC8D0065 for ; Mon, 25 Jan 2021 19:42:04 -0500 (EST) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 362028249980 for ; Tue, 26 Jan 2021 00:42:04 +0000 (UTC) X-FDA: 77746074168.25.lock37_3c0d9fc2758a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin25.hostedemail.com (Postfix) with ESMTP id 16AD21804E3AD for ; Tue, 26 Jan 2021 00:42:04 +0000 (UTC) X-HE-Tag: lock37_3c0d9fc2758a X-Filterd-Recvd-Size: 5607 Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by imf42.hostedemail.com (Postfix) with ESMTP for ; Tue, 26 Jan 2021 00:42:03 +0000 (UTC) IronPort-SDR: hfztYpYPYHBFGWpExixXxp06LQvI/lg1ncNk9yKWTLeL/AKYOcPzdVOBoDg/V/svuuonE7WnHT buPzPewYHNjQ== X-IronPort-AV: E=McAfee;i="6000,8403,9875"; a="159004921" X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="159004921" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga106.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 16:42:01 -0800 IronPort-SDR: cDyaHQGO3HmqW3/Fjq2qN5bJ+/0yakWopGBe/OChV8g1zvHowKvfQKsR9FSWU8b616yWMWJa9K cd4oz8/PiHGQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="368924701" Received: from viggo.jf.intel.com (HELO localhost.localdomain) ([10.54.77.144]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2021 16:41:52 -0800 Subject: [RFC][PATCH 11/13] mm/vmscan: Consider anonymous pages without swap To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org,Dave Hansen ,kbusch@kernel.org,vishal.l.verma@intel.com,yang.shi@linux.alibaba.com,rientjes@google.com,ying.huang@intel.com,dan.j.williams@intel.com,david@redhat.com,osalvador@suse.de From: Dave Hansen Date: Mon, 25 Jan 2021 16:34:32 -0800 References: <20210126003411.2AC51464@viggo.jf.intel.com> In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com> Message-Id: <20210126003432.6E88B570@viggo.jf.intel.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Keith Busch Reclaim anonymous pages if a migration path is available now that demotion provides a non-swap recourse for reclaiming anon pages. Note that this check is subtly different from the anon_should_be_aged() checks. This mechanism checks whether a specific page in a specific context *can* actually be reclaimed, given current swap space and cgroup limits. anon_should_be_aged() is a much simpler and more preliminary check which just says whether there is a possibility of future reclaim. #Signed-off-by: Keith Busch Cc: Keith Busch [vishal: fixup the migration->demotion rename] Signed-off-by: Vishal Verma Signed-off-by: Dave Hansen Cc: Yang Shi Cc: David Rientjes Cc: Huang Ying Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- Changes from Dave 10/2020: * remove 'total_swap_pages' modification Changes from Dave 06/2020: * rename reclaim_anon_pages()->can_reclaim_anon_pages() Note: Keith's Intel SoB is commented out because he is no longer at Intel and his @intel.com mail will bounce --- b/mm/vmscan.c | 35 ++++++++++++++++++++++++++++++++--- 1 file changed, 32 insertions(+), 3 deletions(-) diff -puN mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap mm/vmscan.c --- a/mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap 2021-01-25 16:23:18.106866688 -0800 +++ b/mm/vmscan.c 2021-01-25 16:23:18.111866688 -0800 @@ -289,6 +289,34 @@ static bool writeback_throttling_sane(st } #endif +static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg, + int node_id) +{ + if (memcg == NULL) { + /* + * For non-memcg reclaim, is there + * space in any swap device? + */ + if (get_nr_swap_pages() > 0) + return true; + } else { + /* Is the memcg below its swap limit? */ + if (mem_cgroup_get_nr_swap_pages(memcg) > 0) + return true; + } + + /* + * The page can not be swapped. + * + * Can it be reclaimed from this node via demotion? + */ + if (next_demotion_node(node_id) >= 0) + return true; + + /* No way to reclaim anon pages */ + return false; +} + /* * This misses isolated pages which are not accounted for to save counters.
* As the data only determines if reclaim or compaction continues, it is @@ -300,7 +328,7 @@ unsigned long zone_reclaimable_pages(str nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE); - if (get_nr_swap_pages() > 0) + if (can_reclaim_anon_pages(NULL, zone_to_nid(zone))) nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON); @@ -2323,6 +2351,7 @@ enum scan_balance { static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, unsigned long *nr) { + struct pglist_data *pgdat = lruvec_pgdat(lruvec); struct mem_cgroup *memcg = lruvec_memcg(lruvec); unsigned long anon_cost, file_cost, total_cost; int swappiness = mem_cgroup_swappiness(memcg); @@ -2333,7 +2362,7 @@ static void get_scan_count(struct lruvec enum lru_list lru; /* If we have no swap space, do not bother scanning anon pages. */ - if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) { + if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) { scan_balance = SCAN_FILE; goto out; } @@ -2708,7 +2737,7 @@ static inline bool should_continue_recla */ pages_for_compaction = compact_gap(sc->order); inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE); - if (get_nr_swap_pages() > 0) + if (can_reclaim_anon_pages(NULL, pgdat->node_id)) inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON); return inactive_lru_pages > pages_for_compaction; From patchwork Tue Jan 26 00:34:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Hansen X-Patchwork-Id: 12045013 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56AE9C433E0 for ; Tue, 26 Jan 2021 00:41:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0604D21D93 for ; Tue, 26 Jan 2021 00:41:58 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0604D21D93 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E0B078D006D; Mon, 25 Jan 2021 19:41:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D6A7D8D0065; Mon, 25 Jan 2021 19:41:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C333A8D006D; Mon, 25 Jan 2021 19:41:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0076.hostedemail.com [216.40.44.76]) by kanga.kvack.org (Postfix) with ESMTP id AA5E98D0065 for ; Mon, 25 Jan 2021 19:41:57 -0500 (EST) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 6F65A181AF5C4 for ; Tue, 26 Jan 2021 00:41:57 +0000 (UTC) X-FDA: 77746073874.19.books93_250eb412758a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin19.hostedemail.com (Postfix) with 
ESMTP id 4D95F1ACC22 for ; Tue, 26 Jan 2021 00:41:57 +0000 (UTC) X-HE-Tag: books93_250eb412758a X-Filterd-Recvd-Size: 5061 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by imf10.hostedemail.com (Postfix) with ESMTP for ; Tue, 26 Jan 2021 00:41:56 +0000 (UTC) IronPort-SDR: otjL5g+iNq3l18XfNmSTT49SjfJ9meUXfVSfO+kDYo2Id26Gef6iR00Kpt/frTUpJbTad2Qmhd 96zsCGAVWzgQ== X-IronPort-AV: E=McAfee;i="6000,8403,9875"; a="264650547" X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="264650547" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 16:41:55 -0800 IronPort-SDR: yjZzL65jbf8kL4CjrDWZSh8ZXuZY8FWi/yazcxGLjzSlYBgqp5wBe4v2QEJ/yuGBbMmwgrs0PI HgI3BhCipgXw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="361754925" Received: from viggo.jf.intel.com (HELO localhost.localdomain) ([10.54.77.144]) by fmsmga008.fm.intel.com with ESMTP; 25 Jan 2021 16:41:54 -0800 Subject: [RFC][PATCH 12/13] mm/vmscan: never demote for memcg reclaim To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org,Dave Hansen ,yang.shi@linux.alibaba.com,rientjes@google.com,ying.huang@intel.com,dan.j.williams@intel.com,david@redhat.com,osalvador@suse.de From: Dave Hansen Date: Mon, 25 Jan 2021 16:34:34 -0800 References: <20210126003411.2AC51464@viggo.jf.intel.com> In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com> Message-Id: <20210126003434.BD7626A8@viggo.jf.intel.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Dave Hansen Global reclaim aims to reduce the amount of memory used on a given node or set of nodes. Migrating pages to another node serves this purpose. memcg reclaim is different. Its goal is to reduce the total memory consumption of the entire memcg, across all nodes. Migration does not assist memcg reclaim because it just moves page contents between nodes rather than actually reducing memory consumption. 
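Combined with can_reclaim_anon_pages() from the previous patch, the intended policy reduces to a small predicate. The following standalone C sketch is not the kernel function, just a restatement of the decision order it implements: swap (global, or within the memcg's limit) always qualifies, while demotion qualifies only for global reclaim.

#include <stdbool.h>
#include <stdio.h>

/* "swap_ok" stands for get_nr_swap_pages() or
 * mem_cgroup_get_nr_swap_pages() finding room; "demote_ok" for
 * next_demotion_node() finding a target.  Illustrative names only. */
static bool can_reclaim_anon(bool cgroup_reclaim, bool swap_ok, bool demote_ok)
{
        if (swap_ok)
                return true;
        /* Demotion moves pages between nodes; it cannot lower a memcg's
         * total usage, so it only helps global (non-cgroup) reclaim. */
        if (!cgroup_reclaim && demote_ok)
                return true;
        return false;
}

int main(void)
{
        /* Print the full truth table of the policy. */
        for (int c = 0; c < 2; c++)
                for (int s = 0; s < 2; s++)
                        for (int d = 0; d < 2; d++)
                                printf("cgroup=%d swap=%d demote=%d -> %d\n",
                                       c, s, d, can_reclaim_anon(c, s, d));
        return 0;
}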
Signed-off-by: Dave Hansen Suggested-by: Yang Shi Cc: David Rientjes Cc: Huang Ying Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- b/mm/vmscan.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff -puN mm/vmscan.c~never-demote-for-memcg-reclaim mm/vmscan.c --- a/mm/vmscan.c~never-demote-for-memcg-reclaim 2021-01-25 16:23:19.180866685 -0800 +++ b/mm/vmscan.c 2021-01-25 16:23:19.185866685 -0800 @@ -290,7 +290,8 @@ static bool writeback_throttling_sane(st #endif static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg, - int node_id) + int node_id, + struct scan_control *sc) { if (memcg == NULL) { /* @@ -328,7 +329,7 @@ unsigned long zone_reclaimable_pages(str nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE); - if (can_reclaim_anon_pages(NULL, zone_to_nid(zone))) + if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL)) nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON); @@ -1065,7 +1066,8 @@ static enum page_references page_check_r return PAGEREF_RECLAIM; } -static bool migrate_demote_page_ok(struct page *page) +static bool migrate_demote_page_ok(struct page *page, + struct scan_control *sc) { int next_nid = next_demotion_node(page_to_nid(page)); @@ -1073,6 +1075,10 @@ static bool migrate_demote_page_ok(struc VM_BUG_ON_PAGE(PageHuge(page), page); VM_BUG_ON_PAGE(PageLRU(page), page); + /* It is pointless to do demotion in memcg reclaim */ + if (cgroup_reclaim(sc)) + return false; + if (next_nid == NUMA_NO_NODE) return false; if (PageTransHuge(page) && !thp_migration_supported()) @@ -1329,7 +1335,7 @@ retry: * Before reclaiming the page, try to relocate * its contents to another node. */ - if (do_demote_pass && migrate_demote_page_ok(page)) { + if (do_demote_pass && migrate_demote_page_ok(page, sc)) { list_add(&page->lru, &demote_pages); unlock_page(page); continue; @@ -2362,7 +2368,7 @@ static void get_scan_count(struct lruvec enum lru_list lru; /* If we have no swap space, do not bother scanning anon pages. 
*/ - if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) { + if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) { scan_balance = SCAN_FILE; goto out; } @@ -2737,7 +2743,7 @@ static inline bool should_continue_recla */ pages_for_compaction = compact_gap(sc->order); inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE); - if (can_reclaim_anon_pages(NULL, pgdat->node_id)) + if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc)) inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON); return inactive_lru_pages > pages_for_compaction; From patchwork Tue Jan 26 00:34:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Hansen X-Patchwork-Id: 12045015 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4DF88C433E6 for ; Tue, 26 Jan 2021 00:42:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F41F021D93 for ; Tue, 26 Jan 2021 00:42:00 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F41F021D93 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 61A418D006E; Mon, 25 Jan 2021 19:41:59 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 551638D0065; Mon, 25 Jan 2021 19:41:59 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 48FCD8D006E; Mon, 25 Jan 2021 19:41:59 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0223.hostedemail.com [216.40.44.223]) by kanga.kvack.org (Postfix) with ESMTP id 319988D0065 for ; Mon, 25 Jan 2021 19:41:59 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 01D961EE6 for ; Tue, 26 Jan 2021 00:41:59 +0000 (UTC) X-FDA: 77746073958.08.route34_17101752758a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin08.hostedemail.com (Postfix) with ESMTP id D7C691819E772 for ; Tue, 26 Jan 2021 00:41:58 +0000 (UTC) X-HE-Tag: route34_17101752758a X-Filterd-Recvd-Size: 6366 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Tue, 26 Jan 2021 00:41:58 +0000 (UTC) IronPort-SDR: SJIwVkKUhIRKR5s1S1B4wlXgTkGFIHz6w7qRgLBl+khW8SdIold8TLSIOnlxpNjlvBnHnlAr+W 1zL5RTPZlcCQ== X-IronPort-AV: E=McAfee;i="6000,8403,9875"; a="176315538" X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="176315538" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jan 2021 16:41:57 -0800 IronPort-SDR: BrniMOY76IriTCP3bbDTFbpZxbrLHYZMoDWs56ZstqsyDKsXXhrf39az3QEIwn1YlUUWZ1k3km K+1A/gEOdHbA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,375,1602572400"; d="scan'208";a="472556514" Received: from viggo.jf.intel.com 
(HELO localhost.localdomain) ([10.54.77.144]) by fmsmga001.fm.intel.com with ESMTP; 25 Jan 2021 16:41:56 -0800 Subject: [RFC][PATCH 13/13] mm/migrate: new zone_reclaim_mode to enable reclaim migration To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org,Dave Hansen ,yang.shi@linux.alibaba.com,rientjes@google.com,ying.huang@intel.com,dan.j.williams@intel.com,david@redhat.com,osalvador@suse.de From: Dave Hansen Date: Mon, 25 Jan 2021 16:34:36 -0800 References: <20210126003411.2AC51464@viggo.jf.intel.com> In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com> Message-Id: <20210126003436.80749D77@viggo.jf.intel.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Dave Hansen Some method is obviously needed to enable reclaim-based migration. Just like traditional autonuma, there will be some workloads that will benefit, like workloads with more "static" configurations where hot pages stay hot and cold pages stay cold. If pages come and go from the hot and cold sets, the benefits of this approach will be more limited. The benefits are truly workload-based and *not* hardware-based. We do not believe that there is a viable threshold where certain hardware configurations should have this mechanism enabled while others do not. To be conservative, earlier work defaulted to disable reclaim-based migration and did not include a mechanism to enable it. This proposes extending the existing "zone_reclaim_mode" (now really node_reclaim_mode) as a method to enable it. We are open to any alternative that allows end users to enable this mechanism or disable it if workload harm is detected (just like traditional autonuma). Signed-off-by: Dave Hansen Cc: Yang Shi Cc: David Rientjes Cc: Huang Ying Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- b/Documentation/admin-guide/sysctl/vm.rst | 9 +++++++++ b/include/linux/swap.h | 3 ++- b/include/uapi/linux/mempolicy.h | 1 + b/mm/vmscan.c | 6 ++++-- 4 files changed, 16 insertions(+), 3 deletions(-) diff -puN Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE Documentation/admin-guide/sysctl/vm.rst --- a/Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE 2021-01-25 16:23:43.721866624 -0800 +++ b/Documentation/admin-guide/sysctl/vm.rst 2021-01-25 16:23:43.732866624 -0800 @@ -971,6 +971,7 @@ This is value OR'ed together of 1 Zone reclaim on 2 Zone reclaim writes dirty pages out 4 Zone reclaim swaps pages +8 Zone reclaim migrates pages = =================================== zone_reclaim_mode is disabled by default. For file servers or workloads @@ -995,3 +996,11 @@ of other processes running on other node Allowing regular swap effectively restricts allocations to the local node unless explicitly overridden by memory policies or cpuset configurations. + +Page migration during reclaim is intended for systems with tiered memory +configurations. These systems have multiple types of memory with varied +performance characteristics instead of plain NUMA systems where the same +kind of memory is found at varied distances. Allowing page migration +during reclaim enables these systems to migrate pages from fast tiers to +slow tiers when the fast tier is under pressure. This migration is +performed before swap.
diff -puN include/linux/swap.h~RECLAIM_MIGRATE include/linux/swap.h --- a/include/linux/swap.h~RECLAIM_MIGRATE 2021-01-25 16:23:43.723866624 -0800 +++ b/include/linux/swap.h 2021-01-25 16:23:43.732866624 -0800 @@ -384,7 +384,8 @@ extern int sysctl_min_slab_ratio; static inline bool node_reclaim_enabled(void) { /* Is any node_reclaim_mode bit set? */ - return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP); + return node_reclaim_mode & (RECLAIM_ZONE |RECLAIM_WRITE| + RECLAIM_UNMAP|RECLAIM_MIGRATE); } extern void check_move_unevictable_pages(struct pagevec *pvec); diff -puN include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE include/uapi/linux/mempolicy.h --- a/include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE 2021-01-25 16:23:43.725866624 -0800 +++ b/include/uapi/linux/mempolicy.h 2021-01-25 16:23:43.732866624 -0800 @@ -69,5 +69,6 @@ enum { #define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */ #define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */ #define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */ +#define RECLAIM_MIGRATE (1<<3) /* Migrate to other nodes during reclaim */ #endif /* _UAPI_LINUX_MEMPOLICY_H */ diff -puN mm/vmscan.c~RECLAIM_MIGRATE mm/vmscan.c --- a/mm/vmscan.c~RECLAIM_MIGRATE 2021-01-25 16:23:43.728866624 -0800 +++ b/mm/vmscan.c 2021-01-25 16:23:43.734866624 -0800 @@ -1075,6 +1075,9 @@ static bool migrate_demote_page_ok(struc VM_BUG_ON_PAGE(PageHuge(page), page); VM_BUG_ON_PAGE(PageLRU(page), page); + if (!(node_reclaim_mode & RECLAIM_MIGRATE)) + return false; + /* It is pointless to do demotion in memcg reclaim */ if (cgroup_reclaim(sc)) return false; @@ -1084,8 +1087,7 @@ static bool migrate_demote_page_ok(struc if (PageTransHuge(page) && !thp_migration_supported()) return false; - // FIXME: actually enable this later in the series - return false; + return true; }
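With the whole series applied, reclaim migration stays off until the new bit is set. The sysctl file is the long-standing /proc/sys/vm/zone_reclaim_mode; only the bit value 8 comes from this patch. For the simple case this is equivalent to 'sysctl vm.zone_reclaim_mode=8'; the C sketch below instead ORs the bit in while preserving any bits already set (run as root):

#include <stdio.h>

#define RECLAIM_MIGRATE (1 << 3)        /* the new bit from this patch */

int main(void)
{
        unsigned int mode;
        FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "r+");

        if (!f || fscanf(f, "%u", &mode) != 1) {
                perror("zone_reclaim_mode");
                return 1;
        }
        rewind(f);      /* reposition before switching from read to write */
        fprintf(f, "%u\n", mode | RECLAIM_MIGRATE);
        fclose(f);
        printf("zone_reclaim_mode: %u -> %u\n", mode, mode | RECLAIM_MIGRATE);
        return 0;
}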