From patchwork Fri Jun 18 06:15:28 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330229
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen, "Huang, Ying", Yang Shi, Michal Hocko, Wei Xu, David Rientjes, Dan Williams, David Hildenbrand, osalvador
Subject: [PATCH -V8 01/10] mm/numa: node demotion data structure and lookup
Date: Fri, 18 Jun 2021 14:15:28 +0800
Message-Id: <20210618061537.434999-2-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>
From: Dave Hansen

Prepare for the kernel to auto-migrate pages to other memory nodes with a user-defined node migration table. This allows creating a single migration target for each NUMA node so that the kernel can do NUMA page migrations instead of simply reclaiming colder pages. A node with no target is a "terminal node", so reclaim acts normally there.

The migration target does not fundamentally _need_ to be a single node, but this implementation starts there to limit complexity. If you consider the migration path as a graph, cycles (loops) in the graph are disallowed. This avoids wasting resources by constantly migrating (A->B, B->A, A->B, ...). The expectation is that cycles will never be allowed.

Signed-off-by: Dave Hansen
Signed-off-by: "Huang, Ying"
Reviewed-by: Yang Shi
Cc: Michal Hocko
Cc: Wei Xu
Cc: David Rientjes
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---
changes since 20200122:
* Make node_demotion[] __read_mostly
changes in July 2020:
- Remove loop from next_demotion_node() and get_online_mems(). This means that the node returned by next_demotion_node() might now be offline, but the worst case is that the allocation fails. That's fine since it is transient.
---
 mm/migrate.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/mm/migrate.c b/mm/migrate.c index b234c3f3acb7..6cab668132f9 100644 --- a/mm/migrate.c +++ b/mm/migrate.c
@@ -1136,6 +1136,23 @@ static int __unmap_and_move(struct page *page, struct page *newpage, return rc; } +static int node_demotion[MAX_NUMNODES] __read_mostly = + {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE}; + +/** + * next_demotion_node() - Get the next node in the demotion path + * @node: The starting node to lookup the next node + * + * @returns: node id for next memory node in the demotion path hierarchy + * from @node; NUMA_NO_NODE if @node is terminal. This does not keep + * @node online or guarantee that it *continues* to be the next demotion + * target. + */ +int next_demotion_node(int node) +{ + return node_demotion[node]; +} + /* * Obtain the lock on page, remove all ptes and migrate the page * to the newly allocated page in newpage.
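[ illustration, not part of the patch: a hypothetical caller that walks the demotion path with the new lookup. A "terminal node" returns NUMA_NO_NODE, which ends the walk; patch 03 of this series discusses what keeps exactly this kind of loop from observing a cycle while the table is being rewritten. ]

    /* Hypothetical helper: find the terminal node of @node's demotion chain. */
    static int demotion_terminal_node(int node)
    {
            int next;

            while ((next = next_demotion_node(node)) != NUMA_NO_NODE)
                    node = next;

            return node;
    }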
From patchwork Fri Jun 18 06:15:29 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330233
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen, "Huang, Ying", Yang Shi, Michal Hocko, Wei Xu, David Rientjes, Dan Williams, David Hildenbrand, osalvador
Subject: [PATCH -V8 02/10] mm/numa: automatically generate node migration order
Date: Fri, 18 Jun 2021 14:15:29 +0800
Message-Id: <20210618061537.434999-3-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>
From: Dave Hansen

When memory fills up on a node, memory contents can be automatically migrated to another node. The biggest problems are knowing when to migrate and where the migration should be targeted.

The most straightforward way to generate the "to where" list would be to follow the page allocator fallback lists. Those lists already tell us where to look next when memory on one node is full. It would also be logical to move memory in that order.

But the allocator fallback lists have a fatal flaw: most nodes appear in all the lists. This would potentially lead to migration cycles (A->B, B->A, A->B, ...).

Instead of using the allocator fallback lists directly, keep a separate node migration ordering. But reuse the same data used to generate the page allocator fallback in the first place: find_next_best_node(). This means that the firmware data used to populate node distances essentially dictates the ordering for now. It should also be architecture-neutral since all NUMA architectures have a working find_next_best_node().

The protocol for node_demotion[] access and writing is not standard. It has no specific locking and is intended to be read locklessly. Readers must take care to avoid observing changes that appear incoherent. This was done so that node_demotion[] locking has no chance of becoming a bottleneck on large systems with lots of CPUs in direct reclaim.

This code is unused for now. It will be called later in the series.

Signed-off-by: Dave Hansen
Signed-off-by: "Huang, Ying"
Reviewed-by: Yang Shi
Cc: Michal Hocko
Cc: Wei Xu
Cc: David Rientjes
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---
Changes from 20200122:
* Add big node_demotion[] comment
Changes from 20210302:
* Fix typo in node_demotion[] comment
---
 mm/internal.h | 5 ++
 mm/migrate.c | 175 +++++++++++++++++++++++++++++++++++++++++++++++-
 mm/page_alloc.c | 2 +-
 3 files changed, 180 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h index 2f1182948aa6..0344cd78e170 100644 --- a/mm/internal.h +++ b/mm/internal.h
@@ -522,12 +522,17 @@ static inline void mminit_validate_memmodel_limits(unsigned long *start_pfn, #ifdef CONFIG_NUMA extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int); +extern int find_next_best_node(int node, nodemask_t *used_node_mask); #else static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask, unsigned int order) { return NODE_RECLAIM_NOSCAN; } +static inline int find_next_best_node(int node, nodemask_t *used_node_mask) +{ + return NUMA_NO_NODE; +} #endif extern int hwpoison_filter(struct page *p);

diff --git a/mm/migrate.c b/mm/migrate.c index 6cab668132f9..111f8565f75d 100644 --- a/mm/migrate.c +++ b/mm/migrate.c
@@ -1136,6 +1136,44 @@ static int __unmap_and_move(struct page *page, struct page *newpage, return rc; } + +/* + * node_demotion[] example: + * + * Consider a system with two sockets. Each socket has + * three classes of memory attached: fast, medium and slow. + * Each memory class is placed in its own NUMA node. The + * CPUs are placed in the node with the "fast" memory.
The + * 6 NUMA nodes (0-5) might be split among the sockets like + * this: + * + * Socket A: 0, 1, 2 + * Socket B: 3, 4, 5 + * + * When Node 0 fills up, its memory should be migrated to + * Node 1. When Node 1 fills up, it should be migrated to + * Node 2. The migration path starts on the nodes with the + * processors (since allocations default to this node) and + * fast memory, progresses through medium and ends with the + * slow memory: + * + * 0 -> 1 -> 2 -> stop + * 3 -> 4 -> 5 -> stop + * + * This is represented in the node_demotion[] like this: + * + * { 1, // Node 0 migrates to 1 + * 2, // Node 1 migrates to 2 + * -1, // Node 2 does not migrate + * 4, // Node 3 migrates to 4 + * 5, // Node 4 migrates to 5 + * -1} // Node 5 does not migrate + */ + +/* + * Writes to this array occur without locking. READ_ONCE() + * is recommended for readers to ensure consistent reads. + */ static int node_demotion[MAX_NUMNODES] __read_mostly = {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};
@@ -1150,7 +1188,13 @@ static int node_demotion[MAX_NUMNODES] __read_mostly = */ int next_demotion_node(int node) { - return node_demotion[node]; + /* + * node_demotion[] is updated without excluding + * this function from running. READ_ONCE() avoids + * reading multiple, inconsistent 'node' values + * during an update. + */ + return READ_ONCE(node_demotion[node]); } /*
@@ -3144,3 +3188,132 @@ void migrate_vma_finalize(struct migrate_vma *migrate) } EXPORT_SYMBOL(migrate_vma_finalize); #endif /* CONFIG_DEVICE_PRIVATE */ + +/* Disable reclaim-based migration. */ +static void disable_all_migrate_targets(void) +{ + int node; + + for_each_online_node(node) + node_demotion[node] = NUMA_NO_NODE; +} + +/* + * Find an automatic demotion target for 'node'. + * Failing here is OK. It might just indicate + * being at the end of a chain. + */ +static int establish_migrate_target(int node, nodemask_t *used) +{ + int migration_target; + + /* + * Cannot set a migration target on a + * node that already has one. + * + * No need for READ_ONCE() here since this + * is in the write path for node_demotion[]. + * This should be the only thread writing. + */ + if (node_demotion[node] != NUMA_NO_NODE) + return NUMA_NO_NODE; + + migration_target = find_next_best_node(node, used); + if (migration_target == NUMA_NO_NODE) + return NUMA_NO_NODE; + + node_demotion[node] = migration_target; + + return migration_target; +} + +/* + * When memory fills up on a node, memory contents can be + * automatically migrated to another node instead of + * discarded at reclaim. + * + * Establish a "migration path" which will start at nodes + * with CPUs and will follow the priorities used to build the + * page allocator zonelists. + * + * The difference here is that cycles must be avoided. If + * node0 migrates to node1, then neither node1, nor anything + * node1 migrates to can migrate to node0. + * + * This function can run simultaneously with readers of + * node_demotion[]. However, it can not run simultaneously + * with itself. Exclusion is provided by memory hotplug events + * being single-threaded. + */ +static void __set_migration_target_nodes(void) +{ + nodemask_t next_pass = NODE_MASK_NONE; + nodemask_t this_pass = NODE_MASK_NONE; + nodemask_t used_targets = NODE_MASK_NONE; + int node; + + /* + * Avoid any oddities like cycles that could occur + * from changes in the topology. This will leave + * a momentary gap when migration is disabled. + */ + disable_all_migrate_targets(); + + /* + * Ensure that the "disable" is visible across the system.
+ * Readers will see either a combination of before+disable + * state or disable+after. They will never see before and + * after state together. + * + * The before+after state together might have cycles and + * could cause readers to do things like loop until this + * function finishes. This ensures they can only see a + * single "bad" read and would, for instance, only loop + * once. + */ + smp_wmb(); + + /* + * Allocations go close to CPUs, first. Assume that + * the migration path starts at the nodes with CPUs. + */ + next_pass = node_states[N_CPU]; +again: + this_pass = next_pass; + next_pass = NODE_MASK_NONE; + /* + * To avoid cycles in the migration "graph", ensure + * that migration sources are not future targets by + * setting them in 'used_targets'. Do this only + * once per pass so that multiple source nodes can + * share a target node. + * + * 'used_targets' will become unavailable in future + * passes. This limits some opportunities for + * multiple source nodes to share a destination. + */ + nodes_or(used_targets, used_targets, this_pass); + for_each_node_mask(node, this_pass) { + int target_node = establish_migrate_target(node, &used_targets); + + if (target_node == NUMA_NO_NODE) + continue; + + /* Visit targets from this pass in the next pass: */ + node_set(target_node, next_pass); + } + /* Is another pass necessary? */ + if (!nodes_empty(next_pass)) + goto again; +} + +/* + * For callers that do not hold get_online_mems() already. + */ +__maybe_unused // <- temporary to prevent warnings during bisects +static void set_migration_target_nodes(void) +{ + get_online_mems(); + __set_migration_target_nodes(); + put_online_mems(); +}

diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d1f5de1c1283..e033ae2e8bce 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c
@@ -5973,7 +5973,7 @@ static int node_load[MAX_NUMNODES]; * * Return: node id of the found node or %NUMA_NO_NODE if no node is found.
*/ -static int find_next_best_node(int node, nodemask_t *used_node_mask) +int find_next_best_node(int node, nodemask_t *used_node_mask) { int n, val; int min_val = INT_MAX;
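[ illustration, not part of the patch: a self-contained userspace model of the multi-pass construction in __set_migration_target_nodes() above. next_best() is a toy stand-in for find_next_best_node(), hard-wired to the two-socket example topology from the node_demotion[] comment; all names here are hypothetical. ]

    #include <stdio.h>
    #include <stdbool.h>

    #define NR_NODES 6
    #define NO_NODE  (-1)

    /*
     * Toy stand-in for find_next_best_node(): for the example topology
     * (socket A = nodes 0-2, socket B = nodes 3-5), prefer the lowest
     * unused node on the same socket.
     */
    static int next_best(int node, const bool used[NR_NODES])
    {
            int base = (node / 3) * 3;

            for (int n = base; n < base + 3; n++)
                    if (n != node && !used[n])
                            return n;
            return NO_NODE;
    }

    int main(void)
    {
            int demotion[NR_NODES];
            bool used[NR_NODES] = { false };
            bool this_pass[NR_NODES];
            bool next_pass[NR_NODES] = { [0] = true, [3] = true }; /* CPU nodes */
            bool more = true;

            for (int n = 0; n < NR_NODES; n++)
                    demotion[n] = NO_NODE;

            while (more) {
                    more = false;
                    for (int n = 0; n < NR_NODES; n++) {
                            this_pass[n] = next_pass[n];
                            next_pass[n] = false;
                            /* sources of this pass may never become targets */
                            used[n] |= this_pass[n];
                    }
                    for (int n = 0; n < NR_NODES; n++) {
                            int t;

                            if (!this_pass[n] || demotion[n] != NO_NODE)
                                    continue;
                            t = next_best(n, used);
                            if (t == NO_NODE)
                                    continue;
                            demotion[n] = t;
                            used[t] = true;     /* find_next_best_node() marks this in the kernel */
                            next_pass[t] = true; /* targets become the next pass's sources */
                            more = true;
                    }
            }

            for (int n = 0; n < NR_NODES; n++)
                    printf("node %d -> %d\n", n, demotion[n]);
            return 0;
    }

Run as-is, this prints the table from the comment above: 0 -> 1 -> 2 and 3 -> 4 -> 5, with nodes 2 and 5 terminal (-1).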
From patchwork Fri Jun 18 06:15:30 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330235
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen, "Huang, Ying", Yang Shi, Michal Hocko, Wei Xu, David Rientjes, Dan Williams, David Hildenbrand, osalvador
Subject: [PATCH -V8 03/10] mm/migrate: update node demotion order on hotplug events
Date: Fri, 18 Jun 2021 14:15:30 +0800
Message-Id: <20210618061537.434999-4-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

From: Dave Hansen

Reclaim-based migration is attempting to optimize data placement in memory based on the system topology. If the system changes, so must the migration ordering.

The implementation is conceptually simple and entirely unoptimized. On any memory or CPU hotplug event, assume that a node was added or removed and recalculate all migration targets. This ensures that the node_demotion[] array is always ready to be used in case the new reclaim mode is enabled.

This recalculation is far from optimal, most glaringly in that it does not even attempt to figure out whether the hotplug event would have any *actual* effect on the demotion order. But, given the expected paucity of hotplug events, this should be fine.

=== What does RCU provide? ===

Imagine a simple loop which walks down the demotion path looking for the last node:

    terminal_node = start_node;
    while (node_demotion[terminal_node] != NUMA_NO_NODE) {
            terminal_node = node_demotion[terminal_node];
    }

The initial values are:

    node_demotion[0] = 1;
    node_demotion[1] = NUMA_NO_NODE;

and are updated to:

    node_demotion[0] = NUMA_NO_NODE;
    node_demotion[1] = 0;

What guarantees that the loop did not observe:

    node_demotion[0] = 1;
    node_demotion[1] = 0;

and would loop forever?

With RCU, an rcu_read_lock()/rcu_read_unlock() pair can be placed around the loop. Since the write side does a synchronize_rcu(), a loop that observed the old contents is known to be complete after the synchronize_rcu() has completed. RCU, combined with disable_all_migrate_targets(), ensures that the old migration state is not visible by the time __set_migration_target_nodes() is called.

=== What does READ_ONCE() provide? ===

READ_ONCE() forbids the compiler from merging or reordering successive reads of node_demotion[]. This ensures that any updates are *eventually* observed.

Consider the above loop again. The compiler could theoretically read the entirety of node_demotion[] into local storage (registers) and never go back to memory, and *permanently* observe bad values for node_demotion[].

Note: RCU does not provide any universal compiler-ordering guarantees: https://lore.kernel.org/lkml/20150921204327.GH4029@linux.vnet.ibm.com/

Signed-off-by: Dave Hansen
Signed-off-by: "Huang, Ying"
Reviewed-by: Yang Shi
Cc: Michal Hocko
Cc: Wei Xu
Cc: David Rientjes
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---
Changes since 20210302:
* remove duplicate synchronize_rcu()
---
 mm/migrate.c | 152 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 129 insertions(+), 23 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c index 111f8565f75d..0aad54d6c8f9 100644 --- a/mm/migrate.c +++ b/mm/migrate.c
@@ -49,6 +49,7 @@ #include #include #include +#include #include
@@ -1171,8 +1172,12 @@ static int __unmap_and_move(struct page *page, struct page *newpage, */ /* - * Writes to this array occur without locking.
READ_ONCE() - * is recommended for readers to ensure consistent reads. + * Writes to this array occur without locking. Cycles are + * not allowed: Node X demotes to Y which demotes to X... + * + * If multiple reads are performed, a single rcu_read_lock() + * must be held over all reads to ensure that no cycles are + * observed. */ static int node_demotion[MAX_NUMNODES] __read_mostly = {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};
@@ -1188,13 +1193,22 @@ static int node_demotion[MAX_NUMNODES] __read_mostly = */ int next_demotion_node(int node) { + int target; + /* - * node_demotion[] is updated without excluding - * this function from running. READ_ONCE() avoids - * reading multiple, inconsistent 'node' values - * during an update. + * node_demotion[] is updated without excluding this + * function from running. RCU doesn't provide any + * compiler barriers, so the READ_ONCE() is required + * to avoid compiler reordering or read merging. + * + * Make sure to use RCU over entire code blocks if + * node_demotion[] reads need to be consistent. */ - return READ_ONCE(node_demotion[node]); + rcu_read_lock(); + target = READ_ONCE(node_demotion[node]); + rcu_read_unlock(); + + return target; } /*
@@ -3189,8 +3203,9 @@ void migrate_vma_finalize(struct migrate_vma *migrate) EXPORT_SYMBOL(migrate_vma_finalize); #endif /* CONFIG_DEVICE_PRIVATE */ +#if defined(CONFIG_MEMORY_HOTPLUG) /* Disable reclaim-based migration. */ -static void disable_all_migrate_targets(void) +static void __disable_all_migrate_targets(void) { int node;
@@ -3198,6 +3213,25 @@ static void disable_all_migrate_targets(void) node_demotion[node] = NUMA_NO_NODE; } +static void disable_all_migrate_targets(void) +{ + __disable_all_migrate_targets(); + + /* + * Ensure that the "disable" is visible across the system. + * Readers will see either a combination of before+disable + * state or disable+after. They will never see before and + * after state together. + * + * The before+after state together might have cycles and + * could cause readers to do things like loop until this + * function finishes. This ensures they can only see a + * single "bad" read and would, for instance, only loop + * once. + */ + synchronize_rcu(); +} + /* * Find an automatic demotion target for 'node'. * Failing here is OK. It might just indicate
@@ -3259,20 +3293,6 @@ static void __set_migration_target_nodes(void) */ disable_all_migrate_targets(); - /* - * Ensure that the "disable" is visible across the system. - * Readers will see either a combination of before+disable - * state or disable+after. They will never see before and - * after state together. - * - * The before+after state together might have cycles and - * could cause readers to do things like loop until this - * function finishes. This ensures they can only see a - * single "bad" read and would, for instance, only loop - * once. - */ - smp_wmb(); - /* * Allocations go close to CPUs, first. Assume that * the migration path starts at the nodes with CPUs.
@@ -3310,10 +3330,96 @@ static void __set_migration_target_nodes(void) /* * For callers that do not hold get_online_mems() already. */ -__maybe_unused // <- temporary to prevent warnings during bisects static void set_migration_target_nodes(void) { get_online_mems(); __set_migration_target_nodes(); put_online_mems(); } + +/* + * React to hotplug events that might affect the migration targets + * like events that online or offline NUMA nodes. + * + * The ordering is also currently dependent on which nodes have + * CPUs.
That means we need CPU on/offline notification too. + */ +static int migration_online_cpu(unsigned int cpu) +{ + set_migration_target_nodes(); + return 0; +} + +static int migration_offline_cpu(unsigned int cpu) +{ + set_migration_target_nodes(); + return 0; +} + +/* + * This leaves migrate-on-reclaim transiently disabled between + * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs + * whether reclaim-based migration is enabled or not, which + * ensures that the user can turn reclaim-based migration on + * or off at any time without needing to recalculate migration + * targets. + * + * These callbacks already hold get_online_mems(). That is why + * __set_migration_target_nodes() can be used as opposed to + * set_migration_target_nodes(). + */ +static int __meminit migrate_on_reclaim_callback(struct notifier_block *self, + unsigned long action, void *arg) +{ + switch (action) { + case MEM_GOING_OFFLINE: + /* + * Make sure there are not transient states where + * an offline node is a migration target. This + * will leave migration disabled until the offline + * completes and the MEM_OFFLINE case below runs. + */ + disable_all_migrate_targets(); + break; + case MEM_OFFLINE: + case MEM_ONLINE: + /* + * Recalculate the target nodes once the node + * reaches its final state (online or offline). + */ + __set_migration_target_nodes(); + break; + case MEM_CANCEL_OFFLINE: + /* + * MEM_GOING_OFFLINE disabled all the migration + * targets. Reenable them. + */ + __set_migration_target_nodes(); + break; + case MEM_GOING_ONLINE: + case MEM_CANCEL_ONLINE: + break; + } + + return notifier_from_errno(0); +} + +static int __init migrate_on_reclaim_init(void) +{ + int ret; + + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim", + migration_online_cpu, + migration_offline_cpu); + /* + * In the unlikely case that this fails, the automatic + * migration targets may become suboptimal for nodes + * where N_CPU changes. With such a small impact in a + * rare case, do not bother trying to do anything special.
+ */ + WARN_ON(ret < 0); + + hotplug_memory_notifier(migrate_on_reclaim_callback, 100); + return 0; +} +late_initcall(migrate_on_reclaim_init); +#endif /* CONFIG_MEMORY_HOTPLUG */
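[ illustration, not part of the patch: a hypothetical reader in mm/migrate.c (where node_demotion[] is in scope) that follows the locking rules described above, with one RCU read-side critical section spanning the whole walk and READ_ONCE() on every load. ]

    /* Hypothetical: walk from @node to its terminal node. */
    static int demotion_terminal_node(int node)
    {
            int next;

            rcu_read_lock();
            /*
             * The updater's synchronize_rcu() sits between disabling all
             * targets and repopulating them, so a walk enclosed in a single
             * read-side section sees old entries and/or NUMA_NO_NODE, never
             * a mix of old and new entries that could form a cycle.
             * READ_ONCE() keeps the compiler from caching or merging loads.
             */
            while ((next = READ_ONCE(node_demotion[node])) != NUMA_NO_NODE)
                    node = next;
            rcu_read_unlock();

            return node;
    }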
From patchwork Fri Jun 18 06:15:31 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330237
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Yang Shi, Dave Hansen, "Huang, Ying", Michal Hocko, Wei Xu, Dan Williams, David Hildenbrand, osalvador
Subject: [PATCH -V8 04/10] mm/migrate: make migrate_pages() return nr_succeeded
Date: Fri, 18 Jun 2021 14:15:31 +0800
Message-Id: <20210618061537.434999-5-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

From: Yang Shi

migrate_pages() returns the number of pages that were not migrated, or an error code. When returning an error code, there is no way to know how many pages were migrated or not. In the following patch, migrate_pages() is used to demote pages to a PMEM node; we need to account for how many pages are reclaimed (demoted) since page reclaim behavior depends on this. Add an *nr_succeeded parameter to make migrate_pages() return how many pages are migrated successfully in all cases.

Signed-off-by: Yang Shi
Signed-off-by: Dave Hansen
Signed-off-by: "Huang, Ying"
Reviewed-by: Yang Shi
Cc: Michal Hocko
Cc: Wei Xu
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---
Note: Yang Shi originally wrote the patch, thus the SoB. There was also a Reviewed-by provided since there were some modifications made to this after the original work.

Changes since 20210302:
* Fix definition of CONFIG_MIGRATION=n stub migrate_pages(). Its parameters were wrong, but oddly enough did not generate any compile errors.
Changes since 20200122:
* Fix migrate_pages() to manipulate nr_succeeded *value* rather than the pointer.
---
 include/linux/migrate.h | 5 +++--
 mm/compaction.c | 3 ++-
 mm/gup.c | 3 ++-
 mm/memory-failure.c | 4 +++-
 mm/memory_hotplug.c | 4 +++-
 mm/mempolicy.c | 8 ++++++--
 mm/migrate.c | 19 +++++++++++--------
 mm/page_alloc.c | 9 ++++++---
 8 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 4bb4e519e3f5..4a49bb358787 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h
@@ -41,7 +41,8 @@ extern int migrate_page(struct address_space *mapping, struct page *newpage, struct page *page, enum migrate_mode mode); extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free, - unsigned long private, enum migrate_mode mode, int reason); + unsigned long private, enum migrate_mode mode, int reason, + unsigned int *nr_succeeded); extern struct page *alloc_migration_target(struct page *page, unsigned long private); extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
@@ -56,7 +57,7 @@ extern int migrate_page_move_mapping(struct address_space *mapping, static inline void putback_movable_pages(struct list_head *l) {} static inline int migrate_pages(struct list_head *l, new_page_t new, free_page_t free, unsigned long private, enum migrate_mode mode, - int reason) + int reason, unsigned int *nr_succeeded) { return -ENOSYS; } static inline struct page *alloc_migration_target(struct page *page, unsigned long private)

diff --git a/mm/compaction.c b/mm/compaction.c index 84fde270ae74..43830a025fc1 100644 --- a/mm/compaction.c +++ b/mm/compaction.c
@@ -2282,6 +2282,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc) unsigned long last_migrated_pfn; const bool sync = cc->mode != MIGRATE_ASYNC; bool update_cached; + unsigned int
nr_succeeded = 0; /* * These counters track activities during zone compaction. Initialize @@ -2400,7 +2401,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc) err = migrate_pages(&cc->migratepages, compaction_alloc, compaction_free, (unsigned long)cc, cc->mode, - MR_COMPACTION); + MR_COMPACTION, &nr_succeeded); trace_mm_compaction_migratepages(cc->nr_migratepages, err, &cc->migratepages); diff --git a/mm/gup.c b/mm/gup.c index 3ded6a5f26b2..5da01e921142 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -1611,6 +1611,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages, unsigned long i; unsigned long isolation_error_count = 0; bool drain_allow = true; + unsigned int nr_succeeded = 0; LIST_HEAD(movable_page_list); long ret = 0; struct page *prev_head = NULL; @@ -1668,7 +1669,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages, if (!list_empty(&movable_page_list)) { ret = migrate_pages(&movable_page_list, alloc_migration_target, NULL, (unsigned long)&mtc, MIGRATE_SYNC, - MR_LONGTERM_PIN); + MR_LONGTERM_PIN, &nr_succeeded); if (ret && !list_empty(&movable_page_list)) putback_movable_pages(&movable_page_list); } diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 85ad98c00fd9..0c24575ab97f 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1809,6 +1809,7 @@ static int __soft_offline_page(struct page *page) unsigned long pfn = page_to_pfn(page); struct page *hpage = compound_head(page); char const *msg_page[] = {"page", "hugepage"}; + unsigned int nr_succeeded = 0; bool huge = PageHuge(page); LIST_HEAD(pagelist); struct migration_target_control mtc = { @@ -1852,7 +1853,8 @@ static int __soft_offline_page(struct page *page) if (isolate_page(hpage, &pagelist)) { ret = migrate_pages(&pagelist, alloc_migration_target, NULL, - (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE); + (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, + &nr_succeeded); if (!ret) { bool release = !huge; diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 70620d0dd923..b6f4ec6b82f5 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1520,6 +1520,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn) unsigned long pfn; struct page *page, *head; int ret = 0; + unsigned int nr_succeeded = 0; LIST_HEAD(source); for (pfn = start_pfn; pfn < end_pfn; pfn++) { @@ -1594,7 +1595,8 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn) if (nodes_empty(nmask)) node_set(mtc.nid, nmask); ret = migrate_pages(&source, alloc_migration_target, NULL, - (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG); + (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG, + &nr_succeeded); if (ret) { list_for_each_entry(page, &source, lru) { pr_warn("migrating pfn %lx failed ret:%d ", diff --git a/mm/mempolicy.c b/mm/mempolicy.c index d79fa299b70c..25dceedbb884 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1081,6 +1081,7 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist, static int migrate_to_node(struct mm_struct *mm, int source, int dest, int flags) { + unsigned int nr_succeeded = 0; nodemask_t nmask; LIST_HEAD(pagelist); int err = 0; @@ -1103,7 +1104,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest, if (!list_empty(&pagelist)) { err = migrate_pages(&pagelist, alloc_migration_target, NULL, - (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL); + (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, + &nr_succeeded); if (err) putback_movable_pages(&pagelist); } @@ -1280,6 +1282,7 
@@ static long do_mbind(unsigned long start, unsigned long len, nodemask_t *nmask, unsigned long flags) { struct mm_struct *mm = current->mm; + unsigned int nr_succeeded = 0; struct mempolicy *new; unsigned long end; int err; @@ -1357,7 +1360,8 @@ static long do_mbind(unsigned long start, unsigned long len, if (!list_empty(&pagelist)) { WARN_ON_ONCE(flags & MPOL_MF_LAZY); nr_failed = migrate_pages(&pagelist, new_page, NULL, - start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND); + start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, + &nr_succeeded); if (nr_failed) putback_movable_pages(&pagelist); } diff --git a/mm/migrate.c b/mm/migrate.c index 0aad54d6c8f9..a9b90ec28dfd 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1466,6 +1466,7 @@ static inline int try_split_thp(struct page *page, struct page **page2, * @mode: The migration mode that specifies the constraints for * page migration, if any. * @reason: The reason for page migration. + * @nr_succeeded: The number of pages migrated successfully. * * The function returns after 10 attempts or if no pages are movable any more * because the list has become empty or no retryable pages exist any more. @@ -1476,12 +1477,11 @@ static inline int try_split_thp(struct page *page, struct page **page2, */ int migrate_pages(struct list_head *from, new_page_t get_new_page, free_page_t put_new_page, unsigned long private, - enum migrate_mode mode, int reason) + enum migrate_mode mode, int reason, unsigned int *nr_succeeded) { int retry = 1; int thp_retry = 1; int nr_failed = 0; - int nr_succeeded = 0; int nr_thp_succeeded = 0; int nr_thp_failed = 0; int nr_thp_split = 0; @@ -1586,10 +1586,10 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, case MIGRATEPAGE_SUCCESS: if (is_thp) { nr_thp_succeeded++; - nr_succeeded += nr_subpages; + *nr_succeeded += nr_subpages; break; } - nr_succeeded++; + (*nr_succeeded)++; break; default: /* @@ -1618,12 +1618,12 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, */ list_splice(&ret_pages, from); - count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded); + count_vm_events(PGMIGRATE_SUCCESS, *nr_succeeded); count_vm_events(PGMIGRATE_FAIL, nr_failed); count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded); count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed); count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split); - trace_mm_migrate_pages(nr_succeeded, nr_failed, nr_thp_succeeded, + trace_mm_migrate_pages(*nr_succeeded, nr_failed, nr_thp_succeeded, nr_thp_failed, nr_thp_split, mode, reason); if (!swapwrite) @@ -1691,6 +1691,7 @@ static int store_status(int __user *status, int start, int value, int nr) static int do_move_pages_to_node(struct mm_struct *mm, struct list_head *pagelist, int node) { + unsigned int nr_succeeded = 0; int err; struct migration_target_control mtc = { .nid = node, @@ -1698,7 +1699,8 @@ static int do_move_pages_to_node(struct mm_struct *mm, }; err = migrate_pages(pagelist, alloc_migration_target, NULL, - (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL); + (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, + &nr_succeeded); if (err) putback_movable_pages(pagelist); return err; @@ -2172,6 +2174,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma, pg_data_t *pgdat = NODE_DATA(node); int isolated; int nr_remaining; + unsigned int nr_succeeded = 0; LIST_HEAD(migratepages); /* @@ -2196,7 +2199,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma, list_add(&page->lru, &migratepages); nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page, 
NULL, node, MIGRATE_ASYNC, - MR_NUMA_MISPLACED); + MR_NUMA_MISPLACED, &nr_succeeded); if (nr_remaining) { if (!list_empty(&migratepages)) { list_del(&page->lru);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e033ae2e8bce..edd50c46440e 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c
@@ -8668,7 +8668,8 @@ static inline void alloc_contig_dump_pages(struct list_head *page_list) /* [start, end) must belong to a single zone. */ static int __alloc_contig_migrate_range(struct compact_control *cc, - unsigned long start, unsigned long end) + unsigned long start, unsigned long end, + unsigned int *nr_succeeded) { /* This function is based on compact_zone() from compaction.c. */ unsigned int nr_reclaimed;
@@ -8705,7 +8706,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc, cc->nr_migratepages -= nr_reclaimed; ret = migrate_pages(&cc->migratepages, alloc_migration_target, - NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE); + NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE, + nr_succeeded); /* * On -ENOMEM, migrate_pages() bails out right away. It is pointless
@@ -8751,6 +8753,7 @@ int alloc_contig_range(unsigned long start, unsigned long end, unsigned long outer_start, outer_end; unsigned int order; int ret = 0; + unsigned int nr_succeeded = 0; struct compact_control cc = { .nr_migratepages = 0,
@@ -8805,7 +8808,7 @@ int alloc_contig_range(unsigned long start, unsigned long end, * allocated. So, if we fall through be sure to clear ret so that * -EBUSY is not accidentally used or returned to caller. */ - ret = __alloc_contig_migrate_range(&cc, start, end); + ret = __alloc_contig_migrate_range(&cc, start, end, &nr_succeeded); if (ret && ret != -EBUSY) goto done; ret = 0;
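[ illustration, not part of the patch: a hypothetical caller of the new migrate_pages() signature. The point of the change is that *nr_succeeded reports progress even when the return value is an error code or a count of failed pages. struct migration_target_control lives in mm/internal.h, so this sketch assumes mm/ context. ]

    /* Hypothetical: migrate a list and report how many pages moved. */
    static unsigned int migrate_list_and_count(struct list_head *pages,
                                               struct migration_target_control *mtc)
    {
            unsigned int nr_succeeded = 0;
            int err;

            err = migrate_pages(pages, alloc_migration_target, NULL,
                                (unsigned long)mtc, MIGRATE_SYNC,
                                MR_SYSCALL, &nr_succeeded);
            if (err)
                    /* some pages were not migrated; put them back */
                    putback_movable_pages(pages);

            /* valid even on error, unlike the return value alone */
            return nr_succeeded;
    }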
From patchwork Fri Jun 18 06:15:32 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330239
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen, "Huang, Ying", Michal Hocko, Wei Xu, Yang Shi, David Rientjes, Dan Williams, osalvador
Subject: [PATCH -V8 05/10] mm/migrate: demote pages during reclaim
Date: Fri, 18 Jun 2021 14:15:32 +0800
Message-Id: <20210618061537.434999-6-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

From: Dave Hansen

This is mostly derived from a patch from Yang Shi:
https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/

Add code to the reclaim path (shrink_page_list()) to "demote" data to another NUMA node instead of discarding the data. This always avoids the cost of I/O needed to read the page back in, and sometimes avoids the writeout cost when the page is dirty.

A second pass through shrink_page_list() will be made if any demotions fail. This essentially falls back to normal reclaim behavior in the case that demotions fail. Previous versions of this patch may have simply failed to reclaim pages which were eligible for demotion but were unable to be demoted in practice.

Note: This just adds the start of the infrastructure for migration. It is actually disabled next to the FIXME in migrate_demote_page_ok().

Signed-off-by: Dave Hansen
Signed-off-by: "Huang, Ying"
Cc: Michal Hocko
Cc: Wei Xu
Cc: Yang Shi
Cc: David Rientjes
Cc: Dan Williams
Cc: osalvador
---
changes from 20210122:
* move from GFP_HIGHUSER -> GFP_HIGHUSER_MOVABLE (Ying)
changes from 202010:
* add MR_NUMA_MISPLACED to trace MIGRATE_REASON define
* make migrate_demote_page_ok() static, remove 'sc' arg until later patch
* remove unnecessary alloc_demote_page() hugetlb warning
* Simplify alloc_demote_page() gfp mask.
Depend on __GFP_NORETRY to make it lightweight instead of fancier stuff like leaving out __GFP_IO/FS.
* Allocate migration page with alloc_migration_target() instead of allocating directly.
changes from 20200730:
* Add another pass through shrink_page_list() when demotion fails.
changes from 20210302:
* Use __GFP_THISNODE and revise the comment explaining the GFP mask construction
---
 include/linux/migrate.h | 9 ++++
 include/trace/events/migrate.h | 3 +-
 mm/vmscan.c | 83 ++++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 4a49bb358787..42952cbe452b 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h
@@ -28,6 +28,7 @@ enum migrate_reason { MR_NUMA_MISPLACED, MR_CONTIG_RANGE, MR_LONGTERM_PIN, + MR_DEMOTION, MR_TYPES };
@@ -191,6 +192,14 @@ struct migrate_vma { int migrate_vma_setup(struct migrate_vma *args); void migrate_vma_pages(struct migrate_vma *migrate); void migrate_vma_finalize(struct migrate_vma *migrate); +int next_demotion_node(int node); + +#else /* CONFIG_MIGRATION disabled: */ + +static inline int next_demotion_node(int node) +{ + return NUMA_NO_NODE; +} #endif /* CONFIG_MIGRATION */

diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h index 9fb2a3bbcdfb..779f3fad9ecd 100644 --- a/include/trace/events/migrate.h +++ b/include/trace/events/migrate.h
@@ -21,7 +21,8 @@ EM( MR_MEMPOLICY_MBIND, "mempolicy_mbind") \ EM( MR_NUMA_MISPLACED, "numa_misplaced") \ EM( MR_CONTIG_RANGE, "contig_range") \ - EMe(MR_LONGTERM_PIN, "longterm_pin") + EM( MR_LONGTERM_PIN, "longterm_pin") \ + EMe(MR_DEMOTION, "demotion") /* * First define the enums in the above macros to be exported to userspace

diff --git a/mm/vmscan.c b/mm/vmscan.c index 5199b9696bab..ddda32031f0c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c
@@ -41,6 +41,7 @@ #include #include #include +#include #include #include #include
@@ -1231,6 +1232,23 @@ static enum page_references page_check_references(struct page *page, return PAGEREF_RECLAIM; } +static bool migrate_demote_page_ok(struct page *page) +{ + int next_nid = next_demotion_node(page_to_nid(page)); + + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE(PageHuge(page), page); + VM_BUG_ON_PAGE(PageLRU(page), page); + + if (next_nid == NUMA_NO_NODE) + return false; + if (PageTransHuge(page) && !thp_migration_supported()) + return false; + + // FIXME: actually enable this later in the series + return false; +} + /* Check if a page is dirty or under writeback */ static void page_check_dirty_writeback(struct page *page, bool *dirty, bool *writeback)
@@ -1261,6 +1279,47 @@ static void page_check_dirty_writeback(struct page *page, mapping->a_ops->is_dirty_writeback(page, dirty, writeback); } +static struct page *alloc_demote_page(struct page *page, unsigned long node) +{ + struct migration_target_control mtc = { + /* + * Allocate from 'node', or fail quickly and quietly. + * When this happens, 'page' will likely just be discarded + * instead of migrated. + */ + .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | + __GFP_THISNODE | __GFP_NOWARN | + __GFP_NOMEMALLOC | GFP_NOWAIT, + .nid = node + }; + + return alloc_migration_target(page, (unsigned long)&mtc); +} + +/* + * Take pages on @demote_pages and attempt to demote them to + * another node. Pages which are not demoted are left on + * @demote_pages.
+ */ +static unsigned int demote_page_list(struct list_head *demote_pages, + struct pglist_data *pgdat, + struct scan_control *sc) +{ + int target_nid = next_demotion_node(pgdat->node_id); + unsigned int nr_succeeded = 0; + int err; + + if (list_empty(demote_pages)) + return 0; + + /* Demotion ignores all cpuset and mempolicy settings */ + err = migrate_pages(demote_pages, alloc_demote_page, NULL, + target_nid, MIGRATE_ASYNC, MR_DEMOTION, + &nr_succeeded); + + return nr_succeeded; +} + /* * shrink_page_list() returns the number of reclaimed pages */
@@ -1272,12 +1331,15 @@ static unsigned int shrink_page_list(struct list_head *page_list, { LIST_HEAD(ret_pages); LIST_HEAD(free_pages); + LIST_HEAD(demote_pages); unsigned int nr_reclaimed = 0; unsigned int pgactivate = 0; + bool do_demote_pass = true; memset(stat, 0, sizeof(*stat)); cond_resched(); +retry: while (!list_empty(page_list)) { struct address_space *mapping; struct page *page;
@@ -1426,6 +1488,16 @@ static unsigned int shrink_page_list(struct list_head *page_list, ; /* try to reclaim the page below */ } + /* + * Before reclaiming the page, try to relocate + * its contents to another node. + */ + if (do_demote_pass && migrate_demote_page_ok(page)) { + list_add(&page->lru, &demote_pages); + unlock_page(page); + continue; + } + /* * Anonymous process memory has backing store? * Try to allocate it some swap space here.
@@ -1676,6 +1748,17 @@ static unsigned int shrink_page_list(struct list_head *page_list, list_add(&page->lru, &ret_pages); VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page); } + /* 'page_list' is always empty here */ + + /* Migrate pages selected for demotion */ + nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc); + /* Pages that could not be demoted are still in @demote_pages */ + if (!list_empty(&demote_pages)) { + /* Pages which failed to be demoted go back on @page_list for retry: */ + list_splice_init(&demote_pages, page_list); + do_demote_pass = false; + goto retry; + } pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
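[ illustration, not part of the patch: a self-contained userspace model of the retry flow added to shrink_page_list() above. Demotable pages are deferred to a demote list on the first pass; pages whose demotion fails are pushed back and retried with demotion disabled, i.e. reclaimed normally. All names and the page flags are hypothetical. ]

    #include <stdio.h>
    #include <stdbool.h>

    #define NR_PAGES 6

    struct page { bool demotable; bool demotion_fails; bool done; };

    int main(void)
    {
            struct page pages[NR_PAGES] = {
                    /* { demotable, demotion_fails } */
                    { true, false }, { true, true }, { false },
                    { true, false }, { false }, { true, true },
            };
            bool do_demote_pass = true;     /* like shrink_page_list() */
            unsigned int nr_reclaimed = 0, nr_demoted = 0;
            bool retry = true;

            while (retry) {
                    int demote[NR_PAGES], nr_demote = 0;

                    retry = false;
                    for (int i = 0; i < NR_PAGES; i++) {
                            if (pages[i].done)
                                    continue;
                            if (do_demote_pass && pages[i].demotable) {
                                    demote[nr_demote++] = i;  /* defer */
                                    continue;
                            }
                            pages[i].done = true;   /* normal reclaim */
                            nr_reclaimed++;
                    }
                    /* "Migrate pages selected for demotion" */
                    for (int i = 0; i < nr_demote; i++) {
                            struct page *p = &pages[demote[i]];

                            if (!p->demotion_fails) {
                                    p->done = true;
                                    nr_demoted++;
                                    nr_reclaimed++; /* demotion counts as reclaim */
                            } else {
                                    retry = true;   /* back on the page list */
                            }
                    }
                    do_demote_pass = false; /* demotion is attempted only once */
            }

            printf("demoted=%u reclaimed=%u\n", nr_demoted, nr_reclaimed);
            return 0;
    }

With the sample flags above this prints demoted=2 reclaimed=6: two pages demote, two pages fail demotion and are reclaimed normally on the second pass.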
(EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0110.hostedemail.com [216.40.44.110]) by kanga.kvack.org (Postfix) with ESMTP id 6F7436B0078 for ; Fri, 18 Jun 2021 02:16:25 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 19A28180AD817 for ; Fri, 18 Jun 2021 06:16:25 +0000 (UTC) X-FDA: 78265835130.29.CF053CE Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by imf23.hostedemail.com (Postfix) with ESMTP id 31CAFA0009EE for ; Fri, 18 Jun 2021 06:16:24 +0000 (UTC) IronPort-SDR: FiKeN3GhDl2MX3xg4UB5IerrSuIgaPOE7oP3kacyFLWef5NLAZnAP0RQtwyDQ19wh++/9lTahx PD2Twko/Rf8Q== X-IronPort-AV: E=McAfee;i="6200,9189,10018"; a="228027479" X-IronPort-AV: E=Sophos;i="5.83,283,1616482800"; d="scan'208";a="228027479" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2021 23:16:23 -0700 IronPort-SDR: IUjkEwm0XElNDp6FrclSjaclAFLl32enZKn+q4lFKRahTklE3wGaTOj+hfzFKzAcun2pHt+Kis 2USYtyIb90gQ== X-IronPort-AV: E=Sophos;i="5.83,283,1616482800"; d="scan'208";a="485573673" Received: from mzhou6-mobl1.ccr.corp.intel.com (HELO yhuang6-mobl1.ccr.corp.intel.com) ([10.254.212.155]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2021 23:16:20 -0700 From: Huang Ying To: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org, Yang Shi , Dave Hansen , "Huang, Ying" , Yang Shi , Michal Hocko , Wei Xu , David Rientjes , Dan Williams , David Hildenbrand , osalvador Subject: [PATCH -V8 06/10] mm/vmscan: add page demotion counter Date: Fri, 18 Jun 2021 14:15:33 +0800 Message-Id: <20210618061537.434999-7-ying.huang@intel.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com> References: <20210618061537.434999-1-ying.huang@intel.com> MIME-Version: 1.0 Authentication-Results: imf23.hostedemail.com; dkim=none; spf=none (imf23.hostedemail.com: domain of ying.huang@intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=ying.huang@intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 31CAFA0009EE X-Stat-Signature: 8c38cyood4i9t858fy33ceenammd5owo X-HE-Tag: 1623996984-201462 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Yang Shi Account the number of demoted pages. Add pgdemote_kswapd and pgdemote_direct VM counters showed in /proc/vmstat. 
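As an illustration only (not part of the patch): a minimal userspace sketch that reads back the two counters. It assumes nothing beyond the vmstat_text names introduced below; on a kernel without this patch the fields are simply absent.

	#include <stdio.h>
	#include <string.h>

	/* Print the demotion counters from /proc/vmstat, if present. */
	int main(void)
	{
		char name[64];
		unsigned long long val;
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f) {
			perror("/proc/vmstat");
			return 1;
		}
		/* /proc/vmstat is a sequence of "name value" lines. */
		while (fscanf(f, "%63s %llu", name, &val) == 2) {
			if (!strcmp(name, "pgdemote_kswapd") ||
			    !strcmp(name, "pgdemote_direct"))
				printf("%s = %llu\n", name, val);
		}
		fclose(f);
		return 0;
	}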
[ daveh: reworked the __count_vm_events() calls a bit, and made them look at the THP size directly rather than getting data from migrate_pages() ] Signed-off-by: Yang Shi Signed-off-by: Dave Hansen Signed-off-by: "Huang, Ying" Reviewed-by: Yang Shi Cc: Michal Hocko Cc: Wei Xu Cc: David Rientjes Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- Changes since 202010: * remove unused scan-control 'demoted' field --- include/linux/vm_event_item.h | 2 ++ mm/vmscan.c | 5 +++++ mm/vmstat.c | 2 ++ 3 files changed, 9 insertions(+) diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index ae0dd1948c2b..a185cc75ff52 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -33,6 +33,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, PGREUSE, PGSTEAL_KSWAPD, PGSTEAL_DIRECT, + PGDEMOTE_KSWAPD, + PGDEMOTE_DIRECT, PGSCAN_KSWAPD, PGSCAN_DIRECT, PGSCAN_DIRECT_THROTTLE, diff --git a/mm/vmscan.c b/mm/vmscan.c index ddda32031f0c..7d5c7216a4b7 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1317,6 +1317,11 @@ static unsigned int demote_page_list(struct list_head *demote_pages, target_nid, MIGRATE_ASYNC, MR_DEMOTION, &nr_succeeded); + if (current_is_kswapd()) + __count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded); + else + __count_vm_events(PGDEMOTE_DIRECT, nr_succeeded); + return nr_succeeded; } diff --git a/mm/vmstat.c b/mm/vmstat.c index cccee36b289c..a51fbdcadff5 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1259,6 +1259,8 @@ const char * const vmstat_text[] = { "pgreuse", "pgsteal_kswapd", "pgsteal_direct", + "pgdemote_kswapd", + "pgdemote_direct", "pgscan_kswapd", "pgscan_direct", "pgscan_direct_throttle",

From patchwork Fri Jun 18 06:15:34 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330243
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen , "Huang, Ying" , Yang Shi , Greg Thelen , Michal Hocko , Wei Xu , David Rientjes , Dan Williams , David Hildenbrand , osalvador
Subject: [PATCH -V8 07/10] mm/vmscan: add helper for querying ability to age anonymous pages
Date: Fri, 18 Jun 2021 14:15:34 +0800
Message-Id: <20210618061537.434999-8-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

From: Dave Hansen

Anonymous pages are kept on their own LRU(s). These lists could theoretically always be scanned and maintained. But, without swap, there is currently nothing the kernel can *do* with the results of a scanned, sorted LRU for anonymous pages. A '!total_swap_pages' test currently serves as a valid indicator of whether anonymous LRUs should be maintained. However, another method will be added shortly: page demotion. Abstract out the 'total_swap_pages' checks into a helper, give it a logically significant name, and check for the possibility of page demotion. Signed-off-by: Dave Hansen Signed-off-by: "Huang, Ying" Reviewed-by: Yang Shi Reviewed-by: Greg Thelen Cc: Michal Hocko Cc: Wei Xu Cc: David Rientjes Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- mm/vmscan.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 7d5c7216a4b7..8654cec65522 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2706,6 +2706,26 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, } } +/* + * Anonymous LRU management is a waste if there is + * ultimately no way to reclaim the memory.
+ */ +bool anon_should_be_aged(struct lruvec *lruvec) +{ + struct pglist_data *pgdat = lruvec_pgdat(lruvec); + + /* Aging the anon LRU is valuable if swap is present: */ + if (total_swap_pages > 0) + return true; + + /* Also valuable if anon pages can be demoted: */ + if (next_demotion_node(pgdat->node_id) >= 0) + return true; + + /* No way to reclaim anon pages. Should not age anon LRUs: */ + return false; +} + static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) { unsigned long nr[NR_LRU_LISTS]; @@ -2815,7 +2835,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) * Even if we did not try to evict anon pages at all, we want to * rebalance the anon lru active/inactive ratio. */ - if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON)) + if (anon_should_be_aged(lruvec) && + inactive_is_low(lruvec, LRU_INACTIVE_ANON)) shrink_active_list(SWAP_CLUSTER_MAX, lruvec, sc, LRU_ACTIVE_ANON); } @@ -3644,10 +3665,11 @@ static void age_active_anon(struct pglist_data *pgdat, struct mem_cgroup *memcg; struct lruvec *lruvec; - if (!total_swap_pages) + lruvec = mem_cgroup_lruvec(NULL, pgdat); + + if (!anon_should_be_aged(lruvec)) return; - lruvec = mem_cgroup_lruvec(NULL, pgdat); if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON)) return;

From patchwork Fri Jun 18 06:15:35 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330245
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Keith Busch , Dave Hansen , "Huang, Ying" , Yang Shi , Michal Hocko , Wei Xu , David Rientjes , Dan Williams , David Hildenbrand , osalvador
Subject: [PATCH -V8 08/10] mm/vmscan: Consider anonymous pages without swap
Date: Fri, 18 Jun 2021 14:15:35 +0800
Message-Id: <20210618061537.434999-9-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

From: Keith Busch

Reclaim anonymous pages if a migration path is available now that demotion provides a non-swap recourse for reclaiming anon pages. Note that this check is subtly different from the anon_should_be_aged() checks. This mechanism checks whether a specific page in a specific context *can* actually be reclaimed, given current swap space and cgroup limits. anon_should_be_aged() is a much simpler and more preliminary check which just says whether there is a possibility of future reclaim. Cc: Keith Busch Signed-off-by: Dave Hansen Signed-off-by: "Huang, Ying" Reviewed-by: Yang Shi Cc: Michal Hocko Cc: Wei Xu Cc: David Rientjes Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- Changes from Dave 10/2020: * remove 'total_swap_pages' modification Changes from Dave 06/2020: * rename reclaim_anon_pages()->can_reclaim_anon_pages() Note: Keith's Intel SoB is commented out because he is no longer at Intel and his @intel.com mail will bounce. --- mm/vmscan.c | 35 ++++++++++++++++++++++++++++++++--- 1 file changed, 32 insertions(+), 3 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 8654cec65522..bf52329f29dd 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -516,6 +516,34 @@ static long add_nr_deferred(long nr, struct shrinker *shrinker, return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]); } +static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg, + int node_id) +{ + if (memcg == NULL) { + /* + * For non-memcg reclaim, is there + * space in any swap device? + */ + if (get_nr_swap_pages() > 0) + return true; + } else { + /* Is the memcg below its swap limit? */ + if (mem_cgroup_get_nr_swap_pages(memcg) > 0) + return true; + } + + /* + * The page cannot be swapped.
+ * + * Can it be reclaimed from this node via demotion? + */ + if (next_demotion_node(node_id) >= 0) + return true; + + /* No way to reclaim anon pages */ + return false; +} + /* * This misses isolated pages which are not accounted for to save counters. * As the data only determines if reclaim or compaction continues, it is @@ -527,7 +555,7 @@ unsigned long zone_reclaimable_pages(struct zone *zone) nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE); - if (get_nr_swap_pages() > 0) + if (can_reclaim_anon_pages(NULL, zone_to_nid(zone))) nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON); @@ -2521,6 +2549,7 @@ enum scan_balance { static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, unsigned long *nr) { + struct pglist_data *pgdat = lruvec_pgdat(lruvec); struct mem_cgroup *memcg = lruvec_memcg(lruvec); unsigned long anon_cost, file_cost, total_cost; int swappiness = mem_cgroup_swappiness(memcg); @@ -2531,7 +2560,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, enum lru_list lru; /* If we have no swap space, do not bother scanning anon pages. */ - if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) { + if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) { scan_balance = SCAN_FILE; goto out; } @@ -2906,7 +2935,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat, */ pages_for_compaction = compact_gap(sc->order); inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE); - if (get_nr_swap_pages() > 0) + if (can_reclaim_anon_pages(NULL, pgdat->node_id)) inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON); return inactive_lru_pages > pages_for_compaction;

From patchwork Fri Jun 18 06:15:36 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330247
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen , "Huang, Ying" , Yang Shi , Yang Shi , Michal Hocko , Wei Xu , David Rientjes , Dan Williams , David Hildenbrand , osalvador
Subject: [PATCH -V8 09/10] mm/vmscan: never demote for memcg reclaim
Date: Fri, 18 Jun 2021 14:15:36 +0800
Message-Id: <20210618061537.434999-10-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

From: Dave Hansen

Global reclaim aims to reduce the amount of memory used on a given node or set of nodes. Migrating pages to another node serves this purpose. memcg reclaim is different. Its goal is to reduce the total memory consumption of the entire memcg, across all nodes. Migration does not assist memcg reclaim because it just moves page contents between nodes rather than actually reducing memory consumption.
Signed-off-by: Dave Hansen Signed-off-by: "Huang, Ying" Suggested-by: Yang Shi Reviewed-by: Yang Shi Cc: Michal Hocko Cc: Wei Xu Cc: David Rientjes Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador --- mm/vmscan.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index bf52329f29dd..13a60426c12f 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -517,7 +517,8 @@ static long add_nr_deferred(long nr, struct shrinker *shrinker, } static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg, - int node_id) + int node_id, + struct scan_control *sc) { if (memcg == NULL) { /* @@ -555,7 +556,7 @@ unsigned long zone_reclaimable_pages(struct zone *zone) nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE); - if (can_reclaim_anon_pages(NULL, zone_to_nid(zone))) + if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL)) nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) + zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON); @@ -1260,7 +1261,8 @@ static enum page_references page_check_references(struct page *page, return PAGEREF_RECLAIM; } -static bool migrate_demote_page_ok(struct page *page) +static bool migrate_demote_page_ok(struct page *page, + struct scan_control *sc) { int next_nid = next_demotion_node(page_to_nid(page)); @@ -1268,6 +1270,10 @@ static bool migrate_demote_page_ok(struct page *page) VM_BUG_ON_PAGE(PageHuge(page), page); VM_BUG_ON_PAGE(PageLRU(page), page); + /* It is pointless to do demotion in memcg reclaim */ + if (cgroup_reclaim(sc)) + return false; + if (next_nid == NUMA_NO_NODE) return false; if (PageTransHuge(page) && !thp_migration_supported()) @@ -1525,7 +1531,7 @@ static unsigned int shrink_page_list(struct list_head *page_list, * Before reclaiming the page, try to relocate * its contents to another node. */ - if (do_demote_pass && migrate_demote_page_ok(page)) { + if (do_demote_pass && migrate_demote_page_ok(page, sc)) { list_add(&page->lru, &demote_pages); unlock_page(page); continue; @@ -2560,7 +2566,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, enum lru_list lru; /* If we have no swap space, do not bother scanning anon pages. */ - if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) { + if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) { scan_balance = SCAN_FILE; goto out; } @@ -2935,7 +2941,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat, */ pages_for_compaction = compact_gap(sc->order); inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE); - if (can_reclaim_anon_pages(NULL, pgdat->node_id)) + if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc)) inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON); return inactive_lru_pages > pages_for_compaction;

From patchwork Fri Jun 18 06:15:37 2021
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12330249
From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Huang Ying , Dave Hansen , Michal Hocko , Wei Xu , Yang Shi , David Rientjes , Dan Williams , David Hildenbrand , osalvador
Subject: [PATCH -V8 10/10] mm/migrate: add sysfs interface to enable reclaim migration
Date: Fri, 18 Jun 2021 14:15:37 +0800
Message-Id: <20210618061537.434999-11-ying.huang@intel.com>
In-Reply-To: <20210618061537.434999-1-ying.huang@intel.com>
References: <20210618061537.434999-1-ying.huang@intel.com>

Some method is obviously needed to enable reclaim-based migration. Just like traditional autonuma, some workloads will benefit, such as workloads with more "static" configurations where hot pages stay hot and cold pages stay cold. If pages come and go from the hot and cold sets, the benefits of this approach will be more limited. The benefits are truly workload-based and *not* hardware-based. We do not believe that there is a viable threshold where certain hardware configurations should have this mechanism enabled while others do not. To be conservative, earlier work defaulted to disabling reclaim-based migration and did not include a mechanism to enable it. This patch proposes adding a new sysfs file /sys/kernel/mm/numa/demotion_enabled as a method to enable it. We are open to any alternative that allows end users to enable this mechanism or disable it if workload harm is detected (just like traditional autonuma). Once this is enabled, page demotion may move data to a NUMA node that does not fall into the cpuset of the allocating process. This could be construed to violate the guarantees of cpusets. However, since this is an opt-in mechanism, the assumption is that anyone enabling it is content to relax the guarantees. Originally-by: Dave Hansen Signed-off-by: Huang Ying Cc: Michal Hocko Cc: Wei Xu Cc: Yang Shi Cc: David Rientjes Cc: Dan Williams Cc: David Hildenbrand Cc: osalvador Changes since 20200122: * Changelog material about relaxing cpuset constraints Changes since 20210304: * Add Documentation/ material about relaxing cpuset constraints Changes since 20210331: * Use sysfs interface separated from the zone_reclaim sysctl.
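For illustration only (not part of the patch): a minimal userspace sketch that flips the new knob. The path and the accepted strings ("0"/"1"/"true"/"false") come from the store function added below; running it requires root and a kernel with this patch applied.

	#include <stdio.h>

	/* Write "true" (or argv[1]) to the demotion_enabled sysfs knob. */
	int main(int argc, char **argv)
	{
		const char *val = (argc > 1) ? argv[1] : "true";
		FILE *f = fopen("/sys/kernel/mm/numa/demotion_enabled", "w");

		if (!f) {
			perror("demotion_enabled");
			return 1;
		}
		fprintf(f, "%s\n", val);
		/* sysfs reports a rejected value at write/close time. */
		return fclose(f) ? 1 : 0;
	}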
--- .../ABI/testing/sysfs-kernel-mm-numa | 24 ++++++++ include/linux/mempolicy.h | 4 ++ mm/mempolicy.c | 61 +++++++++++++++++++ mm/vmscan.c | 6 +- 4 files changed, 93 insertions(+), 2 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-kernel-mm-numa diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa new file mode 100644 index 000000000000..77e559d4ed80 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa @@ -0,0 +1,24 @@ +What: /sys/kernel/mm/numa/ +Date: June 2021 +Contact: Linux memory management mailing list +Description: Interface for NUMA + +What: /sys/kernel/mm/numa/demotion_enabled +Date: June 2021 +Contact: Linux memory management mailing list +Description: Enable/disable demoting pages during reclaim + + Page migration during reclaim is intended for systems + with tiered memory configurations. These systems have + multiple types of memory with varied performance + characteristics instead of plain NUMA systems where + the same kind of memory is found at varied distances. + Allowing page migration during reclaim enables these + systems to migrate pages from fast tiers to slow tiers + when the fast tier is under pressure. This migration + is performed before swap. It may move data to a NUMA + node that does not fall into the cpuset of the + allocating process which might be construed to violate + the guarantees of cpusets. This should not be enabled + on systems which need strict cpuset location + guarantees. diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 5f1c74df264d..78a736e76d5c 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -187,6 +187,8 @@ extern bool vma_migratable(struct vm_area_struct *vma); extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long); extern void mpol_put_task_policy(struct task_struct *); +extern bool numa_demotion_enabled; + #else struct mempolicy {}; @@ -295,5 +297,7 @@ static inline nodemask_t *policy_nodemask_current(gfp_t gfp) { return NULL; } + +#define numa_demotion_enabled false #endif /* CONFIG_NUMA */ #endif diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 25dceedbb884..f19d91368931 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -3060,3 +3060,64 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) p += scnprintf(p, buffer + maxlen - p, ":%*pbl", nodemask_pr_args(&nodes)); } + +bool numa_demotion_enabled = false; + +#ifdef CONFIG_SYSFS +static ssize_t numa_demotion_enabled_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + return sysfs_emit(buf, "%s\n", + numa_demotion_enabled ?
"true" : "false"); +} + +static ssize_t numa_demotion_enabled_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) +{ + if (!strncmp(buf, "true", 4) || !strncmp(buf, "1", 1)) + numa_demotion_enabled= true; + else if (!strncmp(buf, "false", 5) || !strncmp(buf, "0", 1)) + numa_demotion_enabled = false; + else + return -EINVAL; + + return count; +} + +static struct kobj_attribute numa_demotion_enabled_attr = + __ATTR(demotion_enabled, 0644, numa_demotion_enabled_show, + numa_demotion_enabled_store); + +static struct attribute *numa_attrs[] = { + &numa_demotion_enabled_attr.attr, + NULL, +}; + +static const struct attribute_group numa_attr_group = { + .attrs = numa_attrs, +}; + +static int __init numa_init_sysfs(void) +{ + int err; + struct kobject *numa_kobj; + + numa_kobj = kobject_create_and_add("numa", mm_kobj); + if (!numa_kobj) { + pr_err("failed to create numa kobject\n"); + return -ENOMEM; + } + err = sysfs_create_group(numa_kobj, &numa_attr_group); + if (err) { + pr_err("failed to register numa group\n"); + goto delete_obj; + } + return 0; + +delete_obj: + kobject_put(numa_kobj); + return err; +} +subsys_initcall(numa_init_sysfs); +#endif diff --git a/mm/vmscan.c b/mm/vmscan.c index 13a60426c12f..3560a9ea7565 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1270,6 +1270,9 @@ static bool migrate_demote_page_ok(struct page *page, VM_BUG_ON_PAGE(PageHuge(page), page); VM_BUG_ON_PAGE(PageLRU(page), page); + if (!numa_demotion_enabled) + return false; + /* It is pointless to do demotion in memcg reclaim */ if (cgroup_reclaim(sc)) return false; @@ -1279,8 +1282,7 @@ static bool migrate_demote_page_ok(struct page *page, if (PageTransHuge(page) && !thp_migration_supported()) return false; - // FIXME: actually enable this later in the series - return false; + return true; } /* Check if a page is dirty or under writeback */