From patchwork Thu Mar 4 23:59:51 2021
Subject: [PATCH 01/10] mm/numa: node demotion data structure and lookup
From: Dave Hansen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com,
    rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com,
    david@redhat.com, osalvador@suse.de
Date: Thu, 04 Mar 2021 15:59:51 -0800
Message-Id: <20210304235951.271553C2@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
X-Patchwork-Id: 12117153
From: Dave Hansen

Prepare for the kernel to auto-migrate pages to other memory nodes
with a user-defined node migration table.  This allows creating a
single migration target for each NUMA node to enable the kernel to
do NUMA page migrations instead of simply reclaiming colder pages.
A node with no target is a "terminal node", so reclaim acts
normally there.  The migration target does not fundamentally _need_
to be a single node, but this implementation starts there to limit
complexity.

If you consider the migration path as a graph, cycles (loops) in
the graph are disallowed.  This avoids wasting resources by
constantly migrating (A->B, B->A, A->B, ...).  The expectation is
that cycles will never be allowed.

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
Reviewed-by: Yang Shi

---
changes since 20200122:
 * Make node_demotion[] __read_mostly

changes in July 2020:
 - Remove loop from next_demotion_node() and get_online_mems().
   This means that the node returned by next_demotion_node() might
   now be offline, but the worst case is that the allocation fails.
   That's fine since it is transient.
---

 b/mm/migrate.c |   17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff -puN mm/migrate.c~0006-node-Define-and-export-memory-migration-path mm/migrate.c
--- a/mm/migrate.c~0006-node-Define-and-export-memory-migration-path	2021-03-04 15:35:51.353806441 -0800
+++ b/mm/migrate.c	2021-03-04 15:35:51.359806441 -0800
@@ -1157,6 +1157,23 @@ out:
     return rc;
 }

+static int node_demotion[MAX_NUMNODES] __read_mostly =
+    {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};
+
+/**
+ * next_demotion_node() - Get the next node in the demotion path
+ * @node: The starting node to lookup the next node
+ *
+ * @returns: node id for next memory node in the demotion path hierarchy
+ * from @node; NUMA_NO_NODE if @node is terminal.  This does not keep
+ * @node online or guarantee that it *continues* to be the next demotion
+ * target.
+ */
+int next_demotion_node(int node)
+{
+    return node_demotion[node];
+}
+
 /*
  * Obtain the lock on page, remove all ptes and migrate the page
  * to the newly allocated page in newpage.
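As an aside, the terminal-node walk this table enables is easy to model
outside the kernel.  A minimal userspace sketch (illustrative only; the
table values are taken from the two-socket example in patch 02, and the
real kernel walk additionally has to worry about hotplug, handled in
patch 03):

    #include <stdio.h>

    #define MAX_NUMNODES 6
    #define NUMA_NO_NODE (-1)

    /* Table from the two-socket example: 0->1->2 and 3->4->5. */
    static const int node_demotion[MAX_NUMNODES] =
        { 1, 2, NUMA_NO_NODE, 4, 5, NUMA_NO_NODE };

    static int next_demotion_node(int node)
    {
        return node_demotion[node];
    }

    int main(void)
    {
        for (int start = 0; start < MAX_NUMNODES; start++) {
            int node = start;

            /* Terminates because cycles are disallowed by construction. */
            while (next_demotion_node(node) != NUMA_NO_NODE)
                node = next_demotion_node(node);
            printf("node %d demotes toward terminal node %d\n", start, node);
        }
        return 0;
    }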
From patchwork Thu Mar 4 23:59:52 2021
Subject: [PATCH 02/10] mm/numa: automatically generate node migration order
From: Dave Hansen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com,
    rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com,
    david@redhat.com, osalvador@suse.de
Date: Thu, 04 Mar 2021 15:59:52 -0800
Message-Id: <20210304235952.15D0CD27@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
X-Patchwork-Id: 12117151
From: Dave Hansen

When memory fills up on a node, memory contents can be automatically
migrated to another node.  The biggest problems are knowing when to
migrate and to where the migration should be targeted.

The most straightforward way to generate the "to where" list would
be to follow the page allocator fallback lists.  Those lists already
tell us where to look next if a given node's memory is full.  It
would also be logical to move memory in that order.

But, the allocator fallback lists have a fatal flaw: most nodes
appear in all the lists.  This would potentially lead to migration
cycles (A->B, B->A, A->B, ...).

Instead of using the allocator fallback lists directly, keep a
separate node migration ordering.  But, reuse the same data used to
generate the page allocator fallback lists in the first place:
find_next_best_node().

This means that the firmware data used to populate node distances
essentially dictates the ordering for now.  It should also be
architecture-neutral since all NUMA architectures have a working
find_next_best_node().

The protocol for node_demotion[] access and writing is not standard.
It has no specific locking and is intended to be read locklessly.
Readers must take care to avoid observing changes that appear
incoherent.  This was done so that node_demotion[] locking has no
chance of becoming a bottleneck on large systems with lots of CPUs
in direct reclaim.

This code is unused for now.  It will be called later in the series.

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

---
changes from 20200122:
 * Add big node_demotion[] comment
---
 b/mm/internal.h   |    5 +
 b/mm/migrate.c    |  174 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 b/mm/page_alloc.c |    4 -
 3 files changed, 180 insertions(+), 3 deletions(-)

diff -puN mm/internal.h~auto-setup-default-migration-path-from-firmware mm/internal.h
--- a/mm/internal.h~auto-setup-default-migration-path-from-firmware	2021-03-04 15:35:52.407806439 -0800
+++ b/mm/internal.h	2021-03-04 15:35:52.426806439 -0800
@@ -520,12 +520,17 @@ static inline void mminit_validate_memmo
 #ifdef CONFIG_NUMA
 extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int);
+extern int find_next_best_node(int node, nodemask_t *used_node_mask);
 #else
 static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
                 unsigned int order)
 {
     return NODE_RECLAIM_NOSCAN;
 }
+static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
+{
+    return NUMA_NO_NODE;
+}
 #endif

 extern int hwpoison_filter(struct page *p);

diff -puN mm/migrate.c~auto-setup-default-migration-path-from-firmware mm/migrate.c
--- a/mm/migrate.c~auto-setup-default-migration-path-from-firmware	2021-03-04 15:35:52.409806439 -0800
+++ b/mm/migrate.c	2021-03-04 15:35:52.427806439 -0800
@@ -1157,6 +1157,44 @@ out:
     return rc;
 }

+/*
+ * node_demotion[] example:
+ *
+ * Consider a system with two sockets.  Each socket has
+ * three classes of memory attached: fast, medium and slow.
+ * Each memory class is placed in its own NUMA node.  The
+ * CPUs are placed in the node with the "fast" memory.  The
+ * 6 NUMA nodes (0-5) might be split among the sockets like
+ * this:
+ *
+ *	Socket A: 0, 1, 2
+ *	Socket B: 3, 4, 5
+ *
+ * When Node 0 fills up, its memory should be migrated to
+ * Node 1.  When Node 1 fills up, it should be migrated to
+ * Node 2.  The migration path starts on the nodes with the
+ * processors (since allocations default to this node) and
+ * fast memory, progresses through medium and ends with the
+ * slow memory:
+ *
+ *	0 -> 1 -> 2 -> stop
+ *	3 -> 4 -> 5 -> stop
+ *
+ * This is represented in the node_demotion[] like this:
+ *
+ *	{  1, // Node 0 migrates to 1
+ *	   2, // Node 1 migrates to 2
+ *	  -1, // Node 2 does not migrate
+ *	   4, // Node 3 migrates to 4
+ *	   5, // Node 4 migrates to 5
+ *	  -1} // Node 5 does not migrate
+ */
+
+/*
+ * Writes to this array occur without locking.  READ_ONCE()
+ * is recommended for readers to ensure consistent reads.
+ */
 static int node_demotion[MAX_NUMNODES] __read_mostly =
     {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};

@@ -1171,7 +1209,13 @@ static int node_demotion[MAX_NUMNODES] _
  */
 int next_demotion_node(int node)
 {
-    return node_demotion[node];
+    /*
+     * node_demotion[] is updated without excluding
+     * this function from running.  READ_ONCE() avoids
+     * reading multiple, inconsistent 'node' values
+     * during an update.
+     */
+    return READ_ONCE(node_demotion[node]);
 }

 /*
@@ -3175,3 +3219,131 @@ void migrate_vma_finalize(struct migrate
 }
 EXPORT_SYMBOL(migrate_vma_finalize);
 #endif /* CONFIG_DEVICE_PRIVATE */
+
+/* Disable reclaim-based migration. */
+static void disable_all_migrate_targets(void)
+{
+    int node;
+
+    for_each_online_node(node)
+        node_demotion[node] = NUMA_NO_NODE;
+}
+
+/*
+ * Find an automatic demotion target for 'node'.
+ * Failing here is OK.  It might just indicate
+ * being at the end of a chain.
+ */
+static int establish_migrate_target(int node, nodemask_t *used)
+{
+    int migration_target;
+
+    /*
+     * Can not set a migration target on a
+     * node with it already set.
+     *
+     * No need for READ_ONCE() here since this
+     * is in the write path for node_demotion[].
+     * This should be the only thread writing.
+     */
+    if (node_demotion[node] != NUMA_NO_NODE)
+        return NUMA_NO_NODE;
+
+    migration_target = find_next_best_node(node, used);
+    if (migration_target == NUMA_NO_NODE)
+        return NUMA_NO_NODE;
+
+    node_demotion[node] = migration_target;
+
+    return migration_target;
+}
+
+/*
+ * When memory fills up on a node, memory contents can be
+ * automatically migrated to another node instead of
+ * discarded at reclaim.
+ *
+ * Establish a "migration path" which will start at nodes
+ * with CPUs and will follow the priorities used to build the
+ * page allocator zonelists.
+ *
+ * The difference here is that cycles must be avoided.  If
+ * node0 migrates to node1, then neither node1, nor anything
+ * node1 migrates to can migrate to node0.
+ *
+ * This function can run simultaneously with readers of
+ * node_demotion[].  However, it can not run simultaneously
+ * with itself.  Exclusion is provided by memory hotplug events
+ * being single-threaded.
+ */
+static void __set_migration_target_nodes(void)
+{
+    nodemask_t next_pass    = NODE_MASK_NONE;
+    nodemask_t this_pass    = NODE_MASK_NONE;
+    nodemask_t used_targets = NODE_MASK_NONE;
+    int node;
+
+    /*
+     * Avoid any oddities like cycles that could occur
+     * from changes in the topology.  This will leave
+     * a momentary gap when migration is disabled.
+     */
+    disable_all_migrate_targets();
+
+    /*
+     * Ensure that the "disable" is visible across the system.
+     * Readers will see either a combination of before+disable
+     * state or disable+after.  They will never see before and
+     * after state together.
+     *
+     * The before+after state together might have cycles and
+     * could cause readers to do things like loop until this
+     * function finishes.  This ensures they can only see a
+     * single "bad" read and would, for instance, only loop
+     * once.
+     */
+    smp_wmb();
+
+    /*
+     * Allocations go close to CPUs, first.  Assume that
+     * the migration path starts at the nodes with CPUs.
+     */
+    next_pass = node_states[N_CPU];
+again:
+    this_pass = next_pass;
+    next_pass = NODE_MASK_NONE;
+    /*
+     * To avoid cycles in the migration "graph", ensure
+     * that migration sources are not future targets by
+     * setting them in 'used_targets'.  Do this only
+     * once per pass so that multiple source nodes can
+     * share a target node.
+     *
+     * 'used_targets' will become unavailable in future
+     * passes.  This limits some opportunities for
+     * multiple source nodes to share a destination.
+     */
+    nodes_or(used_targets, used_targets, this_pass);
+    for_each_node_mask(node, this_pass) {
+        int target_node = establish_migrate_target(node, &used_targets);
+
+        if (target_node == NUMA_NO_NODE)
+            continue;
+
+        /* Visit targets from this pass in the next pass: */
+        node_set(target_node, next_pass);
+    }
+    /* Is another pass necessary? */
+    if (!nodes_empty(next_pass))
+        goto again;
+}
+
+/*
+ * For callers that do not hold get_online_mems() already.
+ */
+static void set_migration_target_nodes(void)
+{
+    get_online_mems();
+    __set_migration_target_nodes();
+    put_online_mems();
+}

diff -puN mm/page_alloc.c~auto-setup-default-migration-path-from-firmware mm/page_alloc.c
--- a/mm/page_alloc.c~auto-setup-default-migration-path-from-firmware	2021-03-04 15:35:52.422806439 -0800
+++ b/mm/page_alloc.c	2021-03-04 15:35:52.429806439 -0800
@@ -3916,7 +3916,7 @@ retry:
         if (alloc_flags & ALLOC_NO_WATERMARKS)
             goto try_this_zone;

-        if (!node_reclaim_enabled() ||
+        if (node_reclaim_mode == 0 ||
             !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
             continue;

@@ -5773,7 +5773,7 @@ static int node_load[MAX_NUMNODES];
  *
  * Return: node id of the found node or %NUMA_NO_NODE if no node is found.
  */
-static int find_next_best_node(int node, nodemask_t *used_node_mask)
+int find_next_best_node(int node, nodemask_t *used_node_mask)
 {
     int n, val;
     int min_val = INT_MAX;
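To see the pass structure concretely, here is a small, self-contained
userspace model of the algorithm above (my illustration, not from the
series): nodemasks become bit masks, and find_next_best_node() is
reduced to "closest unused node" over a made-up SLIT-style distance
table.  With CPUs assumed on nodes 0 and 3, it reproduces the
node_demotion[] table from the comment in this patch:

    #include <stdio.h>
    #include <limits.h>

    #define NNODES  6
    #define NO_NODE (-1)

    /* Made-up SLIT-style distances; lower is closer, 10 is local. */
    static const int dist[NNODES][NNODES] = {
        { 10, 14, 20, 30, 34, 40 },
        { 14, 10, 20, 30, 34, 40 },
        { 20, 20, 10, 40, 40, 30 },
        { 30, 30, 40, 10, 14, 20 },
        { 34, 34, 40, 14, 10, 20 },
        { 40, 40, 30, 20, 20, 10 },
    };

    static int node_demotion[NNODES];

    /* Toy find_next_best_node(): closest node not yet in *used; marks it. */
    static int next_best_node(int node, unsigned int *used)
    {
        int best = NO_NODE, best_dist = INT_MAX;

        for (int n = 0; n < NNODES; n++) {
            if (*used & (1u << n))
                continue;
            if (dist[node][n] < best_dist) {
                best_dist = dist[node][n];
                best = n;
            }
        }
        if (best != NO_NODE)
            *used |= 1u << best;
        return best;
    }

    int main(void)
    {
        /* Assume CPUs live on nodes 0 and 3 (the "fast" nodes). */
        unsigned int this_pass, next_pass = (1u << 0) | (1u << 3);
        unsigned int used_targets = 0;

        for (int n = 0; n < NNODES; n++)
            node_demotion[n] = NO_NODE;

        while (next_pass) {
            this_pass = next_pass;
            next_pass = 0;
            /* Sources can never become targets: mirrors nodes_or(). */
            used_targets |= this_pass;
            for (int n = 0; n < NNODES; n++) {
                int target;

                if (!(this_pass & (1u << n)))
                    continue;
                target = next_best_node(n, &used_targets);
                if (target == NO_NODE)
                    continue;
                node_demotion[n] = target;
                next_pass |= 1u << target;  /* visit targets next pass */
            }
        }

        for (int n = 0; n < NNODES; n++)
            printf("node %d -> %d\n", n, node_demotion[n]);
        return 0;
    }

This prints 0->1, 1->2, 2->-1, 3->4, 4->5, 5->-1, matching the
two-socket example.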
From patchwork Thu Mar 4 23:59:55 2021
Subject: [PATCH 03/10] mm/migrate: update node demotion order on hotplug events
From: Dave Hansen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com,
    rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com,
    david@redhat.com, osalvador@suse.de
Date: Thu, 04 Mar 2021 15:59:55 -0800
Message-Id: <20210304235955.05514241@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
X-Patchwork-Id: 12117157
From: Dave Hansen

Reclaim-based migration is attempting to optimize data placement in
memory based on the system topology.  If the system changes, so must
the migration ordering.

The implementation is conceptually simple and entirely unoptimized.
On any memory or CPU hotplug event, assume that a node was added or
removed and recalculate all migration targets.  This ensures that the
node_demotion[] array is always ready to be used in case the new
reclaim mode is enabled.

This recalculation is far from optimal, most glaringly in that it does
not even attempt to figure out whether the hotplug event would have
any *actual* effect on the demotion order.  But, given the expected
paucity of hotplug events, this should be fine.

=== What does RCU provide? ===

Imagine a simple loop which walks down the demotion path looking
for the last node:

	terminal_node = start_node;
	while (node_demotion[terminal_node] != NUMA_NO_NODE) {
		terminal_node = node_demotion[terminal_node];
	}

The initial values are:

	node_demotion[0] = 1;
	node_demotion[1] = NUMA_NO_NODE;

and are updated to:

	node_demotion[0] = NUMA_NO_NODE;
	node_demotion[1] = 0;

What guarantees that the loop did not observe:

	node_demotion[0] = 1;
	node_demotion[1] = 0;

and would loop forever?

With RCU, an rcu_read_lock/unlock() pair can be placed around the
loop.  Since the write side does a synchronize_rcu(), a loop that
observed the old contents is known to be complete after the
synchronize_rcu() has completed.

RCU, combined with disable_all_migrate_targets(), ensures that the
old migration state is not visible by the time
__set_migration_target_nodes() is called.

=== What does READ_ONCE() provide? ===

READ_ONCE() forbids the compiler from merging or reordering
successive reads of node_demotion[].  This ensures that any updates
are *eventually* observed.

Consider the above loop again.  The compiler could theoretically read
the entirety of node_demotion[] into local storage (registers) and
never go back to memory, and *permanently* observe bad values for
node_demotion[].

Note: RCU does not provide any universal compiler-ordering
guarantees:

	https://lore.kernel.org/lkml/20150921204327.GH4029@linux.vnet.ibm.com/

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

---
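Condensed into code, the protocol described above looks roughly like
this (an illustrative kernel-style sketch, not part of the patch;
WRITE_ONCE() on the write side is an extra precaution in the sketch,
not something the series requires):

    /* Reader: walk to the terminal node; tolerates concurrent rebuilds. */
    static int terminal_demotion_node(int start)
    {
        int node = start, next;

        rcu_read_lock();
        for (;;) {
            /* READ_ONCE(): no merged or cached re-reads of the array. */
            next = READ_ONCE(node_demotion[node]);
            if (next == NUMA_NO_NODE)
                break;
            node = next;
        }
        rcu_read_unlock();

        return node;
    }

    /* Writer: disable everything, wait out old readers, then rebuild. */
    static void rebuild_demotion_order(void)
    {
        int node;

        for_each_online_node(node)
            WRITE_ONCE(node_demotion[node], NUMA_NO_NODE);

        /*
         * After this, any reader that could have seen pre-disable
         * values has finished; remaining readers see only the
         * all-disabled or the fully rebuilt table, never a mix of
         * old and new, so the walk above cannot loop forever.
         */
        synchronize_rcu();

        /* ... repopulate node_demotion[] (cycle-free by construction) ... */
    }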
 b/mm/migrate.c |  159 +++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 137 insertions(+), 22 deletions(-)

diff -puN mm/migrate.c~enable-numa-demotion mm/migrate.c
--- a/mm/migrate.c~enable-numa-demotion	2021-03-04 15:35:53.670806436 -0800
+++ b/mm/migrate.c	2021-03-04 15:35:53.677806436 -0800
@@ -49,6 +49,7 @@
 #include
 #include
 #include
+#include
 #include

@@ -1192,8 +1193,12 @@ out:
  */

 /*
- * Writes to this array occur without locking.  READ_ONCE()
- * is recommended for readers to ensure consistent reads.
+ * Writes to this array occur without locking.  Cycles are
+ * not allowed: Node X demotes to Y which demotes to X...
+ *
+ * If multiple reads are performed, a single rcu_read_lock()
+ * must be held over all reads to ensure that no cycles are
+ * observed.
  */
 static int node_demotion[MAX_NUMNODES] __read_mostly =
     {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};

@@ -1209,13 +1214,22 @@ static int node_demotion[MAX_NUMNODES] _
  */
 int next_demotion_node(int node)
 {
+    int target;
+
     /*
-     * node_demotion[] is updated without excluding
-     * this function from running.  READ_ONCE() avoids
-     * reading multiple, inconsistent 'node' values
-     * during an update.
+     * node_demotion[] is updated without excluding this
+     * function from running.  RCU doesn't provide any
+     * compiler barriers, so the READ_ONCE() is required
+     * to avoid compiler reordering or read merging.
+     *
+     * Make sure to use RCU over entire code blocks if
+     * node_demotion[] reads need to be consistent.
      */
-    return READ_ONCE(node_demotion[node]);
+    rcu_read_lock();
+    target = READ_ONCE(node_demotion[node]);
+    rcu_read_unlock();
+
+    return target;
 }

 /*
@@ -3220,8 +3234,9 @@ void migrate_vma_finalize(struct migrate
 }
 EXPORT_SYMBOL(migrate_vma_finalize);
 #endif /* CONFIG_DEVICE_PRIVATE */

+#if defined(CONFIG_MEMORY_HOTPLUG)
 /* Disable reclaim-based migration. */
-static void disable_all_migrate_targets(void)
+static void __disable_all_migrate_targets(void)
 {
     int node;

@@ -3229,6 +3244,25 @@ static void disable_all_migrate_targets(
         node_demotion[node] = NUMA_NO_NODE;
 }

+static void disable_all_migrate_targets(void)
+{
+    __disable_all_migrate_targets();
+
+    /*
+     * Ensure that the "disable" is visible across the system.
+     * Readers will see either a combination of before+disable
+     * state or disable+after.  They will never see before and
+     * after state together.
+     *
+     * The before+after state together might have cycles and
+     * could cause readers to do things like loop until this
+     * function finishes.  This ensures they can only see a
+     * single "bad" read and would, for instance, only loop
+     * once.
+     */
+    synchronize_rcu();
+}
+
 /*
  * Find an automatic demotion target for 'node'.
  * Failing here is OK.  It might just indicate
@@ -3291,20 +3325,6 @@ static void __set_migration_target_nodes
     disable_all_migrate_targets();

     /*
-     * Ensure that the "disable" is visible across the system.
-     * Readers will see either a combination of before+disable
-     * state or disable+after.  They will never see before and
-     * after state together.
-     *
-     * The before+after state together might have cycles and
-     * could cause readers to do things like loop until this
-     * function finishes.  This ensures they can only see a
-     * single "bad" read and would, for instance, only loop
-     * once.
-     */
-    smp_wmb();
-
-    /*
      * Allocations go close to CPUs, first.  Assume that
      * the migration path starts at the nodes with CPUs.
      */
@@ -3347,3 +3367,98 @@ static void set_migration_target_nodes(v
     __set_migration_target_nodes();
     put_online_mems();
 }
+
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs.  That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+    set_migration_target_nodes();
+    return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+    set_migration_target_nodes();
+    return 0;
+}
+
+/*
+ * This leaves migrate-on-reclaim transiently disabled between
+ * the MEM_GOING_OFFLINE and MEM_OFFLINE events.
+ * This runs whether reclaim-based migration is enabled or not,
+ * which ensures that the user can turn reclaim-based migration
+ * on at any time without needing to recalculate migration targets.
+ *
+ * These callbacks already hold get_online_mems().  That is why
+ * __set_migration_target_nodes() can be used as opposed to
+ * set_migration_target_nodes().
+ */
+static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
+                         unsigned long action, void *arg)
+{
+    switch (action) {
+    case MEM_GOING_OFFLINE:
+        /*
+         * Make sure there are not transient states where
+         * an offline node is a migration target.  This
+         * will leave migration disabled until the offline
+         * completes and the MEM_OFFLINE case below runs.
+         */
+        disable_all_migrate_targets();
+
+        /*
+         * Ensure the disable operation is globally visible.
+         * This avoids readers ever being able to
+         * simultaneously observe the old (pre-hotplug) and
+         * new (post-hotplug) migration targets.
+         */
+        synchronize_rcu();
+        break;
+    case MEM_OFFLINE:
+    case MEM_ONLINE:
+        /*
+         * Recalculate the target nodes once the node
+         * reaches its final state (online or offline).
+         */
+        __set_migration_target_nodes();
+        break;
+    case MEM_CANCEL_OFFLINE:
+        /*
+         * MEM_GOING_OFFLINE disabled all the migration
+         * targets.  Reenable them.
+         */
+        __set_migration_target_nodes();
+        break;
+    case MEM_GOING_ONLINE:
+    case MEM_CANCEL_ONLINE:
+        break;
+    }
+
+    return notifier_from_errno(0);
+}
+
+static int __init migrate_on_reclaim_init(void)
+{
+    int ret;
+
+    ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
+                migration_online_cpu,
+                migration_offline_cpu);
+    /*
+     * In the unlikely case that this fails, the automatic
+     * migration targets may become suboptimal for nodes
+     * where N_CPU changes.  With such a small impact in a
+     * rare case, do not bother trying to do anything special.
+     */
+    WARN_ON(ret < 0);
+
+    hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+    return 0;
+}
+late_initcall(migrate_on_reclaim_init);
+#endif /* CONFIG_MEMORY_HOTPLUG */
From patchwork Thu Mar 4 23:59:57 2021
Subject: [PATCH 04/10] mm/migrate: make migrate_pages() return nr_succeeded
From: Dave Hansen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com,
    rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com,
    david@redhat.com, osalvador@suse.de
Date: Thu, 04 Mar 2021 15:59:57 -0800
Message-Id: <20210304235957.958C59F2@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
X-Patchwork-Id: 12117155
From: Yang Shi

migrate_pages() returns the number of pages that were not migrated,
or an error code.  When an error code is returned, there is no way
to know how many pages were migrated or not migrated.

In a following patch, migrate_pages() is used to demote pages to a
PMEM node, and we need to account how many pages are reclaimed
(demoted), since page reclaim behavior depends on this.  Add a
*nr_succeeded parameter to make migrate_pages() return how many
pages were migrated successfully in all cases.

Signed-off-by: Yang Shi
Signed-off-by: Dave Hansen
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
Reviewed-by: Yang Shi

---
Changes since 20200122:
 * Fix migrate_pages() to manipulate nr_succeeded *value*
   rather than the pointer.
---
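The pointer-vs-value fix in the changelog is the classic out-parameter
pitfall.  A tiny standalone illustration (hypothetical helper names,
nothing from the kernel):

    #include <stdio.h>

    /* Buggy: advances the local pointer; the caller's counter never moves. */
    static void count_bad(unsigned int *nr_succeeded, unsigned int nr)
    {
        nr_succeeded += nr;     /* pointer arithmetic, not accounting */
    }

    /* Fixed: dereferences the pointer to update the caller's counter. */
    static void count_good(unsigned int *nr_succeeded, unsigned int nr)
    {
        *nr_succeeded += nr;
    }

    int main(void)
    {
        unsigned int n = 0;

        count_bad(&n, 5);
        printf("after count_bad:  %u\n", n);  /* still 0 */
        count_good(&n, 5);
        printf("after count_good: %u\n", n);  /* 5 */
        return 0;
    }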
 b/include/linux/migrate.h |    7 ++++---
 b/mm/compaction.c         |    3 ++-
 b/mm/gup.c                |    4 +++-
 b/mm/memory-failure.c     |    4 +++-
 b/mm/memory_hotplug.c     |    4 +++-
 b/mm/mempolicy.c          |    8 ++++++--
 b/mm/migrate.c            |   19 +++++++++++--------
 b/mm/page_alloc.c         |    9 ++++++---
 8 files changed, 38 insertions(+), 20 deletions(-)

diff -puN include/linux/migrate.h~migrate_pages-add-success-return include/linux/migrate.h
--- a/include/linux/migrate.h~migrate_pages-add-success-return	2021-03-04 15:35:54.751806433 -0800
+++ b/include/linux/migrate.h	2021-03-04 15:35:54.811806433 -0800
@@ -40,7 +40,8 @@ extern int migrate_page(struct address_s
         struct page *newpage, struct page *page,
         enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
-        unsigned long private, enum migrate_mode mode, int reason);
+        unsigned long private, enum migrate_mode mode, int reason,
+        unsigned int *nr_succeeded);
 extern struct page *alloc_migration_target(struct page *page, unsigned long private);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
@@ -57,8 +58,8 @@ extern int migrate_page_move_mapping(str

 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t new,
-        free_page_t free, unsigned long private, enum migrate_mode mode,
-        int reason)
+        unsigned long private, enum migrate_mode mode, int reason,
+        unsigned int *nr_succeeded)
     { return -ENOSYS; }
 static inline struct page *alloc_migration_target(struct page *page,
         unsigned long private)

diff -puN mm/compaction.c~migrate_pages-add-success-return mm/compaction.c
--- a/mm/compaction.c~migrate_pages-add-success-return	2021-03-04 15:35:54.754806433 -0800
+++ b/mm/compaction.c	2021-03-04 15:35:54.815806433 -0800
@@ -2240,6 +2240,7 @@ compact_zone(struct compact_control *cc,
     unsigned long last_migrated_pfn;
     const bool sync = cc->mode != MIGRATE_ASYNC;
     bool update_cached;
+    unsigned int nr_succeeded = 0;

     /*
      * These counters track activities during zone compaction.  Initialize
@@ -2357,7 +2358,7 @@ compact_zone(struct compact_control *cc,

         err = migrate_pages(&cc->migratepages, compaction_alloc,
                 compaction_free, (unsigned long)cc, cc->mode,
-                MR_COMPACTION);
+                MR_COMPACTION, &nr_succeeded);

         trace_mm_compaction_migratepages(cc->nr_migratepages, err,
                             &cc->migratepages);

diff -puN mm/gup.c~migrate_pages-add-success-return mm/gup.c
--- a/mm/gup.c~migrate_pages-add-success-return	2021-03-04 15:35:54.762806433 -0800
+++ b/mm/gup.c	2021-03-04 15:35:54.819806433 -0800
@@ -1552,6 +1552,7 @@ static long check_and_migrate_cma_pages(
     unsigned long step;
     bool drain_allow = true;
     bool migrate_allow = true;
+    unsigned int nr_succeeded = 0;
     LIST_HEAD(cma_page_list);
     long ret = nr_pages;
     struct migration_target_control mtc = {
@@ -1607,7 +1608,8 @@ check_again:
                 put_page(pages[i]);

         if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
-            (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+            (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE,
+            &nr_succeeded)) {
             /*
              * some of the pages failed migration. Do get_user_pages
              * without migration.

diff -puN mm/memory-failure.c~migrate_pages-add-success-return mm/memory-failure.c
--- a/mm/memory-failure.c~migrate_pages-add-success-return	2021-03-04 15:35:54.771806433 -0800
+++ b/mm/memory-failure.c	2021-03-04 15:35:54.823806433 -0800
@@ -1799,6 +1799,7 @@ static int __soft_offline_page(struct pa
     unsigned long pfn = page_to_pfn(page);
     struct page *hpage = compound_head(page);
     char const *msg_page[] = {"page", "hugepage"};
+    unsigned int nr_succeeded = 0;
     bool huge = PageHuge(page);
     LIST_HEAD(pagelist);
     struct migration_target_control mtc = {
@@ -1842,7 +1843,8 @@ static int __soft_offline_page(struct pa

     if (isolate_page(hpage, &pagelist)) {
         ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
-            (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE);
+            (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE,
+            &nr_succeeded);
         if (!ret) {
             bool release = !huge;

diff -puN mm/memory_hotplug.c~migrate_pages-add-success-return mm/memory_hotplug.c
--- a/mm/memory_hotplug.c~migrate_pages-add-success-return	2021-03-04 15:35:54.778806433 -0800
+++ b/mm/memory_hotplug.c	2021-03-04 15:35:54.826806433 -0800
@@ -1277,6 +1277,7 @@ do_migrate_range(unsigned long start_pfn
     unsigned long pfn;
     struct page *page, *head;
     int ret = 0;
+    unsigned int nr_succeeded = 0;
     LIST_HEAD(source);

     for (pfn = start_pfn; pfn < end_pfn; pfn++) {
@@ -1351,7 +1352,8 @@ do_migrate_range(unsigned long start_pfn
         if (nodes_empty(nmask))
             node_set(mtc.nid, nmask);
         ret = migrate_pages(&source, alloc_migration_target, NULL,
-            (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+            (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG,
+            &nr_succeeded);
         if (ret) {
             list_for_each_entry(page, &source, lru) {
                 pr_warn("migrating pfn %lx failed ret:%d ",

diff -puN mm/mempolicy.c~migrate_pages-add-success-return mm/mempolicy.c
--- a/mm/mempolicy.c~migrate_pages-add-success-return	2021-03-04 15:35:54.786806433 -0800
+++ b/mm/mempolicy.c	2021-03-04 15:35:54.831806433 -0800
@@ -1071,6 +1071,7 @@ static int migrate_page_add(struct page
 static int migrate_to_node(struct mm_struct *mm, int source, int dest,
                int flags)
 {
+    unsigned int nr_succeeded = 0;
     nodemask_t nmask;
     LIST_HEAD(pagelist);
     int err = 0;
@@ -1093,7 +1094,8 @@ static int migrate_to_node(struct mm_str

     if (!list_empty(&pagelist)) {
         err = migrate_pages(&pagelist, alloc_migration_target, NULL,
-                (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+                (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL,
+                &nr_succeeded);
         if (err)
             putback_movable_pages(&pagelist);
     }

@@ -1268,6 +1270,7 @@ static long do_mbind(unsigned long start
              nodemask_t *nmask, unsigned long flags)
 {
     struct mm_struct *mm = current->mm;
+    unsigned int nr_succeeded = 0;
     struct mempolicy *new;
     unsigned long end;
     int err;
@@ -1345,7 +1348,8 @@ static long do_mbind(unsigned long start
         if (!list_empty(&pagelist)) {
             WARN_ON_ONCE(flags & MPOL_MF_LAZY);
             nr_failed = migrate_pages(&pagelist, new_page, NULL,
-                start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
+                start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND,
+                &nr_succeeded);
             if (nr_failed)
                 putback_movable_pages(&pagelist);
         }

diff -puN mm/migrate.c~migrate_pages-add-success-return mm/migrate.c
--- a/mm/migrate.c~migrate_pages-add-success-return	2021-03-04 15:35:54.794806433 -0800
+++ b/mm/migrate.c	2021-03-04 15:35:54.836806433 -0800
@@ -1487,6 +1487,7 @@ static inline int try_split_thp(struct p
  * @mode:	The migration mode that specifies the constraints for
  *		page migration, if any.
  * @reason:	The reason for page migration.
+ * @nr_succeeded: The number of pages migrated successfully.
  *
  * The function returns after 10 attempts or if no pages are movable any more
  * because the list has become empty or no retryable pages exist any more.
@@ -1497,12 +1498,11 @@ static inline int try_split_thp(struct p
  */
 int migrate_pages(struct list_head *from, new_page_t get_new_page,
         free_page_t put_new_page, unsigned long private,
-        enum migrate_mode mode, int reason)
+        enum migrate_mode mode, int reason, unsigned int *nr_succeeded)
 {
     int retry = 1;
     int thp_retry = 1;
     int nr_failed = 0;
-    int nr_succeeded = 0;
     int nr_thp_succeeded = 0;
     int nr_thp_failed = 0;
     int nr_thp_split = 0;
@@ -1605,10 +1605,10 @@ retry:
             case MIGRATEPAGE_SUCCESS:
                 if (is_thp) {
                     nr_thp_succeeded++;
-                    nr_succeeded += nr_subpages;
+                    *nr_succeeded += nr_subpages;
                     break;
                 }
-                nr_succeeded++;
+                (*nr_succeeded)++;
                 break;
             default:
                 /*
@@ -1637,12 +1637,12 @@ out:
      */
     list_splice(&ret_pages, from);

-    count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
+    count_vm_events(PGMIGRATE_SUCCESS, *nr_succeeded);
     count_vm_events(PGMIGRATE_FAIL, nr_failed);
     count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
     count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
     count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
-    trace_mm_migrate_pages(nr_succeeded, nr_failed, nr_thp_succeeded,
+    trace_mm_migrate_pages(*nr_succeeded, nr_failed, nr_thp_succeeded,
                    nr_thp_failed, nr_thp_split, mode, reason);

     if (!swapwrite)
@@ -1710,6 +1710,7 @@ static int store_status(int __user *stat
 static int do_move_pages_to_node(struct mm_struct *mm,
         struct list_head *pagelist, int node)
 {
+    unsigned int nr_succeeded = 0;
     int err;
     struct migration_target_control mtc = {
         .nid = node,
@@ -1717,7 +1718,8 @@ static int do_move_pages_to_node(struct
     };

     err = migrate_pages(pagelist, alloc_migration_target, NULL,
-            (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+            (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL,
+            &nr_succeeded);
     if (err)
         putback_movable_pages(pagelist);
     return err;
@@ -2201,6 +2203,7 @@ int migrate_misplaced_page(struct page *
     pg_data_t *pgdat = NODE_DATA(node);
     int isolated;
     int nr_remaining;
+    unsigned int nr_succeeded = 0;
     LIST_HEAD(migratepages);

     /*
@@ -2224,7 +2227,7 @@ int migrate_misplaced_page(struct page *
     list_add(&page->lru, &migratepages);
     nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
                      NULL, node, MIGRATE_ASYNC,
-                     MR_NUMA_MISPLACED);
+                     MR_NUMA_MISPLACED, &nr_succeeded);
     if (nr_remaining) {
         if (!list_empty(&migratepages)) {
             list_del(&page->lru);
diff -puN mm/page_alloc.c~migrate_pages-add-success-return mm/page_alloc.c
--- a/mm/page_alloc.c~migrate_pages-add-success-return	2021-03-04 15:35:54.806806433 -0800
+++ b/mm/page_alloc.c	2021-03-04 15:35:54.842806433 -0800
@@ -8470,7 +8470,8 @@ static unsigned long pfn_max_align_up(un

 /* [start, end) must belong to a single zone. */
 static int __alloc_contig_migrate_range(struct compact_control *cc,
-                    unsigned long start, unsigned long end)
+                    unsigned long start, unsigned long end,
+                    unsigned int *nr_succeeded)
 {
     /* This function is based on compact_zone() from compaction.c. */
     unsigned int nr_reclaimed;
@@ -8508,7 +8509,8 @@ static int __alloc_contig_migrate_range(
         cc->nr_migratepages -= nr_reclaimed;

         ret = migrate_pages(&cc->migratepages, alloc_migration_target,
-                NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
+                NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE,
+                nr_succeeded);
     }
     if (ret < 0) {
         putback_movable_pages(&cc->migratepages);
@@ -8544,6 +8546,7 @@ int alloc_contig_range(unsigned long sta
     unsigned long outer_start, outer_end;
     unsigned int order;
     int ret = 0;
+    unsigned int nr_succeeded = 0;

     struct compact_control cc = {
         .nr_migratepages = 0,
@@ -8598,7 +8601,7 @@ int alloc_contig_range(unsigned long sta
      * allocated.  So, if we fall through be sure to clear ret so that
      * -EBUSY is not accidentally used or returned to caller.
      */
-    ret = __alloc_contig_migrate_range(&cc, start, end);
+    ret = __alloc_contig_migrate_range(&cc, start, end, &nr_succeeded);
     if (ret && ret != -EBUSY)
         goto done;
     ret = 0;
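The net effect of the new calling convention can be modeled in a few
lines of userspace C (toy_migrate_pages() is a made-up stand-in, not
the kernel function):

    #include <stdio.h>

    /*
     * Model of the new contract: the return value still reports pages
     * that were not migrated (or an error), while *nr_succeeded reports
     * successes even when the return value indicates failure.
     */
    static int toy_migrate_pages(int nr_pages, int nr_will_fail,
                                 unsigned int *nr_succeeded)
    {
        *nr_succeeded += nr_pages - nr_will_fail;
        return nr_will_fail;    /* pages that were not migrated */
    }

    int main(void)
    {
        unsigned int nr_succeeded = 0;
        int not_migrated = toy_migrate_pages(10, 3, &nr_succeeded);

        /* Before the patch, only the '3' was visible to the caller. */
        printf("not migrated: %d, succeeded: %u\n",
               not_migrated, nr_succeeded);
        return 0;
    }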
From patchwork Thu Mar 4 23:59:58 2021
Subject: [PATCH 05/10] mm/migrate: demote pages during reclaim
From: Dave Hansen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com,
    rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com,
    osalvador@suse.de
Date: Thu, 04 Mar 2021 15:59:58 -0800
Message-Id: <20210304235958.ECFA81E5@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
X-Patchwork-Id: 12117159

From: Dave Hansen

This is mostly derived from a patch from Yang Shi:

	https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/

Add code to the reclaim path (shrink_page_list()) to "demote" data
to another NUMA node instead of discarding the data.  This always
avoids the cost of I/O needed to read the page back in, and
sometimes avoids the writeout cost when the page is dirty.

A second pass through shrink_page_list() will be made if any
demotions fail.  This essentially falls back to normal reclaim
behavior in the case that demotions fail.  Previous versions of this
patch may have simply failed to reclaim pages which were eligible
for demotion but were unable to be demoted in practice.

Note: This just adds the start of infrastructure for migration.  It
is actually disabled next to the FIXME in migrate_demote_page_ok().

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: osalvador

---
changes from 20210122:
 * move from GFP_HIGHUSER -> GFP_HIGHUSER_MOVABLE (Ying)

changes from 202010:
 * add MR_NUMA_MISPLACED to trace MIGRATE_REASON define
 * make migrate_demote_page_ok() static, remove 'sc' arg until
   later patch
 * remove unnecessary alloc_demote_page() hugetlb warning
 * Simplify alloc_demote_page() gfp mask.  Depend on __GFP_NORETRY
   to make it lightweight instead of fancier stuff like leaving out
   __GFP_IO/FS.
 * Allocate migration page with alloc_migration_target() instead
   of allocating directly.

changes from 20200730:
 * Add another pass through shrink_page_list() when demotion fails.
---
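The two-pass flow described above can be sketched in userspace C as
follows (a deliberate simplification: the real code diverts candidate
pages onto a separate list during the scan and demotes them in one
batch; try_demote() is a made-up stand-in):

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES 6

    /* Toy stand-in for a successful demotion; pretend some fail. */
    static bool try_demote(int page)
    {
        return page % 3 != 0;
    }

    int main(void)
    {
        bool on_list[NPAGES], candidate[NPAGES];
        bool do_demote_pass = true;
        int reclaimed = 0;

        for (int i = 0; i < NPAGES; i++) {
            on_list[i] = true;
            candidate[i] = false;
        }

    retry:
        /* Pass over the page list, diverting demotion candidates. */
        for (int i = 0; i < NPAGES; i++) {
            if (!on_list[i])
                continue;
            on_list[i] = false;
            if (do_demote_pass)
                candidate[i] = true;    /* like the demote_pages list */
            else
                reclaimed++;            /* normal reclaim: swap/discard */
        }

        /* Demote candidates in bulk; failures go back for a retry pass. */
        if (do_demote_pass) {
            bool any_failed = false;

            for (int i = 0; i < NPAGES; i++) {
                if (!candidate[i])
                    continue;
                candidate[i] = false;
                if (try_demote(i)) {
                    reclaimed++;        /* demoted counts as reclaimed */
                } else {
                    on_list[i] = true;  /* splice back onto page list */
                    any_failed = true;
                }
            }
            do_demote_pass = false;
            if (any_failed)
                goto retry;
        }

        printf("reclaimed %d of %d pages\n", reclaimed, NPAGES);
        return 0;
    }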
 b/include/linux/migrate.h        |   13 ++++++++++-
 b/include/trace/events/migrate.h |    3 +-
 b/mm/vmscan.c                    |   81 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+), 3 deletions(-)

diff -puN include/linux/migrate.h~demote-with-migrate_pages include/linux/migrate.h
--- a/include/linux/migrate.h~demote-with-migrate_pages	2021-03-04 15:35:56.471806429 -0800
+++ b/include/linux/migrate.h	2021-03-04 15:35:56.479806429 -0800
@@ -27,6 +27,7 @@ enum migrate_reason {
     MR_MEMPOLICY_MBIND,
     MR_NUMA_MISPLACED,
     MR_CONTIG_RANGE,
+    MR_DEMOTION,
     MR_TYPES
 };

@@ -58,8 +59,8 @@ extern int migrate_page_move_mapping(str

 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t new,
-        unsigned long private, enum migrate_mode mode, int reason,
-        unsigned int *nr_succeeded)
+        free_page_t free, unsigned long private, enum migrate_mode mode,
+        int reason, unsigned int *nr_succeeded)
     { return -ENOSYS; }
 static inline struct page *alloc_migration_target(struct page *page,
         unsigned long private)
@@ -196,6 +197,14 @@ struct migrate_vma {
 int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
+int next_demotion_node(int node);
+
+#else /* CONFIG_MIGRATION disabled: */
+
+static inline int next_demotion_node(int node)
+{
+    return NUMA_NO_NODE;
+}

 #endif /* CONFIG_MIGRATION */

diff -puN include/trace/events/migrate.h~demote-with-migrate_pages include/trace/events/migrate.h
--- a/include/trace/events/migrate.h~demote-with-migrate_pages	2021-03-04 15:35:56.473806429 -0800
+++ b/include/trace/events/migrate.h	2021-03-04 15:35:56.479806429 -0800
@@ -20,7 +20,8 @@
     EM( MR_SYSCALL,        "syscall_or_cpuset")    \
     EM( MR_MEMPOLICY_MBIND,    "mempolicy_mbind")    \
     EM( MR_NUMA_MISPLACED,    "numa_misplaced")    \
-    EMe(MR_CONTIG_RANGE,    "contig_range")
+    EM( MR_CONTIG_RANGE,    "contig_range")        \
+    EMe(MR_DEMOTION,    "demotion")

 /*
  * First define the enums in the above macros to be exported to userspace

diff -puN mm/vmscan.c~demote-with-migrate_pages mm/vmscan.c
--- a/mm/vmscan.c~demote-with-migrate_pages	2021-03-04 15:35:56.475806429 -0800
+++ b/mm/vmscan.c	2021-03-04 15:35:56.482806429 -0800
@@ -41,6 +41,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1034,6 +1035,23 @@ static enum page_references page_check_r
     return PAGEREF_RECLAIM;
 }

+static bool migrate_demote_page_ok(struct page *page)
+{
+    int next_nid = next_demotion_node(page_to_nid(page));
+
+    VM_BUG_ON_PAGE(!PageLocked(page), page);
+    VM_BUG_ON_PAGE(PageHuge(page), page);
+    VM_BUG_ON_PAGE(PageLRU(page), page);
+
+    if (next_nid == NUMA_NO_NODE)
+        return false;
+    if (PageTransHuge(page) && !thp_migration_supported())
+        return false;
+
+    // FIXME: actually enable this later in the series
+    return false;
+}
+
 /* Check if a page is dirty or under writeback */
 static void page_check_dirty_writeback(struct page *page,
                        bool *dirty, bool *writeback)
@@ -1064,6 +1082,45 @@ static void page_check_dirty_writeback(s
         mapping->a_ops->is_dirty_writeback(page, dirty, writeback);
 }

+static struct page *alloc_demote_page(struct page *page, unsigned long node)
+{
+    struct migration_target_control mtc = {
+        /*
+         * Fail the allocation quickly and quietly.  When this
+         * happens, 'page' will likely just be discarded instead
+         * of migrated.
+         */
+        .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_NORETRY | __GFP_NOWARN,
+        .nid = node
+    };
+
+    return alloc_migration_target(page, (unsigned long)&mtc);
+}
+
+/*
+ * Take pages on @demote_list and attempt to demote them to
+ * another node.  Pages which are not demoted are left on
+ * @demote_pages.
+ */
+static unsigned int demote_page_list(struct list_head *demote_pages,
+                     struct pglist_data *pgdat,
+                     struct scan_control *sc)
+{
+    int target_nid = next_demotion_node(pgdat->node_id);
+    unsigned int nr_succeeded = 0;
+    int err;
+
+    if (list_empty(demote_pages))
+        return 0;
+
+    /* Demotion ignores all cpuset and mempolicy settings */
+    err = migrate_pages(demote_pages, alloc_demote_page, NULL,
+                target_nid, MIGRATE_ASYNC, MR_DEMOTION,
+                &nr_succeeded);
+
+    return nr_succeeded;
+}
+
 /*
  * shrink_page_list() returns the number of reclaimed pages
  */
@@ -1075,12 +1132,15 @@ static unsigned int shrink_page_list(str
 {
     LIST_HEAD(ret_pages);
     LIST_HEAD(free_pages);
+    LIST_HEAD(demote_pages);
     unsigned int nr_reclaimed = 0;
     unsigned int pgactivate = 0;
+    bool do_demote_pass = true;

     memset(stat, 0, sizeof(*stat));
     cond_resched();

+retry:
     while (!list_empty(page_list)) {
         struct address_space *mapping;
         struct page *page;
@@ -1230,6 +1290,16 @@ static unsigned int shrink_page_list(str
         }

         /*
+         * Before reclaiming the page, try to relocate
+         * its contents to another node.
+         */
+        if (do_demote_pass && migrate_demote_page_ok(page)) {
+            list_add(&page->lru, &demote_pages);
+            unlock_page(page);
+            continue;
+        }
+
+        /*
          * Anonymous process memory has backing store?
          * Try to allocate it some swap space here.
          * Lazyfree page could be freed directly
@@ -1479,6 +1549,17 @@ keep:
         list_add(&page->lru, &ret_pages);
         VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
     }
+    /* 'page_list' is always empty here */
+
+    /* Migrate pages selected for demotion */
+    nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
+    /* Pages that could not be demoted are still in @demote_pages */
+    if (!list_empty(&demote_pages)) {
+        /* Pages which failed to demote go back on @page_list for retry: */
+        list_splice_init(&demote_pages, page_list);
+        do_demote_pass = false;
+        goto retry;
+    }

     pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
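The gfp_mask in alloc_demote_page() is doing a lot of quiet work; an
annotated restatement (the annotations are editorial, not from the
patch):

    struct migration_target_control mtc = {
        /*
         * GFP_HIGHUSER_MOVABLE: allocate an ordinary movable user
         *     page on the target node.
         * __GFP_NORETRY: give up quickly if the target node is
         *     tight on memory, rather than stalling reclaim.
         * __GFP_NOWARN: failure is expected and handled (the page
         *     is reclaimed normally instead), so do not warn.
         */
        .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_NORETRY | __GFP_NOWARN,
        .nid = node
    };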
From patchwork Fri Mar 5 00:00:00 2021
X-Patchwork-Submitter: Dave Hansen
X-Patchwork-Id: 12117163
Subject: [PATCH 06/10] mm/vmscan: add page demotion counter
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Thu, 04 Mar 2021 16:00:00 -0800
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
Message-Id: <20210305000000.48BA4A97@viggo.jf.intel.com>

From: Yang Shi

Account the number of pages demoted during reclaim. Add the
pgdemote_kswapd and pgdemote_direct VM counters, shown in
/proc/vmstat.
[ daveh: reworked the __count_vm_events() calls a bit, and made them
  look at the THP size directly rather than getting data from
  migrate_pages() ]

Signed-off-by: Yang Shi
Signed-off-by: Dave Hansen
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
Reviewed-by: Yang Shi
---

Changes since 202010:
 * remove unused scan-control 'demoted' field
---

 b/include/linux/vm_event_item.h |    2 ++
 b/mm/vmscan.c                   |    5 +++++
 b/mm/vmstat.c                   |    2 ++
 3 files changed, 9 insertions(+)

diff -puN include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter include/linux/vm_event_item.h
--- a/include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter 2021-03-04 15:35:57.698806425 -0800
+++ b/include/linux/vm_event_item.h 2021-03-04 15:35:57.719806425 -0800
@@ -33,6 +33,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS
 		PGREUSE,
 		PGSTEAL_KSWAPD,
 		PGSTEAL_DIRECT,
+		PGDEMOTE_KSWAPD,
+		PGDEMOTE_DIRECT,
 		PGSCAN_KSWAPD,
 		PGSCAN_DIRECT,
 		PGSCAN_DIRECT_THROTTLE,

diff -puN mm/vmscan.c~mm-vmscan-add-page-demotion-counter mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-add-page-demotion-counter 2021-03-04 15:35:57.700806425 -0800
+++ b/mm/vmscan.c 2021-03-04 15:35:57.724806425 -0800
@@ -1118,6 +1118,11 @@ static unsigned int demote_page_list(str
 			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
 			    &nr_succeeded);

+	if (current_is_kswapd())
+		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
+	else
+		__count_vm_events(PGDEMOTE_DIRECT, nr_succeeded);
+
 	return nr_succeeded;
 }

diff -puN mm/vmstat.c~mm-vmscan-add-page-demotion-counter mm/vmstat.c
--- a/mm/vmstat.c~mm-vmscan-add-page-demotion-counter 2021-03-04 15:35:57.708806425 -0800
+++ b/mm/vmstat.c 2021-03-04 15:35:57.726806425 -0800
@@ -1244,6 +1244,8 @@ const char * const vmstat_text[] = {
 	"pgreuse",
 	"pgsteal_kswapd",
 	"pgsteal_direct",
+	"pgdemote_kswapd",
+	"pgdemote_direct",
 	"pgscan_kswapd",
 	"pgscan_direct",
 	"pgscan_direct_throttle",
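On a kernel carrying this patch, the new counters are ordinary /proc/vmstat
fields. A small userspace reader, assuming only the counter names added to
vmstat_text[] above:

/* Read the two demotion counters out of /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long long val;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* /proc/vmstat is a sequence of "name value" pairs. */
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "pgdemote_kswapd") ||
		    !strcmp(name, "pgdemote_direct"))
			printf("%s = %llu\n", name, val);
	}
	fclose(f);
	return 0;
}

The delta between two reads of pgdemote_kswapd shows how much background
demotion kswapd is doing versus direct reclaim.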
From patchwork Fri Mar 5 00:00:02 2021
X-Patchwork-Submitter: Dave Hansen
X-Patchwork-Id: 12117161
Subject: [PATCH 07/10] mm/vmscan: add helper for querying ability to age anonymous pages
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Thu, 04 Mar 2021 16:00:02 -0800
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
Message-Id: <20210305000002.D171810D@viggo.jf.intel.com>

From: Dave Hansen

Anonymous pages are kept on their own LRU(s). These lists could
theoretically always be scanned and maintained. But, without swap,
there is currently nothing the kernel can *do* with the results of a
scanned, sorted LRU for anonymous pages.

A check for '!total_swap_pages' currently serves as a valid test of
whether anonymous LRUs should be maintained. However, another method
will be added shortly: page demotion.

Abstract out the 'total_swap_pages' checks into a helper, give it a
logically significant name, and check for the possibility of page
demotion.

Signed-off-by: Dave Hansen
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
Reviewed-by: Yang Shi
Reviewed-by: Greg Thelen
---

 b/mm/vmscan.c |   28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff -puN mm/vmscan.c~mm-vmscan-anon-can-be-aged mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-anon-can-be-aged 2021-03-04 15:35:58.935806422 -0800
+++ b/mm/vmscan.c 2021-03-04 15:35:58.942806422 -0800
@@ -2517,6 +2517,26 @@ out:
 	}
 }

+/*
+ * Anonymous LRU management is a waste if there is
+ * ultimately no way to reclaim the memory.
+ */
+bool anon_should_be_aged(struct lruvec *lruvec)
+{
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+
+	/* Aging the anon LRU is valuable if swap is present: */
+	if (total_swap_pages > 0)
+		return true;
+
+	/* Also valuable if anon pages can be demoted: */
+	if (next_demotion_node(pgdat->node_id) >= 0)
+		return true;
+
+	/* No way to reclaim anon pages. Should not age anon LRUs: */
+	return false;
+}
+
 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	unsigned long nr[NR_LRU_LISTS];
@@ -2626,7 +2646,8 @@ static void shrink_lruvec(struct lruvec
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON))
+	if (anon_should_be_aged(lruvec) &&
+	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }

@@ -3455,10 +3476,11 @@ static void age_active_anon(struct pglis
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;

-	if (!total_swap_pages)
+	lruvec = mem_cgroup_lruvec(NULL, pgdat);
+
+	if (!anon_should_be_aged(lruvec))
 		return;

-	lruvec = mem_cgroup_lruvec(NULL, pgdat);
 	if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		return;
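To make the helper's truth table concrete, here is a standalone sketch with
the two pieces of kernel state it consults (total_swap_pages and the demotion
target) stubbed out as plain variables; everything here is illustrative, not
kernel code:

/* Standalone rendering of the anon_should_be_aged() decision above. */
#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE (-1)

static long total_swap_pages;			/* stub for the kernel global */
static int demotion_target = NUMA_NO_NODE;	/* stub for next_demotion_node() */

static bool anon_should_be_aged(void)
{
	if (total_swap_pages > 0)	/* swap present: aging is useful */
		return true;
	if (demotion_target >= 0)	/* demotion path present: also useful */
		return true;
	return false;			/* nowhere to put anon pages */
}

int main(void)
{
	printf("no swap, no target: %d\n", anon_should_be_aged());
	demotion_target = 1;		/* node 1 becomes a demotion target */
	printf("no swap, target=1 : %d\n", anon_should_be_aged());
	demotion_target = NUMA_NO_NODE;
	total_swap_pages = 1024;	/* swap device added */
	printf("swap, no target   : %d\n", anon_should_be_aged());
	return 0;
}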
From patchwork Fri Mar 5 00:00:04 2021
X-Patchwork-Submitter: Dave Hansen
X-Patchwork-Id: 12117167
Subject: [PATCH 08/10] mm/vmscan: Consider anonymous pages without swap
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, kbusch@kernel.org, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Thu, 04 Mar 2021 16:00:04 -0800
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
Message-Id: <20210305000004.20A8D23F@viggo.jf.intel.com>

From: Keith Busch

Reclaim anonymous pages if a migration path is available now that
demotion provides a non-swap recourse for reclaiming anon pages.

Note that this check is subtly different from the anon_should_be_aged()
checks. This mechanism checks whether a specific page in a specific
context *can* actually be reclaimed, given current swap space and
cgroup limits. anon_should_be_aged() is a much simpler and more
preliminary check which just says whether there is a possibility of
future reclaim.

#Signed-off-by: Keith Busch
Cc: Keith Busch
Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
Reviewed-by: Yang Shi
---

Changes from Dave 10/2020:
 * remove 'total_swap_pages' modification

Changes from Dave 06/2020:
 * rename reclaim_anon_pages()->can_reclaim_anon_pages()

Note: Keith's Intel SoB is commented out because he is no longer at
Intel and his @intel.com mail will bounce.
---

 b/mm/vmscan.c |   35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

diff -puN mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap mm/vmscan.c
--- a/mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap 2021-03-04 15:35:59.994806420 -0800
+++ b/mm/vmscan.c 2021-03-04 15:36:00.001806420 -0800
@@ -287,6 +287,34 @@ static bool writeback_throttling_sane(st
 }
 #endif

+static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
+					  int node_id)
+{
+	if (memcg == NULL) {
+		/*
+		 * For non-memcg reclaim, is there
+		 * space in any swap device?
+		 */
+		if (get_nr_swap_pages() > 0)
+			return true;
+	} else {
+		/* Is the memcg below its swap limit? */
+		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
+			return true;
+	}
+
+	/*
+	 * The page cannot be swapped.
+	 *
+	 * Can it be reclaimed from this node via demotion?
+	 */
+	if (next_demotion_node(node_id) >= 0)
+		return true;
+
+	/* No way to reclaim anon pages */
+	return false;
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -298,7 +326,7 @@ unsigned long zone_reclaimable_pages(str
 	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
 		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
+	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);

@@ -2332,6 +2360,7 @@ enum scan_balance {
 static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 			   unsigned long *nr)
 {
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	unsigned long anon_cost, file_cost, total_cost;
 	int swappiness = mem_cgroup_swappiness(memcg);
@@ -2342,7 +2371,7 @@ static void get_scan_count(struct lruvec
 	enum lru_list lru;

 	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
+	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}

@@ -2717,7 +2746,7 @@ static inline bool should_continue_recla
 	 */
 	pages_for_compaction = compact_gap(sc->order);
 	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
+	if (can_reclaim_anon_pages(NULL, pgdat->node_id))
 		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);

 	return inactive_lru_pages > pages_for_compaction;
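The memcg/global split in the helper can be exercised in isolation with this
userspace sketch; struct mem_cgroup, the swap counters and the demotion target
are all stand-in stubs, not the kernel's definitions:

/* Sketch of the can_reclaim_anon_pages() split: global reclaim asks about
 * total swap space, memcg reclaim asks about the cgroup's own swap budget,
 * and an available demotion target rescues both cases. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct mem_cgroup { long nr_swap_pages; };	/* stub: remaining swap budget */

static long global_swap_pages;			/* stub for get_nr_swap_pages() */
static int demotion_target = -1;		/* stub for next_demotion_node() */

static bool can_reclaim_anon_pages(struct mem_cgroup *memcg)
{
	if (memcg == NULL) {
		if (global_swap_pages > 0)	/* any swap device has room? */
			return true;
	} else {
		if (memcg->nr_swap_pages > 0)	/* below the memcg swap limit? */
			return true;
	}
	return demotion_target >= 0;		/* otherwise: demotion or nothing */
}

int main(void)
{
	struct mem_cgroup cg = { .nr_swap_pages = 0 };

	global_swap_pages = 4096;
	printf("global, swap free: %d\n", can_reclaim_anon_pages(NULL));
	printf("memcg at limit   : %d\n", can_reclaim_anon_pages(&cg));
	demotion_target = 1;
	printf("memcg + demotion : %d\n", can_reclaim_anon_pages(&cg));
	return 0;
}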
From patchwork Fri Mar 5 00:00:06 2021
X-Patchwork-Submitter: Dave Hansen
X-Patchwork-Id: 12117169
Subject: [PATCH 09/10] mm/vmscan: never demote for memcg reclaim
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Thu, 04 Mar 2021 16:00:06 -0800
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
Message-Id: <20210305000006.3799F4BE@viggo.jf.intel.com>

From: Dave Hansen

Global reclaim aims to reduce the amount of memory used on a given node
or set of nodes. Migrating pages to another node serves this purpose.

memcg reclaim is different. Its goal is to reduce the total memory
consumption of the entire memcg, across all nodes. Migration does not
assist memcg reclaim because it just moves page contents between nodes
rather than actually reducing memory consumption.
Signed-off-by: Dave Hansen
Suggested-by: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
Reviewed-by: Yang Shi
---

 b/mm/vmscan.c |   18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff -puN mm/vmscan.c~never-demote-for-memcg-reclaim mm/vmscan.c
--- a/mm/vmscan.c~never-demote-for-memcg-reclaim 2021-03-04 15:36:01.067806417 -0800
+++ b/mm/vmscan.c 2021-03-04 15:36:01.072806417 -0800
@@ -288,7 +288,8 @@ static bool writeback_throttling_sane(st
 #endif

 static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
-					  int node_id)
+					  int node_id,
+					  struct scan_control *sc)
 {
 	if (memcg == NULL) {
 		/*
@@ -326,7 +327,7 @@ unsigned long zone_reclaimable_pages(str
 	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
 		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
-	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
+	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);

@@ -1063,7 +1064,8 @@ static enum page_references page_check_r
 	return PAGEREF_RECLAIM;
 }

-static bool migrate_demote_page_ok(struct page *page)
+static bool migrate_demote_page_ok(struct page *page,
+				   struct scan_control *sc)
 {
 	int next_nid = next_demotion_node(page_to_nid(page));

@@ -1071,6 +1073,10 @@ static bool migrate_demote_page_ok(struc
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);

+	/* It is pointless to do demotion in memcg reclaim */
+	if (cgroup_reclaim(sc))
+		return false;
+
 	if (next_nid == NUMA_NO_NODE)
 		return false;
 	if (PageTransHuge(page) && !thp_migration_supported())
@@ -1326,7 +1332,7 @@ retry:
 	 * Before reclaiming the page, try to relocate
 	 * its contents to another node.
 	 */
-	if (do_demote_pass && migrate_demote_page_ok(page)) {
+	if (do_demote_pass && migrate_demote_page_ok(page, sc)) {
 		list_add(&page->lru, &demote_pages);
 		unlock_page(page);
 		continue;
@@ -2371,7 +2377,7 @@ static void get_scan_count(struct lruvec
 	enum lru_list lru;

 	/* If we have no swap space, do not bother scanning anon pages.
	 */
-	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
+	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}

@@ -2746,7 +2752,7 @@ static inline bool should_continue_recla
 	 */
 	pages_for_compaction = compact_gap(sc->order);
 	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
-	if (can_reclaim_anon_pages(NULL, pgdat->node_id))
+	if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
 		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);

 	return inactive_lru_pages > pages_for_compaction;
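For readers without the tree at hand: the cgroup_reclaim() predicate used
above keys off whether the scan_control carries a target memcg. The rendering
below is a simplified assumption of its shape in kernels of this era, not a
quote of mm/vmscan.c:

/* Simplified sketch: reclaim counts as "cgroup reclaim" exactly when a
 * target memcg was recorded on the scan_control by the caller. */
#include <stdbool.h>
#include <stddef.h>

struct mem_cgroup;

struct scan_control {
	/* non-NULL only when reclaim was triggered by a memcg limit */
	struct mem_cgroup *target_mem_cgroup;
	/* ... many other fields elided ... */
};

static bool cgroup_reclaim(struct scan_control *sc)
{
	return sc->target_mem_cgroup != NULL;
}

int main(void)
{
	struct scan_control global = { .target_mem_cgroup = NULL };

	return cgroup_reclaim(&global);	/* 0: global reclaim may demote */
}

Passing sc down through can_reclaim_anon_pages() and
migrate_demote_page_ok() is what lets one line of policy distinguish the two
reclaim modes.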
From patchwork Fri Mar 5 00:00:09 2021
X-Patchwork-Submitter: Dave Hansen
X-Patchwork-Id: 12117165
Subject: [PATCH 10/10] mm/migrate: new zone_reclaim_mode to enable reclaim migration
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Thu, 04 Mar 2021 16:00:09 -0800
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
In-Reply-To: <20210304235949.7922C1C3@viggo.jf.intel.com>
Message-Id: <20210305000009.EDF902E9@viggo.jf.intel.com>

From: Dave Hansen

Some method is obviously needed to enable reclaim-based migration.

Just like traditional autonuma, there will be some workloads that will
benefit, like workloads with more "static" configurations where hot
pages stay hot and cold pages stay cold. If pages come and go from the
hot and cold sets, the benefits of this approach will be more limited.

The benefits are truly workload-based and *not* hardware-based. We do
not believe that there is a viable threshold where certain hardware
configurations should have this mechanism enabled while others do not.

To be conservative, earlier work defaulted to disable reclaim-based
migration and did not include a mechanism to enable it. This proposes
extending the existing "zone_reclaim_mode" (now really
node_reclaim_mode) as a method to enable it. We are open to any
alternative that allows end users to enable this mechanism or disable
it if workload harm is detected (just like traditional autonuma).

Once this is enabled, page demotion may move data to a NUMA node that
does not fall into the cpuset of the allocating process. This could be
construed to violate the guarantees of cpusets. However, since this is
an opt-in mechanism, the assumption is that anyone enabling it is
content to relax the guarantees.

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador

changes since 20200122:
 * Changelog material about relaxing cpuset constraints
---

 b/Documentation/admin-guide/sysctl/vm.rst |    9 +++++++++
 b/include/linux/swap.h                    |    3 ++-
 b/include/uapi/linux/mempolicy.h          |    1 +
 b/mm/vmscan.c                             |    6 ++++--
 4 files changed, 16 insertions(+), 3 deletions(-)

diff -puN Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE Documentation/admin-guide/sysctl/vm.rst
--- a/Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE 2021-03-04 15:36:26.078806355 -0800
+++ b/Documentation/admin-guide/sysctl/vm.rst 2021-03-04 15:36:26.093806355 -0800
@@ -976,6 +976,7 @@ This is value OR'ed together of
 1	Zone reclaim on
 2	Zone reclaim writes dirty pages out
 4	Zone reclaim swaps pages
+8	Zone reclaim migrates pages
 =	===================================

 zone_reclaim_mode is disabled by default. For file servers or workloads
@@ -1000,3 +1001,11 @@ of other processes running on other node
 Allowing regular swap effectively restricts allocations to the local
 node unless explicitly overridden by memory policies or cpuset
 configurations.
+
+Page migration during reclaim is intended for systems with tiered memory
+configurations.
+These systems have multiple types of memory with varied
+performance characteristics instead of plain NUMA systems where the
+same kind of memory is found at varied distances. Allowing page
+migration during reclaim enables these systems to migrate pages from
+fast tiers to slow tiers when the fast tier is under pressure. This
+migration is performed before swap.

diff -puN include/linux/swap.h~RECLAIM_MIGRATE include/linux/swap.h
--- a/include/linux/swap.h~RECLAIM_MIGRATE 2021-03-04 15:36:26.082806355 -0800
+++ b/include/linux/swap.h 2021-03-04 15:36:26.093806355 -0800
@@ -382,7 +382,8 @@ extern int sysctl_min_slab_ratio;
 static inline bool node_reclaim_enabled(void)
 {
 	/* Is any node_reclaim_mode bit set? */
-	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|
+				    RECLAIM_UNMAP|RECLAIM_MIGRATE);
 }

 extern void check_move_unevictable_pages(struct pagevec *pvec);

diff -puN include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE include/uapi/linux/mempolicy.h
--- a/include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE 2021-03-04 15:36:26.084806355 -0800
+++ b/include/uapi/linux/mempolicy.h 2021-03-04 15:36:26.094806355 -0800
@@ -69,5 +69,6 @@ enum {
 #define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
 #define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
 #define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
+#define RECLAIM_MIGRATE	(1<<3)	/* Migrate to other nodes during reclaim */

 #endif /* _UAPI_LINUX_MEMPOLICY_H */

diff -puN mm/vmscan.c~RECLAIM_MIGRATE mm/vmscan.c
--- a/mm/vmscan.c~RECLAIM_MIGRATE 2021-03-04 15:36:26.087806355 -0800
+++ b/mm/vmscan.c 2021-03-04 15:36:26.096806355 -0800
@@ -1073,6 +1073,9 @@ static bool migrate_demote_page_ok(struc
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);

+	if (!(node_reclaim_mode & RECLAIM_MIGRATE))
+		return false;
+
 	/* It is pointless to do demotion in memcg reclaim */
 	if (cgroup_reclaim(sc))
 		return false;
@@ -1082,8 +1085,7 @@ static bool migrate_demote_page_ok(struc
 	if (PageTransHuge(page) && !thp_migration_supported())
 		return false;

-	// FIXME: actually enable this later in the series
-	return false;
+	return true;
 }

 /* Check if a page is dirty or under writeback */
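With the series applied, opting in is a single sysctl write. Below is a small
sketch that sets the new RECLAIM_MIGRATE bit (value 8, per the documentation
hunk above) while preserving whatever bits are already configured; it assumes
the long-standing /proc/sys/vm/zone_reclaim_mode path and requires root:

/* Enable reclaim-based demotion: OR bit 3 into zone_reclaim_mode. */
#include <stdio.h>

#define RECLAIM_MIGRATE (1 << 3)	/* new bit added by this patch */

int main(void)
{
	const char *path = "/proc/sys/vm/zone_reclaim_mode";
	unsigned int mode = 0;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%u", &mode) != 1)	/* read the current bits */
		mode = 0;
	fclose(f);

	f = fopen(path, "w");
	if (!f) {
		perror("fopen (need root?)");
		return 1;
	}
	fprintf(f, "%u\n", mode | RECLAIM_MIGRATE);	/* write them back, plus bit 3 */
	fclose(f);
	printf("zone_reclaim_mode: %u -> %u\n", mode, mode | RECLAIM_MIGRATE);
	return 0;
}

Echoing the combined value into the same file from a root shell is
equivalent.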