From patchwork Mon Feb 19 06:04:07 2024
X-Patchwork-Submitter: Byungchul Park <byungchul@sk.com>
X-Patchwork-Id: 13562186
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, namit@vmware.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v8 8/8] mm: Pause migrc mechanism at high memory pressure
Date: Mon, 19 Feb 2024 15:04:07 +0900
Message-Id: <20240219060407.25254-9-byungchul@sk.com>
In-Reply-To: <20240219060407.25254-1-byungchul@sk.com>
References: <20240219060407.25254-1-byungchul@sk.com>
A regression was observed when the system is under high memory pressure
with swap on: migrc might keep a number of folios in its pending queue,
which can make things worse. So temporarily prevent migrc from working
in that condition.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 mm/internal.h   | 20 ++++++++++++++++++++
 mm/migrate.c    | 18 +++++++++++++++++-
 mm/page_alloc.c | 13 +++++++++++++
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index ab02cb8306e2..55781f879fb2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1285,6 +1285,8 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
 #endif /* CONFIG_SHRINKER_DEBUG */
 
 #if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+extern atomic_t migrc_pause_cnt;
+
 /*
  * Reset the indicator indicating there are no writable mappings at the
  * beginning of every rmap traverse for unmap. Migrc can work only when
@@ -1313,6 +1315,21 @@ static inline bool can_migrc_test(void)
 	return current->can_migrc && current->tlb_ubc_ro.flush_required;
 }
 
+static inline void migrc_pause(void)
+{
+	atomic_inc(&migrc_pause_cnt);
+}
+
+static inline void migrc_resume(void)
+{
+	atomic_dec(&migrc_pause_cnt);
+}
+
+static inline bool migrc_paused(void)
+{
+	return !!atomic_read(&migrc_pause_cnt);
+}
+
 /*
  * Return the number of folios pending TLB flush that have yet to get
  * freed in the zone.
@@ -1332,6 +1349,9 @@ void migrc_flush_end(struct tlbflush_unmap_batch *batch);
 static inline void can_migrc_init(void) {}
 static inline void can_migrc_fail(void) {}
 static inline bool can_migrc_test(void) { return false; }
+static inline void migrc_pause(void) {}
+static inline void migrc_resume(void) {}
+static inline bool migrc_paused(void) { return false; }
 static inline int migrc_pending_nr_in_zone(struct zone *z) { return 0; }
 static inline bool migrc_flush_free_folios(void) { return false; }
 static inline void migrc_flush_start(void) {}
diff --git a/mm/migrate.c b/mm/migrate.c
index cbe5372f159e..fbc8586ed735 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -62,6 +62,12 @@ static struct tlbflush_unmap_batch migrc_ubc;
 static LIST_HEAD(migrc_folios);
 static DEFINE_SPINLOCK(migrc_lock);
 
+/*
+ * Increase on entry of handling high memory pressure e.g. direct
+ * reclaim, decrease on the exit. See __alloc_pages_slowpath().
+ */
+atomic_t migrc_pause_cnt = ATOMIC_INIT(0);
+
 static void init_tlb_ubc(struct tlbflush_unmap_batch *ubc)
 {
 	arch_tlbbatch_clear(&ubc->arch);
@@ -1922,7 +1928,8 @@ static int migrate_pages_batch(struct list_head *from,
 	 */
 	init_tlb_ubc(&pending_ubc);
 	do_migrc = IS_ENABLED(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH) &&
-		   (reason == MR_DEMOTION || reason == MR_NUMA_MISPLACED);
+		   (reason == MR_DEMOTION || reason == MR_NUMA_MISPLACED) &&
+		   !migrc_paused();
 
 	for (pass = 0; pass < nr_pass && retry; pass++) {
 		retry = 0;
@@ -1961,6 +1968,15 @@ static int migrate_pages_batch(struct list_head *from,
 			continue;
 		}
 
+		/*
+		 * In case that the system is in high memory
+		 * pressure, give up migrc mechanism this turn.
+		 */
+		if (unlikely(do_migrc && migrc_paused())) {
+			fold_ubc(tlb_ubc, &pending_ubc);
+			do_migrc = false;
+		}
+
 		can_migrc_init();
 		rc = migrate_folio_unmap(get_new_folio, put_new_folio,
 				private, folio, &dst, mode, reason,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6ef0c22b1109..366777afce7f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4072,6 +4072,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	unsigned int cpuset_mems_cookie;
 	unsigned int zonelist_iter_cookie;
 	int reserve_flags;
+	bool migrc_paused = false;
 
 restart:
 	compaction_retries = 0;
@@ -4203,6 +4204,16 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (page)
 		goto got_pg;
 
+	/*
+	 * The system is in very high memory pressure. Pause migrc from
+	 * expanding its pending queue temporarily.
+	 */
+	if (!migrc_paused) {
+		migrc_pause();
+		migrc_paused = true;
+		migrc_flush_free_folios();
+	}
+
 	/* Caller is not willing to reclaim, we can't balance anything */
 	if (!can_direct_reclaim)
 		goto nopage;
@@ -4330,6 +4341,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	warn_alloc(gfp_mask, ac->nodemask,
 		"page allocation failure: order:%u", order);
 got_pg:
+	if (migrc_paused)
+		migrc_resume();
 	return page;
 }