From patchwork Mon Mar 11 19:36:41 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13589297
From: Zi Yan
To: linux-mm@kvack.org
Cc: Zi Yan, Andrew Morton, "Matthew Wilcox (Oracle)", Yang Shi, Huang Ying,
    "Kirill A . Shutemov", Ryan Roberts, linux-kernel@vger.kernel.org
Subject: [PATCH] mm/migrate: put dest folio on deferred split list if source
    was there.
Date: Mon, 11 Mar 2024 15:36:41 -0400
Message-ID: <20240311193641.133981-1-zi.yan@sent.com>
X-Mailer: git-send-email 2.43.0
Reply-To: Zi Yan

From: Zi Yan

Commit 616b8371539a6 ("mm: thp: enable thp migration in generic path") did
not check whether a THP was on the deferred split list before migration, so
the destination THP is never put on the deferred split list even if the
source THP was. This loses the opportunity to reclaim free pages from a
partially mapped THP during deferred split list scanning, but there is no
other harmful consequence[1].

Check the source folio's deferred split list status before it is unmapped,
and add the destination folio to the list after migration if the source
folio was on it.
[1]: https://lore.kernel.org/linux-mm/03CE3A00-917C-48CC-8E1C-6A98713C817C@nvidia.com/

Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
Signed-off-by: Zi Yan
---
 mm/huge_memory.c | 22 ----------------------
 mm/internal.h    | 23 +++++++++++++++++++++++
 mm/migrate.c     | 26 +++++++++++++++++++++++++-
 3 files changed, 48 insertions(+), 23 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..c6d4d0cdf4b3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -766,28 +766,6 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 	return pmd;
 }
 
-#ifdef CONFIG_MEMCG
-static inline
-struct deferred_split *get_deferred_split_queue(struct folio *folio)
-{
-	struct mem_cgroup *memcg = folio_memcg(folio);
-	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
-
-	if (memcg)
-		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
-}
-#else
-static inline
-struct deferred_split *get_deferred_split_queue(struct folio *folio)
-{
-	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
-
-	return &pgdat->deferred_split_queue;
-}
-#endif
-
 void folio_prep_large_rmappable(struct folio *folio)
 {
 	if (!folio || !folio_test_large(folio))
diff --git a/mm/internal.h b/mm/internal.h
index d1c69119b24f..8fa36e84463a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1107,6 +1107,29 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 				   unsigned long addr, pmd_t *pmd,
 				   unsigned int flags);
 
+#ifdef CONFIG_MEMCG
+static inline
+struct deferred_split *get_deferred_split_queue(struct folio *folio)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
+
+	if (memcg)
+		return &memcg->deferred_split_queue;
+	else
+		return &pgdat->deferred_split_queue;
+}
+#else
+static inline
+struct deferred_split *get_deferred_split_queue(struct folio *folio)
+{
+	struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
+
+	return &pgdat->deferred_split_queue;
+}
+#endif
+
+
 /*
  * mm/mmap.c
  */
diff --git a/mm/migrate.c b/mm/migrate.c
index 73a052a382f1..84ba1c65d20d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1037,7 +1038,10 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 enum {
 	PAGE_WAS_MAPPED = BIT(0),
 	PAGE_WAS_MLOCKED = BIT(1),
-	PAGE_OLD_STATES = PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED,
+	PAGE_WAS_ON_DEFERRED_LIST = BIT(2),
+	PAGE_OLD_STATES = PAGE_WAS_MAPPED |
+			  PAGE_WAS_MLOCKED |
+			  PAGE_WAS_ON_DEFERRED_LIST,
 };
 
 static void __migrate_folio_record(struct folio *dst,
@@ -1168,6 +1172,17 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 			folio_lock(src);
 		}
 		locked = true;
+		if (folio_test_large_rmappable(src) &&
+		    !list_empty(&src->_deferred_list)) {
+			struct deferred_split *ds_queue = get_deferred_split_queue(src);
+
+			spin_lock(&ds_queue->split_queue_lock);
+			ds_queue->split_queue_len--;
+			list_del_init(&src->_deferred_list);
+			spin_unlock(&ds_queue->split_queue_lock);
+			old_page_state |= PAGE_WAS_ON_DEFERRED_LIST;
+		}
+
 		if (folio_test_mlocked(src))
 			old_page_state |= PAGE_WAS_MLOCKED;
 
@@ -1307,6 +1322,15 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	if (old_page_state & PAGE_WAS_MAPPED)
 		remove_migration_ptes(src, dst, false);
 
+	if (old_page_state & PAGE_WAS_ON_DEFERRED_LIST) {
+		struct deferred_split *ds_queue = get_deferred_split_queue(src);
+
+		spin_lock(&ds_queue->split_queue_lock);
+		ds_queue->split_queue_len++;
+		list_add(&dst->_deferred_list, &ds_queue->split_queue);
+		spin_unlock(&ds_queue->split_queue_lock);
+	}
+
 out_unlock_both:
 	folio_unlock(dst);
 	set_page_owner_migrate_reason(&dst->page, reason);
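
For illustration only, and not part of the patch: a minimal userspace sketch
of the bookkeeping the two migrate.c hunks perform, i.e. record the source's
deferred-list membership before unmap and transfer it to the destination
after the move. All names below (toy_queue, toy_folio, unmap_side, move_side)
are invented stand-ins for the kernel's deferred_split queue, the folio
_deferred_list, and migrate_folio_unmap()/migrate_folio_move().

#include <stdio.h>
#include <stdbool.h>

struct toy_folio {
	int id;
	bool on_deferred_list;	/* stands in for !list_empty(&folio->_deferred_list) */
};

struct toy_queue {
	unsigned long split_queue_len;	/* stands in for ds_queue->split_queue_len */
};

/* Unmap side: remember and clear the source's list membership. */
static bool unmap_side(struct toy_queue *q, struct toy_folio *src)
{
	if (!src->on_deferred_list)
		return false;
	src->on_deferred_list = false;
	q->split_queue_len--;
	return true;	/* caller records PAGE_WAS_ON_DEFERRED_LIST */
}

/* Move side: put the destination on the list iff the source was on it. */
static void move_side(struct toy_queue *q, struct toy_folio *dst, bool was_on_list)
{
	if (!was_on_list)
		return;
	dst->on_deferred_list = true;
	q->split_queue_len++;
}

int main(void)
{
	struct toy_queue q = { .split_queue_len = 1 };
	struct toy_folio src = { .id = 1, .on_deferred_list = true };
	struct toy_folio dst = { .id = 2, .on_deferred_list = false };

	bool was_on_list = unmap_side(&q, &src);
	move_side(&q, &dst, was_on_list);

	/* Expected: dst is on the list and the queue length is unchanged. */
	printf("dst on deferred list: %d, queue len: %lu\n",
	       dst.on_deferred_list, q.split_queue_len);
	return 0;
}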