From patchwork Thu Apr 18 06:15:33 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13634193
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
	vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
	willy@infradead.org, david@redhat.com, peterz@infradead.org,
	luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts
 from migrate_pages_batch()
Date: Thu, 18 Apr 2024 15:15:33 +0900
Message-Id: <20240418061536.11645-6-byungchul@sk.com>
In-Reply-To: <20240418061536.11645-1-byungchul@sk.com>
References: <20240418061536.11645-1-byungchul@sk.com>
Functionally, no change. This is preparation for the migrc mechanism,
which needs separate folio lists for its own handling during migration.
Refactor migrate_pages_batch() by splitting the move and undo parts out
into their own functions.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7692f303fa7..f9ed7a2b8720 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1609,6 +1609,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch(rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1631,7 +1706,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1790,42 +1865,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-					folio, dst, mode,
-					reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1834,20 +1878,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 	return rc;
 }