From patchwork Wed Jan 15 09:53:43 2025
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13940177
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ziy@nvidia.com,
 shivankg@amd.com
Subject: [PATCH] mm: separate move/undo parts from migrate_pages_batch()
Date: Wed, 15 Jan 2025 18:53:43 +0900
Message-Id: <20250115095343.46390-1-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
Hi,

This is a part of the LUF (lazy unmap flush) patchset, and it has also
been referred to several times by e.g. Shivank Garg and Zi Yan, who are
currently working on page migration optimization.  Why don't we take
this first, so that such migration optimization attempts can use it?
A short usage sketch follows the patch below.

	Byungchul

--->8---
From a65a6e4975962707bf87171e317f005c6276887e Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul@sk.com>
Date: Thu, 8 Aug 2024 15:53:58 +0900
Subject: [PATCH] mm: separate move/undo parts from migrate_pages_batch()

Functionally, no change.  This is a preparation for migration
optimization attempts that require separate folio lists for their own
handling during migration.  Refactor migrate_pages_batch() so as to
split the move/undo parts out of it.
Signed-off-by: Byungchul Park <byungchul@sk.com>
Reviewed-by: Shivank Garg <shivankg@amd.com>
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dfb5eba3c5223..c8f25434b9973 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1695,6 +1695,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch(rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1717,7 +1792,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1888,42 +1963,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-					folio, dst, mode,
-					reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1932,20 +1976,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }
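
For reference, here is a minimal sketch (not part of the patch) of how
a migration optimization could drive the two split-out helpers on folio
lists of its own.  It assumes the caller sits in mm/migrate.c, since
the helpers, struct migrate_pages_stats and NR_MAX_MIGRATE_PAGES_RETRY
are all internal to that file, and that the two lists were already
filled by the unmap phase, i.e. migrate_folio_unmap().  The function
name my_move_unmapped_folios() and both list parameters are made up for
illustration only.

static int my_move_unmapped_folios(struct list_head *my_unmap_folios,
		struct list_head *my_dst_folios,
		free_folio_t put_new_folio, unsigned long private,
		enum migrate_mode mode, int reason,
		struct list_head *ret_folios,
		struct migrate_pages_stats *stats)
{
	int nr_failed = 0;
	int nr_retry_pages = 0;
	int retry = 1;
	int thp_retry = 1;
	int pass;

	/* Same retry policy as migrate_pages_batch(): up to 10 passes. */
	for (pass = 0; pass < NR_MAX_MIGRATE_PAGES_RETRY && retry; pass++) {
		retry = 0;
		thp_retry = 0;
		nr_retry_pages = 0;

		/*
		 * Move the folios unmapped earlier.  Folios failing
		 * with -EAGAIN stay on the two lists and get retried
		 * in the next pass; the rest are taken off the lists
		 * by the helper itself.
		 */
		migrate_folios_move(my_unmap_folios, my_dst_folios,
				put_new_folio, private, mode, reason,
				ret_folios, stats, &retry, &thp_retry,
				&nr_failed, &nr_retry_pages);
	}
	nr_failed += retry;
	stats->nr_thp_failed += thp_retry;
	stats->nr_failed_pages += nr_retry_pages;

	/* Restore whatever still could not be moved. */
	migrate_folios_undo(my_unmap_folios, my_dst_folios,
			put_new_folio, private, ret_folios);
	return nr_failed;
}

The point of the split shows up here: because the move and undo phases
are now self-contained, a caller such as LUF can keep its own
unmap_folios/dst_folios pair alive and run the two phases at a time of
its choosing, instead of open-coding the loops inside
migrate_pages_batch().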