From patchwork Wed Jan 15 10:34:03 2025
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13940272
From: Byungchul Park
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ziy@nvidia.com, shivankg@amd.com
Subject: [PATCH v2] mm: separate move/undo parts from migrate_pages_batch()
Date: Wed, 15 Jan 2025 19:34:03 +0900
Message-Id: <20250115103403.11882-1-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20250115095343.46390-1-byungchul@sk.com>
References: <20250115095343.46390-1-byungchul@sk.com>
Changes from v1
 - Fix a wrong coding style

--->8---
From a65a6e4975962707bf87171e317f005c6276887e Mon Sep 17 00:00:00 2001
From: Byungchul Park
Date: Thu, 8 Aug 2024 15:53:58 +0900
Subject: [PATCH v2] mm: separate move/undo parts from migrate_pages_batch()

Functionally, no change. This is a preparation for the luf mechanism,
which requires separate folio lists for its own handling during
migration. Refactor migrate_pages_batch() so as to split the move and
undo parts out into their own helpers.

Signed-off-by: Byungchul Park
Reviewed-by: Shivank Garg
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dfb5eba3c5223..c8f25434b9973 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1695,6 +1695,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch (rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1717,7 +1792,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1888,42 +1963,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-					folio, dst, mode,
-					reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1932,20 +1976,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 	return rc;
 }