From patchwork Fri Aug 16 09:04:34 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13765781
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: David Hildenbrand, Oscar Salvador, Miaohe Lin, Naoya Horiguchi, Kefeng Wang
Subject: [PATCH v2 4/5] mm: migrate: add isolate_folio_to_list()
Date: Fri, 16 Aug 2024 17:04:34 +0800
Message-ID: <20240816090435.888946-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20240816090435.888946-1-wangkefeng.wang@huawei.com>
References: <20240816090435.888946-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0
Add an isolate_folio_to_list() helper to try to isolate HugeTLB, non-LRU
movable and LRU folios to a list. It will be reused by do_migrate_range()
from memory hotplug soon. Also drop mf_isolate_folio(), since
soft_offline_in_use_page() can use the new helper directly.
Acked-by: David Hildenbrand
Signed-off-by: Kefeng Wang
---
 include/linux/migrate.h |  3 +++
 mm/memory-failure.c     | 46 ++++++++++-------------------------------
 mm/migrate.c            | 27 ++++++++++++++++++++++++
 3 files changed, 41 insertions(+), 35 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 644be30b69c8..002e49b2ebd9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 93848330de1f..d8298017bd99 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2659,40 +2659,6 @@ EXPORT_SYMBOL(unpoison_memory);
 #undef pr_fmt
 #define pr_fmt(fmt) "Soft offline: " fmt
 
-static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
-{
-	bool isolated = false;
-
-	if (folio_test_hugetlb(folio)) {
-		isolated = isolate_hugetlb(folio, pagelist);
-	} else {
-		bool lru = !__folio_test_movable(folio);
-
-		if (lru)
-			isolated = folio_isolate_lru(folio);
-		else
-			isolated = isolate_movable_page(&folio->page,
-							ISOLATE_UNEVICTABLE);
-
-		if (isolated) {
-			list_add(&folio->lru, pagelist);
-			if (lru)
-				node_stat_add_folio(folio, NR_ISOLATED_ANON +
-						    folio_is_file_lru(folio));
-		}
-	}
-
-	/*
-	 * If we succeed to isolate the folio, we grabbed another refcount on
-	 * the folio, so we can safely drop the one we got from get_any_page().
-	 * If we failed to isolate the folio, it means that we cannot go further
-	 * and we will return an error, so drop the reference we got from
-	 * get_any_page() as well.
-	 */
-	folio_put(folio);
-	return isolated;
-}
-
 /*
  * soft_offline_in_use_page handles hugetlb-pages and non-hugetlb pages.
  * If the page is a non-dirty unmapped page-cache page, it simply invalidates.
@@ -2744,7 +2710,7 @@ static int soft_offline_in_use_page(struct page *page)
 		return 0;
 	}
 
-	if (mf_isolate_folio(folio, &pagelist)) {
+	if (isolate_folio_to_list(folio, &pagelist)) {
 		ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, NULL);
 		if (!ret) {
@@ -2766,6 +2732,16 @@ static int soft_offline_in_use_page(struct page *page)
 			pfn, msg_page[huge], page_count(page), &page->flags);
 		ret = -EBUSY;
 	}
+
+	/*
+	 * If we succeed to isolate the folio, we grabbed another refcount on
+	 * the folio, so we can safely drop the one we got from get_any_page().
+	 * If we failed to isolate the folio, it means that we cannot go further
+	 * and we will return an error, so drop the reference we got from
+	 * get_any_page() as well.
+	 */
+	folio_put(folio);
+
 	return ret;
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 6e32098ac2dc..7b7b5b16e610 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -178,6 +178,33 @@ void putback_movable_pages(struct list_head *l)
 	}
 }
 
+/* Must be called with an elevated refcount on the non-hugetlb folio */
+bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
+{
+	bool isolated = false;
+
+	if (folio_test_hugetlb(folio)) {
+		isolated = isolate_hugetlb(folio, list);
+	} else {
+		bool lru = !__folio_test_movable(folio);
+
+		if (lru)
+			isolated = folio_isolate_lru(folio);
+		else
+			isolated = isolate_movable_page(&folio->page,
+							ISOLATE_UNEVICTABLE);
+
+		if (isolated) {
+			list_add(&folio->lru, list);
+			if (lru)
+				node_stat_add_folio(folio, NR_ISOLATED_ANON +
+						    folio_is_file_lru(folio));
+		}
+	}
+
+	return isolated;
+}
+
 static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 					  struct folio *folio, unsigned long idx)