From patchwork Sat Aug 17 08:49:40 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13767070
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: David Hildenbrand, Oscar Salvador, Miaohe Lin, Naoya Horiguchi, Kefeng Wang
Subject: [PATCH v2 4/5] mm: migrate: add isolate_folio_to_list()
Date: Sat, 17 Aug 2024 16:49:40 +0800
Message-ID: <20240817084941.2375713-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20240817084941.2375713-1-wangkefeng.wang@huawei.com>
References: <20240817084941.2375713-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0

Add an isolate_folio_to_list() helper that tries to isolate a HugeTLB,
non-LRU movable, or LRU folio onto a list; it will shortly be reused by
do_migrate_range() in memory hotplug. Also drop mf_isolate_folio(),
since the new helper can be used directly in
soft_offline_in_use_page().

Acked-by: David Hildenbrand
Signed-off-by: Kefeng Wang
Acked-by: Miaohe Lin
Tested-by: Miaohe Lin
---
 include/linux/migrate.h |  3 +++
 mm/memory-failure.c     | 46 ++++++++++-------------------------
 mm/migrate.c            | 27 ++++++++++++++++++++++++
 3 files changed, 41 insertions(+), 35 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 644be30b69c8..002e49b2ebd9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 93848330de1f..d8298017bd99 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2659,40 +2659,6 @@ EXPORT_SYMBOL(unpoison_memory);
 #undef pr_fmt
 #define pr_fmt(fmt) "Soft offline: " fmt
 
-static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
-{
-	bool isolated = false;
-
-	if (folio_test_hugetlb(folio)) {
-		isolated = isolate_hugetlb(folio, pagelist);
-	} else {
-		bool lru = !__folio_test_movable(folio);
-
-		if (lru)
-			isolated = folio_isolate_lru(folio);
-		else
-			isolated = isolate_movable_page(&folio->page,
-							ISOLATE_UNEVICTABLE);
-
-		if (isolated) {
-			list_add(&folio->lru, pagelist);
-			if (lru)
-				node_stat_add_folio(folio, NR_ISOLATED_ANON +
-						    folio_is_file_lru(folio));
-		}
-	}
-
-	/*
-	 * If we succeed to isolate the folio, we grabbed another refcount on
-	 * the folio, so we can safely drop the one we got from get_any_page().
-	 * If we failed to isolate the folio, it means that we cannot go further
-	 * and we will return an error, so drop the reference we got from
-	 * get_any_page() as well.
-	 */
-	folio_put(folio);
-	return isolated;
-}
-
 /*
  * soft_offline_in_use_page handles hugetlb-pages and non-hugetlb pages.
  * If the page is a non-dirty unmapped page-cache page, it simply invalidates.
@@ -2744,7 +2710,7 @@ static int soft_offline_in_use_page(struct page *page)
 		return 0;
 	}
 
-	if (mf_isolate_folio(folio, &pagelist)) {
+	if (isolate_folio_to_list(folio, &pagelist)) {
 		ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, NULL);
 		if (!ret) {
@@ -2766,6 +2732,16 @@ static int soft_offline_in_use_page(struct page *page)
 			pfn, msg_page[huge], page_count(page), &page->flags);
 		ret = -EBUSY;
 	}
+
+	/*
+	 * If we succeed to isolate the folio, we grabbed another refcount on
+	 * the folio, so we can safely drop the one we got from get_any_page().
+	 * If we failed to isolate the folio, it means that we cannot go further
+	 * and we will return an error, so drop the reference we got from
+	 * get_any_page() as well.
+	 */
+	folio_put(folio);
+
 	return ret;
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index dbfa910ec24b..53f8429a8ebe 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -178,6 +178,33 @@ void putback_movable_pages(struct list_head *l)
 	}
 }
 
+/* Must be called with an elevated refcount on the non-hugetlb folio */
+bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
+{
+	bool isolated = false;
+
+	if (folio_test_hugetlb(folio)) {
+		isolated = isolate_hugetlb(folio, list);
+	} else {
+		bool lru = !__folio_test_movable(folio);
+
+		if (lru)
+			isolated = folio_isolate_lru(folio);
+		else
+			isolated = isolate_movable_page(&folio->page,
+							ISOLATE_UNEVICTABLE);
+
+		if (isolated) {
+			list_add(&folio->lru, list);
+			if (lru)
+				node_stat_add_folio(folio, NR_ISOLATED_ANON +
+						    folio_is_file_lru(folio));
+		}
+	}
+
+	return isolated;
+}
+
 static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 					  struct folio *folio, unsigned long idx)
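
[Editor's note] A key point of this patch is that the folio_put() for the
reference taken before isolation moves out of the helper and into the
caller, soft_offline_in_use_page(), so that isolate_folio_to_list() itself
takes no reference it does not hand back. The sketch below is a hedged,
userspace-only model of that caller contract; struct mock_folio,
mock_isolate() and mock_soft_offline() are hypothetical stand-ins, not
kernel APIs.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical mock of the state the caller cares about. */
struct mock_folio {
	int refcount;      /* caller already holds one ref (get_any_page()) */
	bool can_isolate;  /* whether the isolation primitive would succeed */
	bool on_list;      /* folio ended up on the caller's migration list */
};

/* Models isolate_folio_to_list(): on success the folio goes onto the
 * list and isolation holds its own extra reference; on failure nothing
 * about the folio changes. */
bool mock_isolate(struct mock_folio *f)
{
	if (!f->can_isolate)
		return false;
	f->refcount++;
	f->on_list = true;
	return true;
}

/* Models the caller contract after this patch: the reference obtained
 * before isolation is dropped exactly once, on both the success and
 * the failure path, in the caller rather than inside the helper. */
int mock_soft_offline(struct mock_folio *f)
{
	int ret = mock_isolate(f) ? 0 : -16; /* -EBUSY */

	f->refcount--; /* the folio_put() this patch moves into the caller */
	return ret;
}
```

On the success path the surviving reference is the one isolation took, so
migration can proceed safely; on the failure path the caller's reference
is dropped and -EBUSY is returned, exactly as the relocated comment in the
patch describes.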