From patchwork Wed Mar 27 14:10:29 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13606713
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Miaohe Lin, Naoya Horiguchi, David Hildenbrand, Oscar Salvador,
 Zi Yan, Hugh Dickins,
 Jonathan Corbet, Baolin Wang, Kefeng Wang
Subject: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
Date: Wed, 27 Mar 2024 22:10:29 +0800
Message-ID: <20240327141034.3712697-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240327141034.3712697-1-wangkefeng.wang@huawei.com>
References: <20240327141034.3712697-1-wangkefeng.wang@huawei.com>

Like isolate_lru_page(), make isolate_movable_page() a wrapper around
isolate_movable_folio(). Since isolate_movable_page() always fails on a
tail page, add a rate-limited warning for tail pages and return false
immediately.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  3 +++
 mm/migrate.c            | 41 +++++++++++++++++++++++------------------
 2 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f9d92482d117..a6c38ee7246a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2228ca681afb..b2195b6ff32c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,31 +57,29 @@
 
 #include "internal.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
 {
-	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
 	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * Avoid burning cycles with folios that are yet under __free_pages(),
 	 * or just got freed under us.
 	 *
-	 * In case we 'win' a race for a movable page being freed under us and
+	 * In case we 'win' a race for a movable folio being freed under us and
 	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
+	 * the folio_put() at the end of this block will take care of
+	 * release this folio, thus avoiding a nasty leakage.
 	 */
-	if (!folio)
-		goto out;
+	folio_get(folio);
 
 	if (unlikely(folio_test_slab(folio)))
 		goto out_putfolio;
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
-	 * Check movable flag before taking the page lock because
-	 * we use non-atomic bitops on newly allocated page flags so
-	 * unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated folio flags so
+	 * unconditionally grabbing the lock ruins folio's owner side.
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
@@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 		goto out_putfolio;
 
 	/*
-	 * As movable pages are not isolated from LRU lists, concurrent
-	 * compaction threads can race against page migration functions
-	 * as well as race against the releasing a page.
+	 * As movable folios are not isolated from LRU lists, concurrent
+	 * compaction threads can race against folio migration functions
+	 * as well as race against the releasing a folio.
 	 *
-	 * In order to avoid having an already isolated movable page
+	 * In order to avoid having an already isolated movable folio
 	 * being (wrongly) re-isolated while it is under migration,
-	 * or to avoid attempting to isolate pages being released,
+	 * or to avoid attempting to isolate folios being released,
 	 * lets be sure we have the page lock
 	 * before proceeding with the movable page isolation steps.
 	 */
@@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	if (!mops->isolate_page(&folio->page, mode))
 		goto out_no_isolated;
 
-	/* Driver shouldn't use PG_isolated bit of page->flags */
+	/* Driver shouldn't use PG_isolated bit of folio->flags */
 	WARN_ON_ONCE(folio_test_isolated(folio));
 	folio_set_isolated(folio);
 	folio_unlock(folio);
@@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	folio_unlock(folio);
 out_putfolio:
 	folio_put(folio);
-out:
 	return false;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
+		return false;
+
+	return isolate_movable_folio((struct folio *)page, mode);
+}
+
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
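
As a rough usage sketch (hypothetical, not part of this patch; the helper
name isolate_folio_for_migration() and its call site are invented for
illustration), a caller that already has a folio, e.g. an isolation path
along the lines of those in mm/memory_hotplug.c or mm/memory-failure.c,
could pick between LRU and non-LRU movable isolation without going back
through struct page:

  /*
   * Hypothetical caller sketch; NR_ISOLATED accounting and the caller's
   * refcount/locking context are elided.
   */
  static bool isolate_folio_for_migration(struct folio *folio,
  					struct list_head *pagelist)
  {
  	bool isolated;
  
  	if (__folio_test_movable(folio))
  		/* non-LRU movable folio (balloon, zsmalloc, ...) */
  		isolated = isolate_movable_folio(folio, ISOLATE_UNEVICTABLE);
  	else
  		isolated = folio_isolate_lru(folio);
  
  	if (isolated)
  		list_add_tail(&folio->lru, pagelist);
  
  	return isolated;
  }

The wrapper keeps the existing struct page entry point working, while new
folio-based callers can use isolate_movable_folio() directly.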