From patchwork Thu Apr 25 08:40:23 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13642974
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: David Hildenbrand, Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Zi Yan, Hugh Dickins,
 Jonathan Corbet, Vishal Moola, Kefeng Wang
Subject: [PATCH v2 05/10] mm: migrate: add folio_isolate_movable()
Date: Thu, 25 Apr 2024 16:40:23 +0800
Message-ID: <20240425084028.3888403-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20240425084028.3888403-1-wangkefeng.wang@huawei.com>
References: <20240425084028.3888403-1-wangkefeng.wang@huawei.com>

Like isolate_lru_page(), make isolate_movable_page() a wrapper around
folio_isolate_movable(). Since isolate_movable_page() always fails on a
tail page, the wrapper returns immediately for a tail page; the wrapper
will be removed once all callers are converted to
folio_isolate_movable().
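To illustrate the intended conversion (a minimal sketch, not taken from
this series; the mode flag and the pagelist list head are placeholders
chosen for the example), a page-based call site would move from the
wrapper to the folio API roughly like so:

	struct folio *folio = page_folio(page);

	/* Before: page interface, now a thin compatibility wrapper. */
	if (isolate_movable_page(page, ISOLATE_UNEVICTABLE))
		list_add(&page->lru, &pagelist);

	/* After: operate on the folio directly, skipping the wrapper. */
	if (folio_isolate_movable(folio, ISOLATE_UNEVICTABLE))
		list_add(&folio->lru, &pagelist);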
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  4 ++++
 mm/migrate.c            | 41 ++++++++++++++++++++++++-----------------
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 2ce13e8a309b..4f1bad4379d3 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -72,6 +72,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -94,6 +95,9 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool folio_isolate_movable(struct folio *folio,
+		isolate_mode_t mode)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src)
diff --git a/mm/migrate.c b/mm/migrate.c
index 788747dd5225..8041a6acaf01 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,21 +57,20 @@
 
 #include "internal.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode)
 {
-	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
 	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * Avoid burning cycles with folios that are yet under __free_pages(),
 	 * or just got freed under us.
 	 *
-	 * In case we 'win' a race for a movable page being freed under us and
+	 * In case we 'win' a race for a movable folio being freed under us and
 	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
+	 * the folio_put() at the end of this block will take care of
+	 * release this folio, thus avoiding a nasty leakage.
 	 */
-	if (!folio)
+	if (!folio_try_get(folio))
 		goto out;
 
 	if (unlikely(folio_test_slab(folio)))
@@ -79,9 +78,9 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
-	 * Check movable flag before taking the page lock because
-	 * we use non-atomic bitops on newly allocated page flags so
-	 * unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated folio flags so
+	 * unconditionally grabbing the lock ruins folio's owner side.
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
@@ -91,15 +90,15 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 		goto out_putfolio;
 
 	/*
-	 * As movable pages are not isolated from LRU lists, concurrent
-	 * compaction threads can race against page migration functions
-	 * as well as race against the releasing a page.
+	 * As movable folios are not isolated from LRU lists, concurrent
+	 * compaction threads can race against folio migration functions
+	 * as well as race against the releasing a folio.
 	 *
-	 * In order to avoid having an already isolated movable page
+	 * In order to avoid having an already isolated movable folio
 	 * being (wrongly) re-isolated while it is under migration,
-	 * or to avoid attempting to isolate pages being released,
-	 * lets be sure we have the page lock
-	 * before proceeding with the movable page isolation steps.
+	 * or to avoid attempting to isolate folios being released,
+	 * lets be sure we have the folio lock
+	 * before proceeding with the movable folio isolation steps.
 	 */
 	if (unlikely(!folio_trylock(folio)))
 		goto out_putfolio;
@@ -128,6 +127,14 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	return false;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	if (PageTail(page))
+		return false;
+
+	return folio_isolate_movable((struct folio *)page, mode);
+}
+
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
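An illustration of why the wrapper is safe (not part of the patch): the
(struct folio *)page cast relies on the PageTail() check just above it,
since a non-tail page is always the first page of its folio and so
shares its address. Until every call site is converted, a hypothetical
page-based caller, such as a compaction-style scanner, keeps working
unchanged through the wrapper; the label and mode flag below are
placeholders for the example:

	/* Hypothetical call site: only non-LRU pages can be movable. */
	if (!PageLRU(page) &&
	    isolate_movable_page(page, ISOLATE_UNEVICTABLE))
		goto isolate_success;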