From patchwork Thu Aug 29 14:54:53 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13783357
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: David Hildenbrand, Matthew Wilcox, Baolin Wang, Zi Yan, Vishal Moola,
	Kefeng Wang
Subject: [PATCH v2 2/5] mm: migrate: add folio_isolate_movable()
Date: Thu, 29 Aug 2024 22:54:53 +0800
Message-ID: <20240829145456.2591719-3-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240829145456.2591719-1-wangkefeng.wang@huawei.com>
References: <20240829145456.2591719-1-wangkefeng.wang@huawei.com>

Like isolate_lru_page(), make isolate_movable_page() a wrapper around a
new folio_isolate_movable(). Since isolate_movable_page() always fails
on a tail page, the wrapper returns immediately for tail pages; the
wrapper will be removed once all callers are converted to
folio_isolate_movable().

Note that all isolate_movable_page() callers already hold a page
reference, so replace the now-redundant folio_get_nontail_page() with
folio_get() and add a reference count check to folio_isolate_movable().
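For illustration, converting a typical caller would look roughly like
the sketch below (not part of this patch; "movable_list" and the
isolation mode are made-up example values):

	/* Before: page-based API, fails internally on tail pages. */
	if (isolate_movable_page(page, ISOLATE_UNEVICTABLE))
		list_add(&page->lru, &movable_list);

	/* After: the caller resolves the folio and isolates it directly. */
	struct folio *folio = page_folio(page);

	if (folio_isolate_movable(folio, ISOLATE_UNEVICTABLE))
		list_add(&folio->lru, &movable_list);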
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  4 +++
 mm/migrate.c            | 54 +++++++++++++++++++++++------------------
 2 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 002e49b2ebd9..0a33f751596c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode);
 bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -92,6 +93,9 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool folio_isolate_movable(struct folio *folio,
+		isolate_mode_t mode)
+	{ return false; }
 static inline bool isolate_folio_to_list(struct folio *folio,
 		struct list_head *list)
 	{ return false; }
diff --git a/mm/migrate.c b/mm/migrate.c
index 6f9c62c746be..704102cc3951 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -58,31 +58,30 @@
 
 #include "internal.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+/**
+ * folio_isolate_movable() - Try to isolate a non-lru movable folio.
+ * @folio: Folio to isolate.
+ *
+ * Must be called with an elevated refcount on the folio.
+ *
+ * Return: true if the folio was isolated, false otherwise
+ */
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode)
 {
-	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
-	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
-	 * or just got freed under us.
-	 *
-	 * In case we 'win' a race for a movable page being freed under us and
-	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
-	 */
-	if (!folio)
-		goto out;
+	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
+
+	folio_get(folio);
 
 	if (unlikely(folio_test_slab(folio)))
 		goto out_putfolio;
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
-	 * Check movable flag before taking the page lock because
-	 * we use non-atomic bitops on newly allocated page flags so
-	 * unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated folio flags so
+	 * unconditionally grabbing the lock ruins folio's owner side.
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
@@ -92,15 +91,15 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 		goto out_putfolio;
 
 	/*
-	 * As movable pages are not isolated from LRU lists, concurrent
-	 * compaction threads can race against page migration functions
-	 * as well as race against the releasing a page.
+	 * As movable folios are not isolated from LRU lists, concurrent
+	 * compaction threads can race against folio migration functions
+	 * as well as race against the releasing a folio.
 	 *
-	 * In order to avoid having an already isolated movable page
+	 * In order to avoid having an already isolated movable folio
 	 * being (wrongly) re-isolated while it is under migration,
-	 * or to avoid attempting to isolate pages being released,
-	 * lets be sure we have the page lock
-	 * before proceeding with the movable page isolation steps.
+	 * or to avoid attempting to isolate folios being released,
+	 * lets be sure we have the folio lock
+	 * before proceeding with the movable folio isolation steps.
 	 */
 	if (unlikely(!folio_trylock(folio)))
 		goto out_putfolio;
@@ -125,10 +124,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	folio_unlock(folio);
 out_putfolio:
 	folio_put(folio);
-out:
 	return false;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	if (PageTail(page))
+		return false;
+
+	return folio_isolate_movable((struct folio *)page, mode);
+}
+
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
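
An aside for readers of the wrapper: the (struct folio *)page cast is
only valid because PageTail() has already been ruled out, so the page
is its own compound head. A more explicit sketch of the same logic
(an illustration only, not what the patch adds):

	bool isolate_movable_page(struct page *page, isolate_mode_t mode)
	{
		/*
		 * A tail page is never the head of a folio, and the
		 * page-based function has always failed on tail pages.
		 */
		if (PageTail(page))
			return false;

		/*
		 * For a non-tail page, page_folio() resolves to the page
		 * itself, which is why the direct cast in the patch is safe.
		 */
		return folio_isolate_movable(page_folio(page), mode);
	}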