From patchwork Wed Feb 15 10:39:34 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13141507
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, sj@kernel.org, hannes@cmpxchg.org,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, naoya.horiguchi@nec.com, linmiaohe@huawei.com,
    david@redhat.com, osalvador@suse.de, mike.kravetz@oracle.com,
    willy@infradead.org, baolin.wang@linux.alibaba.com, damon@lists.linux.dev,
    cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/4] mm: change to return bool for folio_isolate_lru()
Date: Wed, 15 Feb 2023 18:39:34 +0800
Message-Id: <8a4e3679ed4196168efadf7ea36c038f2f7d5aa9.1676424378.git.baolin.wang@linux.alibaba.com>

folio_isolate_lru() does not return a boolean value to indicate whether
the isolation succeeded, yet code like the following, which checks its
return value, can make people think it is a boolean success/failure
flag, and that makes mistakes easy (see the fix patch[1]):

	if (folio_isolate_lru(folio))
		continue;

Thus it is better to check the negative error value explicitly returned
by folio_isolate_lru(), which makes the code clearer, per Linus's
suggestion[2]. Moreover, Matthew suggested that we can convert the
isolation functions to return a boolean[3], since most users do not care
about the negative error value, and this also removes the confusion
around checking the return value.

So this patch converts folio_isolate_lru() to return a boolean value:
'true' means the folio was isolated successfully, and 'false' means
isolation failed. All callers' checks of the isolation state are changed
accordingly.

No functional changes intended.
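As a minimal standalone illustration of why the old convention invited
mistakes (the functions below are stubs, not the kernel implementation),
the same-looking test means the opposite thing before and after this
change:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-ins for the kernel API; the real functions take a struct folio *. */
    static int old_folio_isolate_lru(void *folio)
    {
        (void)folio;
        return -EBUSY;          /* old contract: 0 on success, -EBUSY on failure */
    }

    static bool new_folio_isolate_lru(void *folio)
    {
        (void)folio;
        return false;           /* new contract: true on success, false on failure */
    }

    int main(void)
    {
        void *folio = (void *)0x1;

        /* Old convention: a nonzero ("truthy") return value means FAILURE. */
        if (old_folio_isolate_lru(folio))
            printf("old API: nonzero return value, so isolation failed\n");

        /*
         * New convention: a true return value means SUCCESS, so every
         * caller's test has to be inverted, as this patch does.
         */
        if (!new_folio_isolate_lru(folio))
            printf("new API: false return value, so isolation failed\n");

        return 0;
    }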
[1] https://lore.kernel.org/all/20230131063206.28820-1-Kuan-Ying.Lee@mediatek.com/T/#u
[2] https://lore.kernel.org/all/CAHk-=wiBrY+O-4=2mrbVyxR+hOqfdJ=Do6xoucfJ9_5az01L4Q@mail.gmail.com/
[3] https://lore.kernel.org/all/Y+sTFqwMNAjDvxw3@casper.infradead.org/

Signed-off-by: Baolin Wang
Reviewed-by: SeongJae Park
Acked-by: David Hildenbrand
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/damon/paddr.c  |  2 +-
 mm/folio-compat.c |  8 +++++++-
 mm/gup.c          |  2 +-
 mm/internal.h     |  2 +-
 mm/khugepaged.c   |  2 +-
 mm/madvise.c      |  4 ++--
 mm/mempolicy.c    |  2 +-
 mm/vmscan.c       | 10 +++++-----
 8 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index b4df9b9bcc0a..607bb69e526c 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -246,7 +246,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 
 		folio_clear_referenced(folio);
 		folio_test_clear_young(folio);
-		if (folio_isolate_lru(folio)) {
+		if (!folio_isolate_lru(folio)) {
 			folio_put(folio);
 			continue;
 		}
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 18c48b557926..540373cf904e 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -115,9 +115,15 @@ EXPORT_SYMBOL(grab_cache_page_write_begin);
 
 int isolate_lru_page(struct page *page)
 {
+	bool ret;
+
 	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
 		return -EBUSY;
-	return folio_isolate_lru((struct folio *)page);
+	ret = folio_isolate_lru((struct folio *)page);
+	if (ret)
+		return 0;
+
+	return -EBUSY;
 }
 
 void putback_lru_page(struct page *page)
diff --git a/mm/gup.c b/mm/gup.c
index b0885f70579c..eab18ba045db 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1939,7 +1939,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 			drain_allow = false;
 		}
 
-		if (folio_isolate_lru(folio))
+		if (!folio_isolate_lru(folio))
 			continue;
 
 		list_add_tail(&folio->lru, movable_page_list);
diff --git a/mm/internal.h b/mm/internal.h
index dfb37e94e140..8645e8496537 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -188,7 +188,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
  * in mm/vmscan.c:
  */
 int isolate_lru_page(struct page *page);
-int folio_isolate_lru(struct folio *folio);
+bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);
 extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a5d32231bfad..cee659cfa3c1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2047,7 +2047,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			goto out_unlock;
 		}
 
-		if (folio_isolate_lru(folio)) {
+		if (!folio_isolate_lru(folio)) {
 			result = SCAN_DEL_PAGE_LRU;
 			goto out_unlock;
 		}
diff --git a/mm/madvise.c b/mm/madvise.c
index 5a5a687d03c2..c2202f51e9dd 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -406,7 +406,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		folio_clear_referenced(folio);
 		folio_test_clear_young(folio);
 		if (pageout) {
-			if (!folio_isolate_lru(folio)) {
+			if (folio_isolate_lru(folio)) {
 				if (folio_test_unevictable(folio))
 					folio_putback_lru(folio);
 				else
@@ -500,7 +500,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		folio_clear_referenced(folio);
 		folio_test_clear_young(folio);
 		if (pageout) {
-			if (!folio_isolate_lru(folio)) {
+			if (folio_isolate_lru(folio)) {
 				if (folio_test_unevictable(folio))
 					folio_putback_lru(folio);
 				else
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0919c7a719d4..2751bc3310fd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1033,7 +1033,7 @@ static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 	 * expensive, so check the estimated mapcount of the folio instead.
 	 */
 	if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
-		if (!folio_isolate_lru(folio)) {
+		if (folio_isolate_lru(folio)) {
 			list_add_tail(&folio->lru, foliolist);
 			node_stat_mod_folio(folio,
 				NR_ISOLATED_ANON + folio_is_file_lru(folio),
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 34535bbd4fe9..7658b40df947 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2337,12 +2337,12 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
  * (2) The lru_lock must not be held.
  * (3) Interrupts must be enabled.
  *
- * Return: 0 if the folio was removed from an LRU list.
- * -EBUSY if the folio was not on an LRU list.
+ * Return: true if the folio was removed from an LRU list.
+ * false if the folio was not on an LRU list.
  */
-int folio_isolate_lru(struct folio *folio)
+bool folio_isolate_lru(struct folio *folio)
 {
-	int ret = -EBUSY;
+	bool ret = false;
 
 	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
 
@@ -2353,7 +2353,7 @@ int folio_isolate_lru(struct folio *folio)
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
 		unlock_page_lruvec_irq(lruvec);
-		ret = 0;
+		ret = true;
 	}
 
 	return ret;

From patchwork Wed Feb 15 10:39:35 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13141509
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, sj@kernel.org, hannes@cmpxchg.org,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, naoya.horiguchi@nec.com, linmiaohe@huawei.com,
    david@redhat.com, osalvador@suse.de, mike.kravetz@oracle.com,
    willy@infradead.org, baolin.wang@linux.alibaba.com, damon@lists.linux.dev,
    cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/4] mm: change to return bool for isolate_lru_page()
Date: Wed, 15 Feb 2023 18:39:35 +0800
Message-Id: <3074c1ab628d9dbf139b33f248a8bc253a3f95f0.1676424378.git.baolin.wang@linux.alibaba.com>

isolate_lru_page() can only return 0 or -EBUSY, and most users do not
care about the negative error value, except for one user in
add_page_for_migration(). So we can convert isolate_lru_page() to return
a boolean value, which makes the code clearer when checking its return
value. Also convert all users' logic of checking the isolation state.

No functional changes intended.
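For callers that still have to report an errno upward, the boolean can be
mapped back at the call site, which is the pattern this patch applies in
do_migrate_range() and add_page_for_migration(). A small standalone
sketch of that pattern, with a stubbed isolate function and a
hypothetical caller name:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stub standing in for the bool-returning isolate_lru_page(). */
    static bool isolate_lru_page_stub(void *page)
    {
        (void)page;
        return false;   /* pretend the page was not on an LRU list */
    }

    /* Hypothetical caller that must keep exposing the old 0 / -EBUSY contract. */
    static int isolate_page_for_move(void *page)
    {
        bool isolated = isolate_lru_page_stub(page);

        return isolated ? 0 : -EBUSY;
    }

    int main(void)
    {
        printf("err = %d\n", isolate_page_for_move((void *)0x1));
        return 0;
    }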
Signed-off-by: Baolin Wang
Acked-by: David Hildenbrand
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/folio-compat.c   | 12 +++---------
 mm/internal.h       |  2 +-
 mm/khugepaged.c     |  2 +-
 mm/memcontrol.c     |  4 ++--
 mm/memory-failure.c |  4 ++--
 mm/memory_hotplug.c |  8 +++++---
 mm/migrate.c        |  9 ++++++---
 mm/migrate_device.c |  2 +-
 8 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 540373cf904e..cabcd1de9ecb 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -113,17 +113,11 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
 
-int isolate_lru_page(struct page *page)
+bool isolate_lru_page(struct page *page)
 {
-	bool ret;
-
 	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
-		return -EBUSY;
-	ret = folio_isolate_lru((struct folio *)page);
-	if (ret)
-		return 0;
-
-	return -EBUSY;
+		return false;
+	return folio_isolate_lru((struct folio *)page);
 }
 
 void putback_lru_page(struct page *page)
diff --git a/mm/internal.h b/mm/internal.h
index 8645e8496537..fc01fd092ea5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -187,7 +187,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
 /*
  * in mm/vmscan.c:
  */
-int isolate_lru_page(struct page *page);
+bool isolate_lru_page(struct page *page);
 bool folio_isolate_lru(struct folio *folio);
 void putback_lru_page(struct page *page);
 void folio_putback_lru(struct folio *folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cee659cfa3c1..8dbc39896811 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -659,7 +659,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		 * Isolate the page to avoid collapsing an hugepage
 		 * currently in use by the VM.
 		 */
-		if (isolate_lru_page(page)) {
+		if (!isolate_lru_page(page)) {
 			unlock_page(page);
 			result = SCAN_DEL_PAGE_LRU;
 			goto out;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 17335459d8dc..e8fd42be5fab 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6176,7 +6176,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 		target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
 		if (target_type == MC_TARGET_PAGE) {
 			page = target.page;
-			if (!isolate_lru_page(page)) {
+			if (isolate_lru_page(page)) {
 				if (!mem_cgroup_move_account(page, true,
 							     mc.from, mc.to)) {
 					mc.precharge -= HPAGE_PMD_NR;
@@ -6226,7 +6226,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 			 */
 			if (PageTransCompound(page))
 				goto put;
-			if (!device && isolate_lru_page(page))
+			if (!device && !isolate_lru_page(page))
 				goto put;
 			if (!mem_cgroup_move_account(page, false,
 						mc.from, mc.to)) {
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index db85c2d37f70..e504362fdb23 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -846,7 +846,7 @@ static const char * const action_page_types[] = {
  */
 static int delete_from_lru_cache(struct page *p)
 {
-	if (!isolate_lru_page(p)) {
+	if (isolate_lru_page(p)) {
 		/*
 		 * Clear sensible page flags, so that the buddy system won't
 		 * complain when the page is unpoison-and-freed.
@@ -2513,7 +2513,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
 		bool lru = !__PageMovable(page);
 
 		if (lru)
-			isolated = !isolate_lru_page(page);
+			isolated = isolate_lru_page(page);
 		else
 			isolated = !isolate_movable_page(page,
 							 ISOLATE_UNEVICTABLE);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a1e8c3e9ab08..5fc2dcf4e3ab 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1632,6 +1632,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 		struct folio *folio;
+		bool isolated;
 
 		if (!pfn_valid(pfn))
 			continue;
@@ -1667,9 +1668,10 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		 * We can skip free pages. And we can deal with pages on
 		 * LRU and non-lru movable pages.
 		 */
-		if (PageLRU(page))
-			ret = isolate_lru_page(page);
-		else
+		if (PageLRU(page)) {
+			isolated = isolate_lru_page(page);
+			ret = isolated ? 0 : -EBUSY;
+		} else
 			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 		if (!ret) { /* Success */
 			list_add_tail(&page->lru, &source);
diff --git a/mm/migrate.c b/mm/migrate.c
index ef68a1aff35c..53010a142e7f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2132,11 +2132,14 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		}
 	} else {
 		struct page *head;
+		bool isolated;
 
 		head = compound_head(page);
-		err = isolate_lru_page(head);
-		if (err)
+		isolated = isolate_lru_page(head);
+		if (!isolated) {
+			err = -EBUSY;
 			goto out_putpage;
+		}
 
 		err = 1;
 		list_add_tail(&head->lru, pagelist);
@@ -2541,7 +2544,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 		return 0;
 	}
 
-	if (isolate_lru_page(page))
+	if (!isolate_lru_page(page))
 		return 0;
 
 	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 6c3740318a98..d30c9de60b0d 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -388,7 +388,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 				allow_drain = false;
 			}
 
-			if (isolate_lru_page(page)) {
+			if (!isolate_lru_page(page)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
 				continue;

From patchwork Wed Feb 15 10:39:36 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13141510
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, sj@kernel.org, hannes@cmpxchg.org,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev, naoya.horiguchi@nec.com, linmiaohe@huawei.com,
    david@redhat.com, osalvador@suse.de, mike.kravetz@oracle.com,
    willy@infradead.org, baolin.wang@linux.alibaba.com, damon@lists.linux.dev,
    cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 3/4] mm: hugetlb: change to return bool for isolate_hugetlb()
Date: Wed, 15 Feb 2023 18:39:36 +0800
Message-Id: <12a287c5bebc13df304387087bbecc6421510849.1676424378.git.baolin.wang@linux.alibaba.com>
Now isolate_hugetlb() only returns 0 or -EBUSY, and most users do not
care about the negative value, so we can convert isolate_hugetlb() to
return a boolean value to make the code clearer when checking the
hugetlb isolation state. Moreover, convert the 2 users which do consider
the negative value returned by isolate_hugetlb().

No functional changes intended.
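One readability effect of this conversion is that call sites no longer
need a double negative to express success. A standalone sketch (stubbed
functions and hypothetical names, not the kernel code itself) comparing
the two styles:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Old contract: 0 on success, -EBUSY on failure. */
    static int old_isolate_hugetlb(void *folio, void *list)
    {
        (void)folio; (void)list;
        return 0;
    }

    /* New contract: true on success, false on failure. */
    static bool new_isolate_hugetlb(void *folio, void *list)
    {
        (void)folio; (void)list;
        return true;
    }

    int main(void)
    {
        void *folio = (void *)0x1, *list = (void *)0x2;
        int ret;

        /* Before: success is "returned zero", so the test reads as a double negative. */
        if (!old_isolate_hugetlb(folio, list))
            ret = 0;
        else
            ret = -EBUSY;
        printf("old style: ret = %d\n", ret);

        /* After: success is simply "returned true". */
        ret = new_isolate_hugetlb(folio, list) ? 0 : -EBUSY;
        printf("new style: ret = %d\n", ret);

        return 0;
    }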
Signed-off-by: Baolin Wang
Acked-by: David Hildenbrand
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h |  6 +++---
 mm/hugetlb.c            | 13 ++++++++-----
 mm/memory-failure.c     |  2 +-
 mm/mempolicy.c          |  2 +-
 mm/migrate.c            |  7 +++----
 5 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index df6dd624ccfe..5f5e4177b2e0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -171,7 +171,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						vm_flags_t vm_flags);
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 						long freed);
-int isolate_hugetlb(struct folio *folio, struct list_head *list);
+bool isolate_hugetlb(struct folio *folio, struct list_head *list);
 int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
 int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
 				bool *migratable_cleared);
@@ -413,9 +413,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
 	return NULL;
 }
 
-static inline int isolate_hugetlb(struct folio *folio, struct list_head *list)
+static inline bool isolate_hugetlb(struct folio *folio, struct list_head *list)
 {
-	return -EBUSY;
+	return false;
 }
 
 static inline int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3a01a9dbf445..16513cd23d5d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2925,13 +2925,16 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 		 */
 		goto free_new;
 	} else if (folio_ref_count(old_folio)) {
+		bool isolated;
+
 		/*
 		 * Someone has grabbed the folio, try to isolate it here.
 		 * Fail with -EBUSY if not possible.
 		 */
 		spin_unlock_irq(&hugetlb_lock);
-		ret = isolate_hugetlb(old_folio, list);
+		isolated = isolate_hugetlb(old_folio, list);
 		spin_lock_irq(&hugetlb_lock);
+		ret = isolated ? 0 : -EBUSY;
 		goto free_new;
 	} else if (!folio_test_hugetlb_freed(old_folio)) {
 		/*
@@ -3005,7 +3008,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (hstate_is_gigantic(h))
 		return -ENOMEM;
 
-	if (folio_ref_count(folio) && !isolate_hugetlb(folio, list))
+	if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
 		ret = 0;
 	else if (!folio_ref_count(folio))
 		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
@@ -7251,15 +7254,15 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
  * These functions are overwritable if your architecture needs its own
  * behavior.
  */
-int isolate_hugetlb(struct folio *folio, struct list_head *list)
+bool isolate_hugetlb(struct folio *folio, struct list_head *list)
 {
-	int ret = 0;
+	bool ret = true;
 
 	spin_lock_irq(&hugetlb_lock);
 	if (!folio_test_hugetlb(folio) ||
 	    !folio_test_hugetlb_migratable(folio) ||
 	    !folio_try_get(folio)) {
-		ret = -EBUSY;
+		ret = false;
 		goto unlock;
 	}
 	folio_clear_hugetlb_migratable(folio);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e504362fdb23..8604753bc644 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2508,7 +2508,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
 	bool isolated = false;
 
 	if (PageHuge(page)) {
-		isolated = !isolate_hugetlb(page_folio(page), pagelist);
+		isolated = isolate_hugetlb(page_folio(page), pagelist);
 	} else {
 		bool lru = !__PageMovable(page);
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2751bc3310fd..a256a241fd1d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -609,7 +609,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	if (flags & (MPOL_MF_MOVE_ALL) ||
 	    (flags & MPOL_MF_MOVE && folio_estimated_sharers(folio) == 1 &&
 	     !hugetlb_pmd_shared(pte))) {
-		if (isolate_hugetlb(folio, qp->pagelist) &&
+		if (!isolate_hugetlb(folio, qp->pagelist) &&
 		    (flags & MPOL_MF_STRICT))
 			/*
 			 * Failed to isolate folio but allow migrating pages
diff --git a/mm/migrate.c b/mm/migrate.c
index 53010a142e7f..2db546a0618c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2095,6 +2095,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 	struct vm_area_struct *vma;
 	struct page *page;
 	int err;
+	bool isolated;
 
 	mmap_read_lock(mm);
 	err = -EFAULT;
@@ -2126,13 +2127,11 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 
 	if (PageHuge(page)) {
 		if (PageHead(page)) {
-			err = isolate_hugetlb(page_folio(page), pagelist);
-			if (!err)
-				err = 1;
+			isolated = isolate_hugetlb(page_folio(page), pagelist);
+			err = isolated ? 1 : -EBUSY;
 		}
 	} else {
 		struct page *head;
-		bool isolated;
 
 		head = compound_head(page);
 		isolated = isolate_lru_page(head);

From patchwork Wed Feb 15 10:39:37 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13141508
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, sj@kernel.org, hannes@cmpxchg.org,
    mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
    muchun.song@linux.dev,
    naoya.horiguchi@nec.com, linmiaohe@huawei.com, david@redhat.com,
    osalvador@suse.de, mike.kravetz@oracle.com, willy@infradead.org,
    baolin.wang@linux.alibaba.com, damon@lists.linux.dev, cgroups@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 4/4] mm: change to return bool for isolate_movable_page()
Date: Wed, 15 Feb 2023 18:39:37 +0800

Now isolate_movable_page() can only return 0 or -EBUSY, and no users
care about the negative return value, so we can convert
isolate_movable_page() to return a boolean value to make the code
clearer when checking the movable page isolation state.

No functional changes intended.
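With all of the isolation helpers returning bool after this series, a
caller that picks between them can assign the result uniformly, as
memory-failure's isolate_page() does in the diff below. A standalone toy
model with stubbed helpers (hypothetical names, not the kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stubs for the bool-returning isolation helpers after this series. */
    static bool isolate_lru_page_stub(void *page)     { (void)page; return true;  }
    static bool isolate_movable_page_stub(void *page) { (void)page; return false; }
    static bool page_is_lru_stub(void *page)          { (void)page; return true;  }

    /* Shape of a unified caller: one bool, no mixing of 0/-EBUSY with true/false. */
    static bool isolate_page_stub(void *page)
    {
        bool isolated;

        if (page_is_lru_stub(page))
            isolated = isolate_lru_page_stub(page);
        else
            isolated = isolate_movable_page_stub(page);

        return isolated;
    }

    int main(void)
    {
        printf("isolated = %s\n", isolate_page_stub((void *)0x1) ? "true" : "false");
        return 0;
    }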
Signed-off-by: Baolin Wang
Acked-by: David Hildenbrand
Reviewed-by: Matthew Wilcox (Oracle)
---
 include/linux/migrate.h |  6 +++---
 mm/compaction.c         |  2 +-
 mm/memory-failure.c     |  4 ++--
 mm/memory_hotplug.c     | 10 +++++-----
 mm/migrate.c            |  6 +++---
 5 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c88b96b48be7..6b252f519c86 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -71,7 +71,7 @@ extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason,
 		unsigned int *ret_succeeded);
 extern struct page *alloc_migration_target(struct page *page, unsigned long private);
-extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
+extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
@@ -92,8 +92,8 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 static inline struct page *alloc_migration_target(struct page *page,
 		unsigned long private)
 	{ return NULL; }
-static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
-	{ return -EBUSY; }
+static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+	{ return false; }
 
 static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 				  struct folio *dst, struct folio *src)
diff --git a/mm/compaction.c b/mm/compaction.c
index d73578af44cc..ad7409f70519 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -976,7 +976,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				locked = NULL;
 			}
 
-			if (!isolate_movable_page(page, mode))
+			if (isolate_movable_page(page, mode))
 				goto isolate_success;
 		}
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8604753bc644..a1ede7bdce95 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2515,8 +2515,8 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
 		if (lru)
 			isolated = isolate_lru_page(page);
 		else
-			isolated = !isolate_movable_page(page,
-							 ISOLATE_UNEVICTABLE);
+			isolated = isolate_movable_page(page,
+							ISOLATE_UNEVICTABLE);
 
 	if (isolated) {
 		list_add(&page->lru, pagelist);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5fc2dcf4e3ab..bcb0dc41c2f2 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1668,18 +1668,18 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		 * We can skip free pages. And we can deal with pages on
 		 * LRU and non-lru movable pages.
 		 */
-		if (PageLRU(page)) {
+		if (PageLRU(page))
 			isolated = isolate_lru_page(page);
-			ret = isolated ? 0 : -EBUSY;
-		} else
-			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
-		if (!ret) { /* Success */
+		else
+			isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
+		if (isolated) { /* Success */
 			list_add_tail(&page->lru, &source);
 			if (!__PageMovable(page))
 				inc_node_page_state(page, NR_ISOLATED_ANON +
 						    page_is_file_lru(page));
 
 		} else {
+			ret = -EBUSY;
 			if (__ratelimit(&migrate_rs)) {
 				pr_warn("failed to isolate pfn %lx\n", pfn);
 				dump_page(page, "isolation failed");
diff --git a/mm/migrate.c b/mm/migrate.c
index 2db546a0618c..9a101c7bb8ff 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -58,7 +58,7 @@
 
 #include "internal.h"
 
-int isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
 	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
@@ -119,14 +119,14 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	folio_set_isolated(folio);
 	folio_unlock(folio);
 
-	return 0;
+	return true;
 
 out_no_isolated:
 	folio_unlock(folio);
 out_putfolio:
 	folio_put(folio);
 out:
-	return -EBUSY;
+	return false;
 }
 
 static void putback_movable_folio(struct folio *folio)