From patchwork Wed Aug 21 19:34:37 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13772015
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, x86@kernel.org
Subject: [PATCH 04/10] mm: Remove PageSwapCache
Date: Wed, 21 Aug 2024 20:34:37 +0100
Message-ID: <20240821193445.2294269-5-willy@infradead.org>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240821193445.2294269-1-willy@infradead.org>
References: <20240821193445.2294269-1-willy@infradead.org>

This flag is now only used on folios, so we can remove all the page
accessors and reword the comments that refer to them.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm_types.h   |  2 +-
 include/linux/page-flags.h | 11 +++--------
 mm/ksm.c                   | 19 ++++++++++---------
 mm/migrate.c               |  3 ++-
 mm/shmem.c                 | 11 ++++++-----
 5 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2419e60c9a7f..6e3bdf8e38bc 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -109,7 +109,7 @@ struct page {
 			/**
 			 * @private: Mapping-private opaque data.
 			 * Usually used for buffer_heads if PagePrivate.
-			 * Used for swp_entry_t if PageSwapCache.
+			 * Used for swp_entry_t if swapcache flag set.
 			 * Indicates order in the buddy system if PageBuddy.
 			 */
 			unsigned long private;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 2c2e6106682c..43a7996c53d4 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -574,15 +574,10 @@ static __always_inline bool folio_test_swapcache(const struct folio *folio)
 		test_bit(PG_swapcache, const_folio_flags(folio, 0));
 }
 
-static __always_inline bool PageSwapCache(const struct page *page)
-{
-	return folio_test_swapcache(page_folio(page));
-}
-
-SETPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL)
-CLEARPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL)
+FOLIO_SET_FLAG(swapcache, FOLIO_HEAD_PAGE)
+FOLIO_CLEAR_FLAG(swapcache, FOLIO_HEAD_PAGE)
 #else
-PAGEFLAG_FALSE(SwapCache, swapcache)
+FOLIO_FLAG_FALSE(swapcache)
 #endif
 
 PAGEFLAG(Unevictable, unevictable, PF_HEAD)
diff --git a/mm/ksm.c b/mm/ksm.c
index 8e53666bc7b0..a2e2a521df0a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -909,12 +909,13 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
 	 */
 	while (!folio_try_get(folio)) {
 		/*
-		 * Another check for page->mapping != expected_mapping would
-		 * work here too.  We have chosen the !PageSwapCache test to
-		 * optimize the common case, when the page is or is about to
-		 * be freed: PageSwapCache is cleared (under spin_lock_irq)
-		 * in the ref_freeze section of __remove_mapping(); but Anon
-		 * folio->mapping reset to NULL later, in free_pages_prepare().
+		 * Another check for folio->mapping != expected_mapping
+		 * would work here too.  We have chosen to test the
+		 * swapcache flag to optimize the common case, when the
+		 * folio is or is about to be freed: the swapcache flag
+		 * is cleared (under spin_lock_irq) in the ref_freeze
+		 * section of __remove_mapping(); but anon folio->mapping
+		 * is reset to NULL later, in free_pages_prepare().
 		 */
 		if (!folio_test_swapcache(folio))
 			goto stale;
@@ -945,7 +946,7 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
 
 stale:
 	/*
-	 * We come here from above when page->mapping or !PageSwapCache
+	 * We come here from above when folio->mapping or the swapcache flag
 	 * suggests that the node is stale; but it might be under migration.
 	 * We need smp_rmb(), matching the smp_wmb() in folio_migrate_ksm(),
 	 * before checking whether node->kpfn has been changed.
@@ -1452,7 +1453,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 		goto out;
 
 	/*
-	 * We need the page lock to read a stable PageSwapCache in
+	 * We need the folio lock to read a stable swapcache flag in
 	 * write_protect_page(). We use trylock_page() instead of
 	 * lock_page() because we don't want to wait here - we
 	 * prefer to continue scanning and merging different pages,
@@ -3123,7 +3124,7 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
 	 * newfolio->mapping was set in advance; now we need smp_wmb()
 	 * to make sure that the new stable_node->kpfn is visible
 	 * to ksm_get_folio() before it can see that folio->mapping
-	 * has gone stale (or that folio_test_swapcache has been cleared).
+	 * has gone stale (or that the swapcache flag has been cleared).
 	 */
 	smp_wmb();
 	folio_set_stable_node(folio, NULL);
diff --git a/mm/migrate.c b/mm/migrate.c
index 1248c89d4dbd..4f55f4930fe8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -666,7 +666,8 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 		folio_migrate_ksm(newfolio, folio);
 	/*
 	 * Please do not reorder this without considering how mm/ksm.c's
-	 * ksm_get_folio() depends upon ksm_migrate_page() and PageSwapCache().
+	 * ksm_get_folio() depends upon ksm_migrate_page() and the
+	 * swapcache flag.
 	 */
 	if (folio_test_swapcache(folio))
 		folio_clear_swapcache(folio);
diff --git a/mm/shmem.c b/mm/shmem.c
index 22a3f3c1897e..752106aca845 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -502,8 +502,8 @@ static int shmem_replace_entry(struct address_space *mapping,
  * Sometimes, before we decide whether to proceed or to fail, we must check
  * that an entry was not already brought back from swap by a racing thread.
  *
- * Checking page is not enough: by the time a SwapCache page is locked, it
- * might be reused, and again be SwapCache, using the same swap as before.
+ * Checking folio is not enough: by the time a swapcache folio is locked, it
+ * might be reused, and again be swapcache, using the same swap as before.
  */
 static bool shmem_confirm_swap(struct address_space *mapping,
 			       pgoff_t index, swp_entry_t swap)
@@ -1940,9 +1940,10 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 
 	if (unlikely(error)) {
 		/*
-		 * Is this possible?  I think not, now that our callers check
-		 * both PageSwapCache and page_private after getting page lock;
-		 * but be defensive.  Reverse old to newpage for clear and free.
+		 * Is this possible?  I think not, now that our callers
+		 * check both the swapcache flag and folio->private
+		 * after getting the folio lock; but be defensive.
+		 * Reverse old to newpage for clear and free.
 		 */
 		old = new;
 	} else {