From patchwork Mon Jan 18 17:01:22 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/27] mm: Introduce struct folio
Date: Mon, 18 Jan 2021 17:01:22 +0000
Message-Id: <20210118170148.3126186-2-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>
We have trouble keeping track of whether we've already called
compound_head() to ensure we're not operating on a tail page.  Further,
it's never clear whether we intend a struct page to refer to PAGE_SIZE
bytes or page_size(compound_head(page)).

Introduce a new type 'struct folio' that always refers to an entire
(possibly compound) page, and points to the head page (or base page).

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h       | 26 ++++++++++++++++++++++++++
 include/linux/mm_types.h | 17 +++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a5d618d08506..0858af6479a3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -924,6 +924,11 @@ static inline unsigned int compound_order(struct page *page)
 	return page[1].compound_order;
 }
 
+static inline unsigned int folio_order(struct folio *folio)
+{
+	return compound_order(&folio->page);
+}
+
 static inline bool hpage_pincount_available(struct page *page)
 {
 	/*
@@ -975,6 +980,26 @@ static inline unsigned int page_shift(struct page *page)
 
 void free_compound_page(struct page *page);
 
+static inline unsigned long folio_nr_pages(struct folio *folio)
+{
+	return compound_nr(&folio->page);
+}
+
+static inline struct folio *next_folio(struct folio *folio)
+{
+	return folio + folio_nr_pages(folio);
+}
+
+static inline unsigned int folio_shift(struct folio *folio)
+{
+	return PAGE_SHIFT + folio_order(folio);
+}
+
+static inline size_t folio_size(struct folio *folio)
+{
+	return PAGE_SIZE << folio_order(folio);
+}
+
 #ifdef CONFIG_MMU
 /*
  * Do pte_mkwrite, but only if the vma says VM_WRITE.  We do this when
@@ -1615,6 +1640,7 @@ extern void pagefault_out_of_memory(void);
 
 #define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)
 #define offset_in_thp(page, p)	((unsigned long)(p) & (thp_size(page) - 1))
+#define offset_in_folio(folio, p) ((unsigned long)(p) & (folio_size(folio) - 1))
 
 /*
  * Flags passed to show_mem() and show_free_areas() to suppress output in
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 07d9acb5b19c..875dc6cd6ad2 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -223,6 +223,23 @@ struct page {
 #endif
 } _struct_page_alignment;
 
+/*
+ * A struct folio is either a base (order-0) page or the head page of
+ * a compound page.
+ */
+struct folio {
+	struct page page;
+};
+
+static inline struct folio *page_folio(struct page *page)
+{
+	unsigned long head = READ_ONCE(page->compound_head);
+
+	if (unlikely(head & 1))
+		return (struct folio *)(head - 1);
+	return (struct folio *)page;
+}
+
 static inline atomic_t *compound_mapcount_ptr(struct page *page)
 {
 	return &page[1].compound_mapcount;
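The encoding that page_folio() relies on is that a tail page's compound_head field holds the head page's address with bit 0 set, so a single test distinguishes tail pages from head and base pages. The sketch below is a self-contained userspace model of that encoding and of the folio_order()/folio_size() arithmetic; the struct definitions are toy stand-ins for the kernel types, and READ_ONCE() plus the rest of the compound metadata are omitted.

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Toy stand-ins: just enough of struct page to model the encoding. */
struct page {
	unsigned long compound_head;	/* head address | 1 on tail pages, else 0 */
	unsigned int compound_order;	/* valid on page[1] of a compound page */
};

struct folio {
	struct page page;
};

static struct folio *page_folio(struct page *page)
{
	unsigned long head = page->compound_head;	/* READ_ONCE() in the kernel */

	if (head & 1)
		return (struct folio *)(head - 1);
	return (struct folio *)page;
}

/* Only meaningful for a multi-page folio; mirrors compound_order(). */
static unsigned int folio_order(struct folio *folio)
{
	return (&folio->page)[1].compound_order;
}

static unsigned long folio_size(struct folio *folio)
{
	return PAGE_SIZE << folio_order(folio);
}

int main(void)
{
	/* Model an order-2 compound page: four struct pages, page[0] is the head. */
	struct page pages[4] = { { 0, 0 } };
	int i;

	pages[1].compound_order = 2;
	for (i = 1; i < 4; i++)
		pages[i].compound_head = (unsigned long)&pages[0] | 1;

	/* Any constituent page maps back to the same folio: the head. */
	assert(page_folio(&pages[3]) == (struct folio *)&pages[0]);
	assert(folio_size(page_folio(&pages[2])) == 4 * PAGE_SIZE);
	printf("folio covers %lu bytes\n", folio_size(page_folio(&pages[0])));
	return 0;
}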
From patchwork Mon Jan 18 17:01:23 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/27] mm: Add folio_pgdat
Date: Mon, 18 Jan 2021 17:01:23 +0000
Message-Id: <20210118170148.3126186-3-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

This is just a convenience wrapper for callers with folios; pgdat can
be reached from tail pages as well as head pages.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0858af6479a3..5b071c226fd6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1500,6 +1500,11 @@ static inline pg_data_t *page_pgdat(const struct page *page)
 	return NODE_DATA(page_to_nid(page));
 }
 
+static inline pg_data_t *folio_pgdat(const struct folio *folio)
+{
+	return page_pgdat(&folio->page);
+}
+
 #ifdef SECTION_IN_PAGE_FLAGS
 static inline void set_page_section(struct page *page, unsigned long section)
 {
From patchwork Mon Jan 18 17:01:24 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/27] mm/vmstat: Add folio stat wrappers
Date: Mon, 18 Jan 2021 17:01:24 +0000
Message-Id: <20210118170148.3126186-4-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

Allow page counters to be more readily modified by callers which have
a folio.  Name these wrappers with 'stat' instead of 'state' as requested
by Linus here:
https://lore.kernel.org/linux-mm/CAHk-=wj847SudR-kt+46fT3+xFFgiwpgThvm7DJWGdi4cVrbnQ@mail.gmail.com/

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/vmstat.h | 60 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 773135fc6e19..3c3373c2c3c2 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -396,6 +396,54 @@ static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_pageset *pset) { }
 #endif		/* CONFIG_SMP */
 
+static inline
+void __inc_zone_folio_stat(struct folio *folio, enum zone_stat_item item)
+{
+	__inc_zone_page_state(&folio->page, item);
+}
+
+static inline
+void __dec_zone_folio_stat(struct folio *folio, enum zone_stat_item item)
+{
+	__dec_zone_page_state(&folio->page, item);
+}
+
+static inline
+void inc_zone_folio_stat(struct folio *folio, enum zone_stat_item item)
+{
+	inc_zone_page_state(&folio->page, item);
+}
+
+static inline
+void dec_zone_folio_stat(struct folio *folio, enum zone_stat_item item)
+{
+	dec_zone_page_state(&folio->page, item);
+}
+
+static inline
+void __inc_node_folio_stat(struct folio *folio, enum node_stat_item item)
+{
+	__inc_node_page_state(&folio->page, item);
+}
+
+static inline
+void __dec_node_folio_stat(struct folio *folio, enum node_stat_item item)
+{
+	__dec_node_page_state(&folio->page, item);
+}
+
+static inline
+void inc_node_folio_stat(struct folio *folio, enum node_stat_item item)
+{
+	inc_node_page_state(&folio->page, item);
+}
+
+static inline
+void dec_node_folio_stat(struct folio *folio, enum node_stat_item item)
+{
+	dec_node_page_state(&folio->page, item);
+}
+
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 					     int migratetype)
 {
@@ -530,6 +578,18 @@ static inline void __dec_lruvec_page_state(struct page *page,
 	__mod_lruvec_page_state(page, idx, -1);
 }
 
+static inline void __inc_lruvec_folio_stat(struct folio *folio,
+					   enum node_stat_item idx)
+{
+	__mod_lruvec_page_state(&folio->page, idx, 1);
+}
+
+static inline void __dec_lruvec_folio_stat(struct folio *folio,
+					   enum node_stat_item idx)
+{
+	__mod_lruvec_page_state(&folio->page, idx, -1);
+}
+
 static inline void inc_lruvec_state(struct lruvec *lruvec,
 				    enum node_stat_item idx)
 {
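These wrappers are pure forwarding: a caller that holds a struct folio no longer spells &folio->page at every statistics call site. A minimal userspace model of that pattern, showing only the node variants, with an invented node_stat[] array standing in for the kernel's per-node counters:

#include <assert.h>

enum node_stat_item { NR_FILE_PAGES, NR_FILE_DIRTY, NR_STAT_ITEMS };

struct page { unsigned long flags; };
struct folio { struct page page; };

static long node_stat[NR_STAT_ITEMS];	/* stub for the real per-node counters */

/* Page-based primitives, standing in for the existing vmstat API. */
static void __inc_node_page_state(struct page *page, enum node_stat_item item)
{
	(void)page;
	node_stat[item]++;
}

static void __dec_node_page_state(struct page *page, enum node_stat_item item)
{
	(void)page;
	node_stat[item]--;
}

/* The folio wrappers do nothing beyond forwarding &folio->page. */
static void __inc_node_folio_stat(struct folio *folio, enum node_stat_item item)
{
	__inc_node_page_state(&folio->page, item);
}

static void __dec_node_folio_stat(struct folio *folio, enum node_stat_item item)
{
	__dec_node_page_state(&folio->page, item);
}

int main(void)
{
	struct folio folio = { { 0 } };

	__inc_node_folio_stat(&folio, NR_FILE_DIRTY);
	assert(node_stat[NR_FILE_DIRTY] == 1);
	__dec_node_folio_stat(&folio, NR_FILE_DIRTY);
	assert(node_stat[NR_FILE_DIRTY] == 0);
	return 0;
}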
From patchwork Mon Jan 18 17:01:25 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/27] mm/debug: Add VM_BUG_ON_FOLIO and VM_WARN_ON_ONCE_FOLIO
Date: Mon, 18 Jan 2021 17:01:25 +0000
Message-Id: <20210118170148.3126186-5-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

These are the folio equivalents of VM_BUG_ON_PAGE and
VM_WARN_ON_ONCE_PAGE.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mmdebug.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 5d0767cb424a..77d24e1dcaec 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -23,6 +23,13 @@ void dump_mm(const struct mm_struct *mm);
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_BUG_ON_FOLIO(cond, folio)					\
+	do {								\
+		if (unlikely(cond)) {					\
+			dump_page(&folio->page, "VM_BUG_ON_FOLIO(" __stringify(cond)")");\
+			BUG();						\
+		}							\
+	} while (0)
 #define VM_BUG_ON_VMA(cond, vma)					\
 	do {								\
 		if (unlikely(cond)) {					\
@@ -48,6 +55,17 @@ void dump_mm(const struct mm_struct *mm);
 	}								\
 	unlikely(__ret_warn_once);					\
 })
+#define VM_WARN_ON_ONCE_FOLIO(cond, folio)	({			\
+	static bool __section(".data.once") __warned;			\
+	int __ret_warn_once = !!(cond);					\
+									\
+	if (unlikely(__ret_warn_once && !__warned)) {			\
+		dump_page(&folio->page, "VM_WARN_ON_ONCE_FOLIO(" __stringify(cond)")");\
+		__warned = true;					\
+		WARN_ON(1);						\
+	}								\
+	unlikely(__ret_warn_once);					\
+})
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
 
@@ -56,11 +74,13 @@ void dump_mm(const struct mm_struct *mm);
 #else
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
+#define VM_BUG_ON_FOLIO(cond, folio) VM_BUG_ON(cond)
 #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE_PAGE(cond, page)  BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_FOLIO(cond, folio)  BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 
 #endif
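VM_WARN_ON_ONCE_FOLIO() hides a static flag inside each call site's expansion, so the warning fires at most once while the macro still evaluates to the condition every time. A userspace approximation of that pattern, with fprintf() standing in for dump_page()/WARN_ON() and a plain static bool for the .data.once section; like the kernel's macro it relies on the GCC/Clang statement-expression extension:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct page { unsigned long flags; };
struct folio { struct page page; };

/* Warn the first time the condition is true at this call site; always return it. */
#define WARN_ON_ONCE_FOLIO(cond, folio) ({				\
	static bool __warned;						\
	int __ret_warn_once = !!(cond);					\
									\
	if (__ret_warn_once && !__warned) {				\
		fprintf(stderr, "folio %p: warning: %s\n",		\
			(void *)(folio), #cond);			\
		__warned = true;					\
	}								\
	__ret_warn_once;						\
})

int main(void)
{
	struct folio folio = { { 0 } };
	int i, warned = 0;

	for (i = 0; i < 3; i++)
		if (WARN_ON_ONCE_FOLIO(folio.page.flags == 0, &folio))
			warned++;

	/* The condition was true three times, but only one line was printed. */
	assert(warned == 3);
	return 0;
}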
From patchwork Mon Jan 18 17:01:26 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/27] mm: Add put_folio
Date: Mon, 18 Jan 2021 17:01:26 +0000
Message-Id: <20210118170148.3126186-6-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

If we know we have a folio, we can call put_folio() instead of put_page()
and save the overhead of calling compound_head().  Also skips the
devmap checks.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5b071c226fd6..4d135b62a2b6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1217,9 +1217,15 @@ static inline __must_check bool try_get_page(struct page *page)
 	return true;
 }
 
+static inline void put_folio(struct folio *folio)
+{
+	if (put_page_testzero(&folio->page))
+		__put_page(&folio->page);
+}
+
 static inline void put_page(struct page *page)
 {
-	page = compound_head(page);
+	struct folio *folio = page_folio(page);
 
 	/*
 	 * For devmap managed pages we need to catch refcount transition from
@@ -1227,13 +1233,12 @@ static inline void put_page(struct page *page)
 	 * need to inform the device driver through callback. See
 	 * include/linux/memremap.h and HMM for details.
 	 */
-	if (page_is_devmap_managed(page)) {
-		put_devmap_managed_page(page);
+	if (page_is_devmap_managed(&folio->page)) {
+		put_devmap_managed_page(&folio->page);
 		return;
 	}
 
-	if (put_page_testzero(page))
-		__put_page(page);
+	put_folio(folio);
 }
 
 /*
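put_folio() trusts its caller to have resolved the head page already, so the compound_head() lookup and the devmap check stay only in the page-based put_page(). A self-contained sketch of that split using toy types (non-atomic refcount, no devmap path, and a printf() standing in for __put_page()):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct page {
	unsigned long compound_head;	/* head address | 1 on tail pages */
	int _refcount;
};

struct folio { struct page page; };

static struct folio *page_folio(struct page *page)
{
	unsigned long head = page->compound_head;

	return (struct folio *)((head & 1) ? head - 1 : (unsigned long)page);
}

static bool put_page_testzero(struct page *page)
{
	return --page->_refcount == 0;
}

static void __put_page(struct page *page)
{
	printf("freeing folio at %p\n", (void *)page);
}

/* The caller already has the folio: no head lookup needed. */
static void put_folio(struct folio *folio)
{
	if (put_page_testzero(&folio->page))
		__put_page(&folio->page);
}

/* The caller has an arbitrary (possibly tail) page: resolve the folio once. */
static void put_page(struct page *page)
{
	put_folio(page_folio(page));
}

int main(void)
{
	struct page pages[2] = { { 0, 2 }, { 0, 0 } };

	pages[1].compound_head = (unsigned long)&pages[0] | 1;

	put_page(&pages[1]);			/* drops the head's count via a tail */
	assert(pages[0]._refcount == 1);
	put_folio((struct folio *)&pages[0]);	/* hits zero and "frees" */
	assert(pages[0]._refcount == 0);
	return 0;
}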
From patchwork Mon Jan 18 17:01:27 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/27] mm: Add get_folio
Date: Mon, 18 Jan 2021 17:01:27 +0000
Message-Id: <20210118170148.3126186-7-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4d135b62a2b6..380328930d6c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1192,18 +1192,19 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 }
 
 /* 127: arbitrary random number, small enough to assemble well */
-#define page_ref_zero_or_close_to_overflow(page) \
-	((unsigned int) page_ref_count(page) + 127u <= 127u)
+#define folio_ref_zero_or_close_to_overflow(folio) \
+	((unsigned int) page_ref_count(&folio->page) + 127u <= 127u)
+
+static inline void get_folio(struct folio *folio)
+{
+	/* Getting a page requires an already elevated page->_refcount. */
+	VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
+	page_ref_inc(&folio->page);
+}
 
 static inline void get_page(struct page *page)
 {
-	page = compound_head(page);
-	/*
-	 * Getting a normal page or the head of a compound page
-	 * requires to already have an elevated page->_refcount.
-	 */
-	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
-	page_ref_inc(page);
+	get_folio(page_folio(page));
 }
 
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
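The guard folio_ref_zero_or_close_to_overflow() is worth unpacking: adding 127 in unsigned arithmetic makes the comparison true exactly when the refcount is zero or within 127 of wrapping, so a single branch catches both a reference to a freed page and an imminent counter overflow. A small userspace demonstration of just that arithmetic (127 is the kernel's threshold; everything else here is a stand-in):

#include <assert.h>
#include <stdbool.h>
#include <limits.h>

/* True if ref is zero or within 127 of overflowing a 32-bit counter. */
static bool ref_zero_or_close_to_overflow(unsigned int ref)
{
	return ref + 127u <= 127u;
}

int main(void)
{
	/* A freed page (refcount 0) must never be re-elevated. */
	assert(ref_zero_or_close_to_overflow(0));

	/* Ordinary refcounts pass. */
	assert(!ref_zero_or_close_to_overflow(1));
	assert(!ref_zero_or_close_to_overflow(1000000));

	/* Counts about to wrap are caught by the same unsigned comparison. */
	assert(ref_zero_or_close_to_overflow(UINT_MAX));
	assert(ref_zero_or_close_to_overflow(UINT_MAX - 126));
	assert(!ref_zero_or_close_to_overflow(UINT_MAX - 127));
	return 0;
}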
From patchwork Mon Jan 18 17:01:28 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/27] mm: Create FolioFlags
Date: Mon, 18 Jan 2021 17:01:28 +0000
Message-Id: <20210118170148.3126186-8-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

These new functions are the folio analogues of the PageFlags functions.
If CONFIG_DEBUG_VM_PGFLAGS is enabled, we check the folio is not a tail
page at every invocation.  Note that this will also catch the PagePoisoned
case as a poisoned page has every bit set, which would include PageTail.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/fscache.h    |   6 +++
 include/linux/page-flags.h | 104 ++++++++++++++++++++++++++++++-------
 2 files changed, 90 insertions(+), 20 deletions(-)

diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index a1c928fe98e7..f1a5eddaa2c0 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -39,6 +39,12 @@
 #define TestSetPageFsCache(page)	TestSetPagePrivate2((page))
 #define TestClearPageFsCache(page)	TestClearPagePrivate2((page))
 
+#define FolioFsCache(folio)		FolioPrivate2((folio))
+#define SetFolioFsCache(folio)		SetFolioPrivate2((folio))
+#define ClearFolioFsCache(folio)	ClearFolioPrivate2((folio))
+#define TestSetFolioFsCache(folio)	TestSetFolioPrivate2((folio))
+#define TestClearFolioFsCache(folio)	TestClearFolioPrivate2((folio))
+
 /* pattern used to fill dead space in an index entry */
 #define FSCACHE_INDEX_DEADFILL_PATTERN 0x79
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index bc6fd1ee7dd6..ef0f68320917 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -212,6 +212,12 @@ static inline void page_init_poison(struct page *page, size_t size)
 }
 #endif
 
+static unsigned long *folio_flags(struct folio *folio)
+{
+	VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
+	return &folio->page.flags;
+}
+
 /*
  * Page flags policies wrt compound pages
  *
@@ -260,30 +266,44 @@ static inline void page_init_poison(struct page *page, size_t size)
  * Macros to create function definitions for page flags
  */
 #define TESTPAGEFLAG(uname, lname, policy)				\
+static __always_inline int Folio##uname(struct folio *folio)		\
+	{ return test_bit(PG_##lname, folio_flags(folio)); }		\
 static __always_inline int Page##uname(struct page *page)		\
 	{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
 
 #define SETPAGEFLAG(uname, lname, policy)				\
+static __always_inline void SetFolio##uname(struct folio *folio)	\
+	{ set_bit(PG_##lname, folio_flags(folio)); }			\
 static __always_inline void SetPage##uname(struct page *page)		\
 	{ set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define CLEARPAGEFLAG(uname, lname, policy)				\
+static __always_inline void ClearFolio##uname(struct folio *folio)	\
+	{ clear_bit(PG_##lname, folio_flags(folio)); }			\
 static __always_inline void ClearPage##uname(struct page *page)	\
 	{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define __SETPAGEFLAG(uname, lname, policy)				\
+static __always_inline void __SetFolio##uname(struct folio *folio)	\
+	{ __set_bit(PG_##lname, folio_flags(folio)); }			\
 static __always_inline void __SetPage##uname(struct page *page)	\
 	{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
 #define __CLEARPAGEFLAG(uname, lname, policy)				\
+static __always_inline void __ClearFolio##uname(struct folio *folio)	\
+	{ __clear_bit(PG_##lname, folio_flags(folio)); }		\
 static __always_inline void __ClearPage##uname(struct page *page)	\
 	{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define TESTSETFLAG(uname, lname, policy)				\
+static __always_inline int TestSetFolio##uname(struct folio *folio)	\
+	{ return test_and_set_bit(PG_##lname, folio_flags(folio)); }	\
 static __always_inline int TestSetPage##uname(struct page *page)	\
 	{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define TESTCLEARFLAG(uname, lname, policy)				\
+static __always_inline int TestClearFolio##uname(struct folio *folio)	\
+	{ return test_and_clear_bit(PG_##lname, folio_flags(folio)); }	\
 static __always_inline int TestClearPage##uname(struct page *page)	\
 	{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
@@ -302,21 +322,27 @@ static __always_inline int TestClearPage##uname(struct page *page)	\
 	TESTCLEARFLAG(uname, lname, policy)
 
 #define TESTPAGEFLAG_FALSE(uname)					\
+static inline int Folio##uname(const struct folio *folio) { return 0; } \
 static inline int Page##uname(const struct page *page) { return 0; }
 
 #define SETPAGEFLAG_NOOP(uname)						\
+static inline void SetFolio##uname(struct folio *folio) { }		\
 static inline void SetPage##uname(struct page *page) { }
 
 #define CLEARPAGEFLAG_NOOP(uname)					\
+static inline void ClearFolio##uname(struct folio *folio) { }		\
 static inline void ClearPage##uname(struct page *page) { }
 
 #define __CLEARPAGEFLAG_NOOP(uname)					\
+static inline void __ClearFolio##uname(struct folio *folio) { }	\
 static inline void __ClearPage##uname(struct page *page) { }
 
 #define TESTSETFLAG_FALSE(uname)					\
+static inline int TestSetFolio##uname(struct folio *folio) { return 0; } \
 static inline int TestSetPage##uname(struct page *page) { return 0; }
 
 #define TESTCLEARFLAG_FALSE(uname)					\
+static inline int TestClearFolio##uname(struct folio *folio) { return 0; } \
 static inline int TestClearPage##uname(struct page *page) { return 0; }
 
 #define PAGEFLAG_FALSE(uname) TESTPAGEFLAG_FALSE(uname)			\
@@ -393,14 +419,18 @@ PAGEFLAG_FALSE(HighMem)
 #endif
 
 #ifdef CONFIG_SWAP
-static __always_inline int PageSwapCache(struct page *page)
+static __always_inline bool FolioSwapCache(struct folio *folio)
 {
-#ifdef CONFIG_THP_SWAP
-	page = compound_head(page);
-#endif
-	return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags);
+	return FolioSwapBacked(folio) &&
+			test_bit(PG_swapcache, folio_flags(folio));
 }
+
+static __always_inline bool PageSwapCache(struct page *page)
+{
+	return FolioSwapCache(page_folio(page));
+}
+
 SETPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL)
 CLEARPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL)
 #else
@@ -478,10 +508,14 @@ static __always_inline int PageMappingFlags(struct page *page)
 	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
 }
 
-static __always_inline int PageAnon(struct page *page)
+static __always_inline bool FolioAnon(struct folio *folio)
 {
-	page = compound_head(page);
-	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
+	return ((unsigned long)folio->page.mapping & PAGE_MAPPING_ANON) != 0;
+}
+
+static __always_inline bool PageAnon(struct page *page)
+{
+	return FolioAnon(page_folio(page));
 }
 
 static __always_inline int __PageMovable(struct page *page)
@@ -509,18 +543,16 @@ TESTPAGEFLAG_FALSE(Ksm)
 
 u64 stable_page_flags(struct page *page);
 
-static inline int PageUptodate(struct page *page)
+static inline int FolioUptodate(struct folio *folio)
 {
-	int ret;
-	page = compound_head(page);
-	ret = test_bit(PG_uptodate, &(page)->flags);
+	int ret = test_bit(PG_uptodate, folio_flags(folio));
 	/*
 	 * Must ensure that the data we read out of the page is loaded
 	 * _after_ we've loaded page->flags to check for PageUptodate.
 	 * We can skip the barrier if the page is not uptodate, because
 	 * we wouldn't be reading anything from it.
 	 *
-	 * See SetPageUptodate() for the other side of the story.
+	 * See SetFolioUptodate() for the other side of the story.
 	 */
 	if (ret)
 		smp_rmb();
@@ -528,23 +560,36 @@ static inline int PageUptodate(struct page *page)
 	return ret;
 }
 
-static __always_inline void __SetPageUptodate(struct page *page)
+static inline int PageUptodate(struct page *page)
+{
+	return FolioUptodate(page_folio(page));
+}
+
+static __always_inline void __SetFolioUptodate(struct folio *folio)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
 	smp_wmb();
-	__set_bit(PG_uptodate, &page->flags);
+	__set_bit(PG_uptodate, folio_flags(folio));
 }
 
-static __always_inline void SetPageUptodate(struct page *page)
+static __always_inline void SetFolioUptodate(struct folio *folio)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
 	/*
 	 * Memory barrier must be issued before setting the PG_uptodate bit,
 	 * so that all previous stores issued in order to bring the page
 	 * uptodate are actually visible before PageUptodate becomes true.
 	 */
 	smp_wmb();
-	set_bit(PG_uptodate, &page->flags);
+	set_bit(PG_uptodate, folio_flags(folio));
+}
+
+static __always_inline void __SetPageUptodate(struct page *page)
+{
+	__SetFolioUptodate((struct folio *)page);
+}
+
+static __always_inline void SetPageUptodate(struct page *page)
+{
+	SetFolioUptodate((struct folio *)page);
 }
 
 CLEARPAGEFLAG(Uptodate, uptodate, PF_NO_TAIL)
@@ -569,6 +614,17 @@ static inline void set_page_writeback_keepwrite(struct page *page)
 
 __PAGEFLAG(Head, head, PF_ANY) CLEARPAGEFLAG(Head, head, PF_ANY)
 
+/* Whether there are one or multiple pages in a folio */
+static inline bool FolioSingle(struct folio *folio)
+{
+	return !FolioHead(folio);
+}
+
+static inline bool FolioMulti(struct folio *folio)
+{
+	return FolioHead(folio);
+}
+
 static __always_inline void set_compound_head(struct page *page, struct page *head)
 {
 	WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
@@ -593,6 +649,10 @@ static inline void ClearPageCompound(struct page *page)
 int PageHuge(struct page *page);
 int PageHeadHuge(struct page *page);
 bool page_huge_active(struct page *page);
+static inline bool FolioHuge(struct folio *folio)
+{
+	return PageHeadHuge(&folio->page);
+}
 #else
 TESTPAGEFLAG_FALSE(Huge)
 TESTPAGEFLAG_FALSE(HeadHuge)
@@ -603,7 +663,6 @@ static inline bool page_huge_active(struct page *page)
 }
 #endif
 
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /*
  * PageHuge() only returns true for hugetlbfs pages, but not for
@@ -850,6 +909,11 @@ static inline int page_has_private(struct page *page)
 	return !!(page->flags & PAGE_FLAGS_PRIVATE);
 }
 
+static inline bool folio_has_private(struct folio *folio)
+{
+	return page_has_private(&folio->page);
+}
+
 #undef PF_ANY
 #undef PF_HEAD
 #undef PF_ONLY_HEAD
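Each macro above stamps out a Folio accessor that works directly on folio->page.flags, next to the existing Page accessor that still goes through the per-flag policy. The sketch below models the generation pattern in userspace with two toy flags and non-atomic bit helpers in place of the kernel's bitops; the memory barriers that the real Uptodate accessors add are not modelled:

#include <assert.h>
#include <stdbool.h>

struct page { unsigned long flags; };
struct folio { struct page page; };

enum { PG_uptodate, PG_dirty };		/* toy flag numbers for the model */

static unsigned long *folio_flags(struct folio *folio)
{
	return &folio->page.flags;
}

/* Non-atomic stand-ins for test_bit()/set_bit()/clear_bit(). */
static bool test_bit(int nr, const unsigned long *addr) { return *addr & (1UL << nr); }
static void set_bit(int nr, unsigned long *addr) { *addr |= 1UL << nr; }
static void clear_bit(int nr, unsigned long *addr) { *addr &= ~(1UL << nr); }

/* One macro generates the accessor family for a flag, as in the patch. */
#define FOLIO_FLAG(uname, lname)					\
static bool Folio##uname(struct folio *folio)				\
	{ return test_bit(PG_##lname, folio_flags(folio)); }		\
static void SetFolio##uname(struct folio *folio)			\
	{ set_bit(PG_##lname, folio_flags(folio)); }			\
static void ClearFolio##uname(struct folio *folio)			\
	{ clear_bit(PG_##lname, folio_flags(folio)); }

FOLIO_FLAG(Uptodate, uptodate)
FOLIO_FLAG(Dirty, dirty)

int main(void)
{
	struct folio folio = { { 0 } };

	SetFolioUptodate(&folio);
	SetFolioDirty(&folio);
	assert(FolioUptodate(&folio) && FolioDirty(&folio));
	ClearFolioDirty(&folio);
	ClearFolioUptodate(&folio);
	assert(!FolioUptodate(&folio) && !FolioDirty(&folio));
	return 0;
}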
From patchwork Mon Jan 18 17:01:29 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/27] mm: Handle per-folio private data
Date: Mon, 18 Jan 2021 17:01:29 +0000
Message-Id: <20210118170148.3126186-9-willy@infradead.org>
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>

Add folio_private() and set_folio_private() which mirror page_private()
and set_page_private() -- ie folio private data is the same as page
private data.
Turn attach_page_private() into attach_folio_private() and reimplement
attach_page_private() as a wrapper.  No filesystem which uses page private
data currently supports compound pages, so we're free to define the rules.
attach_page_private() may only be called on a head page; if you want to
add private data to a tail page, you can call set_page_private() directly
(and shouldn't increment the page refcount!  That should be done when
adding private data to the head page / folio).

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm_types.h | 16 ++++++++++++++
 include/linux/pagemap.h  | 48 ++++++++++++++++++++++++----------------
 2 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 875dc6cd6ad2..750184130074 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -258,6 +258,12 @@ static inline atomic_t *compound_pincount_ptr(struct page *page)
 #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 
+/*
+ * page_private can be used on tail pages.  However, PagePrivate is only
+ * checked by the VM on the head page.  So page_private on the tail pages
+ * should be used for data that's ancillary to the head page (eg attaching
+ * buffer heads to tail pages after attaching buffer heads to the head page)
+ */
 #define page_private(page)		((page)->private)
 
 static inline void set_page_private(struct page *page, unsigned long private)
@@ -265,6 +271,16 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }
 
+static inline unsigned long folio_private(struct folio *folio)
+{
+	return folio->page.private;
+}
+
+static inline void set_folio_private(struct folio *folio, unsigned long v)
+{
+	folio->page.private = v;
+}
+
 struct page_frag_cache {
 	void * va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 4317f34866c7..a739ada01d27 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -245,42 +245,52 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 }
 
 /**
- * attach_page_private - Attach private data to a page.
- * @page: Page to attach data to.
- * @data: Data to attach to page.
+ * attach_folio_private - Attach private data to a folio.
+ * @folio: Folio to attach data to.
+ * @data: Data to attach to folio.
  *
- * Attaching private data to a page increments the page's reference count.
- * The data must be detached before the page will be freed.
+ * Attaching private data to a folio increments the page's reference count.
+ * The data must be detached before the folio will be freed.
  */
-static inline void attach_page_private(struct page *page, void *data)
+static inline void attach_folio_private(struct folio *folio, void *data)
 {
-	get_page(page);
-	set_page_private(page, (unsigned long)data);
-	SetPagePrivate(page);
+	get_folio(folio);
+	set_folio_private(folio, (unsigned long)data);
+	SetFolioPrivate(folio);
 }
 
 /**
- * detach_page_private - Detach private data from a page.
- * @page: Page to detach data from.
+ * detach_folio_private - Detach private data from a folio.
+ * @folio: Folio to detach data from.
  *
- * Removes the data that was previously attached to the page and decrements
+ * Removes the data that was previously attached to the folio and decrements
  * the refcount on the page.
  *
- * Return: Data that was attached to the page.
+ * Return: Data that was attached to the folio.
  */
-static inline void *detach_page_private(struct page *page)
+static inline void *detach_folio_private(struct folio *folio)
 {
-	void *data = (void *)page_private(page);
+	void *data = (void *)folio_private(folio);
 
-	if (!PagePrivate(page))
+	if (!FolioPrivate(folio))
 		return NULL;
-	ClearPagePrivate(page);
-	set_page_private(page, 0);
-	put_page(page);
+	ClearFolioPrivate(folio);
+	set_folio_private(folio, 0);
+	put_folio(folio);
 
 	return data;
 }
 
+static inline void attach_page_private(struct page *page, void *data)
+{
+	attach_folio_private((struct folio *)page, data);
+}
+
+static inline void *detach_page_private(struct page *page)
+{
+	return detach_folio_private((struct folio *)page);
+}
+
 #ifdef CONFIG_NUMA
 extern struct page *__page_cache_alloc(gfp_t gfp);
 #else
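The contract is symmetric: attaching private data takes a folio reference and sets the Private flag, while detaching clears the flag, drops that reference and hands the data back (or returns NULL if nothing was attached). A compact userspace model of the pairing, with toy types, a plain bool for the flag, and a non-atomic refcount:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct page {
	unsigned long private;
	int _refcount;
	bool has_private;	/* stand-in for the PG_private flag bit */
};

struct folio { struct page page; };

static void get_folio(struct folio *folio)  { folio->page._refcount++; }
static void put_folio(struct folio *folio)  { folio->page._refcount--; }

static void attach_folio_private(struct folio *folio, void *data)
{
	get_folio(folio);			/* the data pins the folio */
	folio->page.private = (unsigned long)data;
	folio->page.has_private = true;
}

static void *detach_folio_private(struct folio *folio)
{
	void *data = (void *)folio->page.private;

	if (!folio->page.has_private)
		return NULL;
	folio->page.has_private = false;
	folio->page.private = 0;
	put_folio(folio);			/* drop the pin taken at attach */
	return data;
}

int main(void)
{
	struct folio folio = { { 0, 1, false } };
	int buffers;				/* pretend per-folio fs state */

	assert(detach_folio_private(&folio) == NULL);

	attach_folio_private(&folio, &buffers);
	assert(folio.page._refcount == 2);

	assert(detach_folio_private(&folio) == &buffers);
	assert(folio.page._refcount == 1);
	return 0;
}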
DFYAs0zP6BIJeXpvgNbIXfluIGXtX7kO+irPq/FagQNECqXOFfk2Q6Hf0X/BNYOrbiTbnIubADB/1 DKQa8lJ6aK5p2fuus52/CshrwVKesDa3MzaWZW/mAwcDEANYw8uoT2fd4siKjk6a6iGpl58mf86KI h+BpUt+5zv2zIRYVXrZ27BJuWeAgyjI8FSFSUPCLG0TWoOe1ASFCKfmzlZa1VxjRRiwD77LoUmxSp uE0oNbI23A7MFnbVctloLsgo9Oel3a9eAmDB2dLQS22L1xt3+tcTkxHrmJ4O5WXaVUIUi2M71FWlj 9urk5BUg==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xul-00D7Io-Ds; Mon, 18 Jan 2021 17:02:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 09/27] mm: Add folio_index, folio_page and folio_contains Date: Mon, 18 Jan 2021 17:01:30 +0000 Message-Id: <20210118170148.3126186-10-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: folio_index() is the equivalent of page_index() for folios. folio_page() finds the page in a folio for a page cache index. folio_contains() tells you whether a folio contains a particular page cache index. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a739ada01d27..c27b74c63b5e 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -447,6 +447,29 @@ static inline bool thp_contains(struct page *head, pgoff_t index) return page_index(head) == (index & ~(thp_nr_pages(head) - 1UL)); } +static inline pgoff_t folio_index(struct folio *folio) +{ + if (unlikely(FolioSwapCache(folio))) + return __page_file_index(&folio->page); + return folio->page.index; +} + +static inline struct page *folio_page(struct folio *folio, pgoff_t index) +{ + index -= folio_index(folio); + VM_BUG_ON_FOLIO(index >= folio_nr_pages(folio), folio); + return &folio->page + index; +} + +/* Does this folio contain this index? 
*/ +static inline bool folio_contains(struct folio *folio, pgoff_t index) +{ + /* HugeTLBfs indexes the page cache in units of hpage_size */ + if (PageHuge(&folio->page)) + return folio->page.index == index; + return index - folio_index(folio) < folio_nr_pages(folio); +} + /* * Given the page we found in the page cache, return the page corresponding * to this index in the file From patchwork Mon Jan 18 17:01:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027831 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9CEDAC433E0 for ; Mon, 18 Jan 2021 17:02:15 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 32F36222BB for ; Mon, 18 Jan 2021 17:02:14 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 32F36222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 23D4C6B00E3; Mon, 18 Jan 2021 12:02:11 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1F09F6B00E5; Mon, 18 Jan 2021 12:02:11 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1080E6B00E7; Mon, 18 Jan 2021 12:02:11 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0194.hostedemail.com [216.40.44.194]) by kanga.kvack.org (Postfix) with ESMTP id DC9A46B00E3 for ; Mon, 18 Jan 2021 12:02:10 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 9C5164DD8 for ; Mon, 18 Jan 2021 17:02:10 +0000 (UTC) X-FDA: 77719513620.08.lock34_2e0e4822754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin08.hostedemail.com (Postfix) with ESMTP id 5F5621819E627 for ; Mon, 18 Jan 2021 17:02:10 +0000 (UTC) X-HE-Tag: lock34_2e0e4822754a X-Filterd-Recvd-Size: 5689 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf34.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=6YK8dxrIi6WHXmqxTYcAlqgkXxY6OI1r+h3iSsQcrI0=; b=C3raeL3Kn0g4dP57UwlPf+W+w1 ASPS/ph0PPLk1lgQZSN/QUf9GAS7hOISsupaAtKqlnnUSnkdFPekYxYrvlvbhJdi0LAyJS2C96Ho0 CQR7vQqZvS9L7t5gO37j7uy/Giixv3ZY2PYPlcU/PPNd4QyHuRfa7DPlZvaGfO0w+l9l/6GcP7pwe 53OlfOQFz81F3x9OEhRNtacl6pHmBhipXMC7iOuC9xFTW/vWXjtUtX3GvVhhYte3EXUwfzAxp5SZj LvOKX7ERIP0GzFHjICjw052chJUNhQZOOHWh6CmHzdRwdxa3HGc5lVzxaqHXU7LksNxIHFDn+pSmT NyUJteVQ==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xum-00D7Iv-FV; Mon, 18 
Jan 2021 17:02:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 10/27] mm/util: Add folio_mapping and folio_file_mapping Date: Mon, 18 Jan 2021 17:01:31 +0000 Message-Id: <20210118170148.3126186-11-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These are the folio equivalent of page_mapping() and page_file_mapping(). Adjust page_file_mapping() and page_mapping_file() to use folios internally. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/mm.h | 23 +++++++++++++++-------- mm/swapfile.c | 6 +++--- mm/util.c | 20 ++++++++++---------- 3 files changed, 28 insertions(+), 21 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 380328930d6c..46cee44c0c68 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1586,17 +1586,25 @@ void page_address_init(void); extern void *page_rmapping(struct page *page); extern struct anon_vma *page_anon_vma(struct page *page); -extern struct address_space *page_mapping(struct page *page); +struct address_space *folio_mapping(struct folio *); +struct address_space *__folio_file_mapping(struct folio *); -extern struct address_space *__page_file_mapping(struct page *); +static inline struct address_space *page_mapping(struct page *page) +{ + return folio_mapping(page_folio(page)); +} -static inline -struct address_space *page_file_mapping(struct page *page) +static inline struct address_space *folio_file_mapping(struct folio *folio) { - if (unlikely(PageSwapCache(page))) - return __page_file_mapping(page); + if (unlikely(FolioSwapCache(folio))) + return __folio_file_mapping(folio); - return page->mapping; + return folio->page.mapping; +} + +static inline struct address_space *page_file_mapping(struct page *page) +{ + return folio_file_mapping(page_folio(page)); } extern pgoff_t __page_file_index(struct page *page); @@ -1613,7 +1621,6 @@ static inline pgoff_t page_index(struct page *page) } bool page_mapped(struct page *page); -struct address_space *page_mapping(struct page *page); struct address_space *page_mapping_file(struct page *page); /* diff --git a/mm/swapfile.c b/mm/swapfile.c index 9fffc5af29d1..ddb734fccfc3 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -3551,11 +3551,11 @@ struct swap_info_struct *page_swap_info(struct page *page) /* * out-of-line __page_file_ methods to avoid include hell. 
*/ -struct address_space *__page_file_mapping(struct page *page) +struct address_space *__folio_file_mapping(struct folio *folio) { - return page_swap_info(page)->swap_file->f_mapping; + return page_swap_info(&folio->page)->swap_file->f_mapping; } -EXPORT_SYMBOL_GPL(__page_file_mapping); +EXPORT_SYMBOL_GPL(__folio_file_mapping); pgoff_t __page_file_index(struct page *page) { diff --git a/mm/util.c b/mm/util.c index c37e24d5fa43..c052c39b9f1c 100644 --- a/mm/util.c +++ b/mm/util.c @@ -686,39 +686,39 @@ struct anon_vma *page_anon_vma(struct page *page) return __page_rmapping(page); } -struct address_space *page_mapping(struct page *page) +struct address_space *folio_mapping(struct folio *folio) { struct address_space *mapping; - page = compound_head(page); - /* This happens if someone calls flush_dcache_page on slab page */ - if (unlikely(PageSlab(page))) + if (unlikely(FolioSlab(folio))) return NULL; - if (unlikely(PageSwapCache(page))) { + if (unlikely(FolioSwapCache(folio))) { swp_entry_t entry; - entry.val = page_private(page); + entry.val = folio_private(folio); return swap_address_space(entry); } - mapping = page->mapping; + mapping = folio->page.mapping; if ((unsigned long)mapping & PAGE_MAPPING_ANON) return NULL; return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS); } -EXPORT_SYMBOL(page_mapping); +EXPORT_SYMBOL(folio_mapping); /* * For file cache pages, return the address_space, otherwise return NULL */ struct address_space *page_mapping_file(struct page *page) { - if (unlikely(PageSwapCache(page))) + struct folio *folio = page_folio(page); + + if (unlikely(FolioSwapCache(folio))) return NULL; - return page_mapping(page); + return folio_mapping(folio); } /* Slow path of page_mapcount() for compound pages */ From patchwork Mon Jan 18 17:01:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027835 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07D1DC433DB for ; Mon, 18 Jan 2021 17:02:22 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AB3A6222BB for ; Mon, 18 Jan 2021 17:02:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AB3A6222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D13096B00E8; Mon, 18 Jan 2021 12:02:19 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C9F806B00EA; Mon, 18 Jan 2021 12:02:19 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B65726B00EB; Mon, 18 Jan 2021 12:02:19 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0066.hostedemail.com [216.40.44.66]) by kanga.kvack.org (Postfix) with ESMTP id 9FB346B00E8 for ; Mon, 18 Jan 2021 12:02:19 -0500 (EST) Received: from smtpin26.hostedemail.com 
(10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 5EE442493 for ; Mon, 18 Jan 2021 17:02:19 +0000 (UTC) X-FDA: 77719513998.26.clock66_0b063bd2754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin26.hostedemail.com (Postfix) with ESMTP id 85DB21804A314 for ; Mon, 18 Jan 2021 17:02:17 +0000 (UTC) X-HE-Tag: clock66_0b063bd2754a X-Filterd-Recvd-Size: 6096 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf32.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=QtagCdqlQRSTNk83TE6iZzeshtghM/dQX9PoljD/GxI=; b=NHUggwmMfkx38kuG5SvJRxpVR6 FYsVmuQLHbSJVFSXVNFO8MLK+5pkUCBJZpaVi/8jESUN7vQ7Wq0c6MHQL8hQ2BCmcVWtP9PNI6bzz 95FsUNZtsAZ4F1G8fjg+rvNJyylih4YBWh10q1AeE097TLQwEfK5VWS70/xDCAWv6WhYrVUnfvrVI FUxEtk+APnGrR9NNyWD27mRadHobF0i/0wBo+aRtkwCXRy0fD0f/9XaEBVykTv698GiDaihlnmBvf VGK0O9yEKhKH6kYmr/Re1/400/oFkSGxwMgVjGeka1AJSqar1zGPZiXz4xShs03nmoo0uU8EZTteB 31bZL/3w==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xuo-00D7J7-6s; Mon, 18 Jan 2021 17:02:06 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 11/27] mm/memcg: Add folio_memcg, lock_folio_memcg and unlock_folio_memcg Date: Mon, 18 Jan 2021 17:01:32 +0000 Message-Id: <20210118170148.3126186-12-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The memcontrol code already assumes that page_memcg() will be called with a non-tail page, so make that more natural by wrapping it with a folio API. 
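As a usage sketch only (not part of this patch; example_folio_stat_update() is an invented name), a caller that already has a folio takes the binding lock through the folio API exactly as lock_page_memcg() callers do today:

static void example_folio_stat_update(struct folio *folio)
{
	struct mem_cgroup *memcg;

	/* May return NULL (eg memcg disabled or folio not charged) ... */
	memcg = lock_folio_memcg(folio);

	/* ... update per-memcg counters for this folio using memcg ... */

	/* ... but must still be paired with unlock_folio_memcg(). */
	unlock_folio_memcg(folio);
}
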
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/memcontrol.h | 16 ++++++++++++++++ mm/memcontrol.c | 36 ++++++++++++++++++++++++------------ 2 files changed, 40 insertions(+), 12 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 7a38a1517a05..89aaa22506e6 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -383,6 +383,11 @@ static inline struct mem_cgroup *page_memcg(struct page *page) return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } +static inline struct mem_cgroup *folio_memcg(struct folio *folio) +{ + return page_memcg(&folio->page); +} + /* * page_memcg_rcu - locklessly get the memory cgroup associated with a page * @page: a pointer to the page struct @@ -869,8 +874,10 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg); extern bool cgroup_memory_noswap; #endif +struct mem_cgroup *lock_folio_memcg(struct folio *folio); struct mem_cgroup *lock_page_memcg(struct page *page); void __unlock_page_memcg(struct mem_cgroup *memcg); +void unlock_folio_memcg(struct folio *folio); void unlock_page_memcg(struct page *page); /* @@ -1298,6 +1305,11 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) { } +static inline struct mem_cgroup *lock_folio_memcg(struct folio *folio) +{ + return NULL; +} + static inline struct mem_cgroup *lock_page_memcg(struct page *page) { return NULL; @@ -1307,6 +1319,10 @@ static inline void __unlock_page_memcg(struct mem_cgroup *memcg) { } +static inline void unlock_folio_memcg(struct folio *folio) +{ +} + static inline void unlock_page_memcg(struct page *page) { } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 1b1ec0c1b6f8..d5ec868cd9f7 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2140,19 +2140,18 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg) } /** - * lock_page_memcg - lock a page and memcg binding - * @page: the page + * lock_folio_memcg - lock a folio and memcg binding + * @folio: the folio * - * This function protects unlocked LRU pages from being moved to + * This function protects unlocked LRU folios from being moved to * another cgroup. * * It ensures lifetime of the returned memcg. Caller is responsible - * for the lifetime of the page; __unlock_page_memcg() is available - * when @page might get freed inside the locked section. + * for the lifetime of the folio; __unlock_folio_memcg() is available + * when @folio might get freed inside the locked section. 
*/ -struct mem_cgroup *lock_page_memcg(struct page *page) +struct mem_cgroup *lock_folio_memcg(struct folio *folio) { - struct page *head = compound_head(page); /* rmap on tail pages */ struct mem_cgroup *memcg; unsigned long flags; @@ -2172,7 +2171,7 @@ struct mem_cgroup *lock_page_memcg(struct page *page) if (mem_cgroup_disabled()) return NULL; again: - memcg = page_memcg(head); + memcg = folio_memcg(folio); if (unlikely(!memcg)) return NULL; @@ -2186,7 +2185,7 @@ struct mem_cgroup *lock_page_memcg(struct page *page) return memcg; spin_lock_irqsave(&memcg->move_lock, flags); - if (memcg != page_memcg(head)) { + if (memcg != folio_memcg(folio)) { spin_unlock_irqrestore(&memcg->move_lock, flags); goto again; } @@ -2201,6 +2200,12 @@ struct mem_cgroup *lock_page_memcg(struct page *page) return memcg; } +EXPORT_SYMBOL(lock_folio_memcg); + +struct mem_cgroup *lock_page_memcg(struct page *page) +{ + return lock_folio_memcg(page_folio(page)); +} EXPORT_SYMBOL(lock_page_memcg); /** @@ -2223,15 +2228,22 @@ void __unlock_page_memcg(struct mem_cgroup *memcg) rcu_read_unlock(); } +/** + * unlock_folio_memcg - unlock a folio and memcg binding + * @folio: the folio + */ +void unlock_folio_memcg(struct folio *folio) +{ + __unlock_page_memcg(folio_memcg(folio)); +} + /** * unlock_page_memcg - unlock a page and memcg binding * @page: the page */ void unlock_page_memcg(struct page *page) { - struct page *head = compound_head(page); - - __unlock_page_memcg(page_memcg(head)); + unlock_folio_memcg(page_folio(page)); } EXPORT_SYMBOL(unlock_page_memcg); From patchwork Mon Jan 18 17:01:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39429C433DB for ; Mon, 18 Jan 2021 17:02:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DC214222BB for ; Mon, 18 Jan 2021 17:02:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DC214222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 0976F6B00EE; Mon, 18 Jan 2021 12:02:23 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id F39496B00F0; Mon, 18 Jan 2021 12:02:22 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E22AF6B00F2; Mon, 18 Jan 2021 12:02:22 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0066.hostedemail.com [216.40.44.66]) by kanga.kvack.org (Postfix) with ESMTP id C7FAA6B00EE for ; Mon, 18 Jan 2021 12:02:22 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 934F4181AF5C6 for ; Mon, 18 Jan 2021 17:02:22 +0000 (UTC) X-FDA: 77719514124.17.hat46_4a10a842754a 
Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 78561180D078C for ; Mon, 18 Jan 2021 17:02:22 +0000 (UTC) X-HE-Tag: hat46_4a10a842754a X-Filterd-Recvd-Size: 2501 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf39.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ML5D3QchsSfCvjh/6UQ/1m+q+OQV+91kWcgwnlcDp24=; b=Ry7JrSBd6iQQ4MvfXQoEVwAN9F TmQb4UXoz60oppMczsz01gcZMD9G6mJxmT+FTZB80pT25hJkv040mM2+wxVMd7gw30etQIcxHNR03 irEr7irhwLXUph+H3Mg8RrltI21HfrEBUDZXDkxwSfLo7d9/Om8UCwUxBGgES2ge5H3d1lH39+fwf tFV4NK9pnlMNbCzTJjh1Px+sOdI3AAqUUArIdcOUIiNYd2LzWhmZuaN5s+D6RFIBHy2m+AyWfWAII elL12cXopcQrJk/Ysqeosl6AdPMz9YK7S3sD78ueUSPnOKG3CbMLhT2JAaubEoOh8rII36K6QVwxS 4ZRW5vuQ==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xuq-00D7K2-CU; Mon, 18 Jan 2021 17:02:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 12/27] mm/memcg: Add mem_cgroup_folio_lruvec Date: Mon, 18 Jan 2021 17:01:33 +0000 Message-Id: <20210118170148.3126186-13-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: mem_cgroup_page_lruvec() already expects a head page, so this will add some typesafety once we can remove mem_cgroup_page_lruvec(). 
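As a usage sketch only (not part of this patch; example_folio_lruvec() is an invented name), a caller that already has a folio passes the node data exactly as it would for mem_cgroup_page_lruvec():

static struct lruvec *example_folio_lruvec(struct folio *folio)
{
	struct pglist_data *pgdat = page_pgdat(&folio->page);

	/* Same semantics as mem_cgroup_page_lruvec(), with folio typesafety. */
	return mem_cgroup_folio_lruvec(folio, pgdat);
}
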
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/memcontrol.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 89aaa22506e6..ec7ecfc0e47b 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1454,6 +1454,12 @@ static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) } #endif /* CONFIG_MEMCG */ +static inline struct lruvec *mem_cgroup_folio_lruvec(struct folio *folio, + struct pglist_data *pgdat) +{ + return mem_cgroup_page_lruvec(&folio->page, pgdat); +} + static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx) { __mod_lruvec_kmem_state(p, idx, 1); From patchwork Mon Jan 18 17:01:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027845 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 628D4C433E0 for ; Mon, 18 Jan 2021 17:02:39 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 128F1222BB for ; Mon, 18 Jan 2021 17:02:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 128F1222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D11276B00F3; Mon, 18 Jan 2021 12:02:26 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id CC3D16B00F6; Mon, 18 Jan 2021 12:02:26 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BFED66B00F7; Mon, 18 Jan 2021 12:02:26 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0176.hostedemail.com [216.40.44.176]) by kanga.kvack.org (Postfix) with ESMTP id A3C966B00F3 for ; Mon, 18 Jan 2021 12:02:26 -0500 (EST) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 40964180ACEEB for ; Mon, 18 Jan 2021 17:02:26 +0000 (UTC) X-FDA: 77719514292.07.wing26_3b169502754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin07.hostedemail.com (Postfix) with ESMTP id 1C0EA1803F9AD for ; Mon, 18 Jan 2021 17:02:26 +0000 (UTC) X-HE-Tag: wing26_3b169502754a X-Filterd-Recvd-Size: 4867 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf47.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=bXX3+hQnf3NuQ+rlk9aKuExUlTCGLcKGpr5DsiUhGrU=; b=LVv2zxKh7SZews7ObE6Vwq2Xdu kmm7pN5XWJNd6j/UvRS0RZmom6v8vbyFnDLSRfdOtmhqCrJmJBIITGYSvsXEPyxfWycYoWQCDB8Be 
9HLRvTXSm21Ont9wXs4kLTwxOdwK7DZy9Ve7XjVnYr3ikxo01+QlrNkq2WvvlxSMPoarL6wAZ5WOJ E37A6xwfNNgqBXG1whCs6DENufExUrnG8kV2CxGiH+eNC2RV9BvpIH7JW6SQ2KM6Sq65T2kBMqPZS qTAFrfATuj8UzQRBb25QcMhAzhse9Mwc+7kpDbpupgPKONIvF7XF2FHh7hj8gKs84cIxACAS/HUWU TPut5faw==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xuz-00D7KR-9N; Mon, 18 Jan 2021 17:02:20 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 13/27] mm: Add unlock_folio Date: Mon, 18 Jan 2021 17:01:34 +0000 Message-Id: <20210118170148.3126186-14-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert unlock_page() to call unlock_folio(). By using a folio we avoid a call to compound_head(). This shortens the function from 39 bytes to 25 and removes 4 instructions on x86-64. Those instructions are currently pushed into each caller, but subsequent patches will convert many of the callers to operate on folios. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 16 +++++++++++++++- mm/filemap.c | 27 ++++++++++----------------- 2 files changed, 25 insertions(+), 18 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index c27b74c63b5e..44675104008b 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -623,7 +623,21 @@ extern int __lock_page_killable(struct page *page); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags); -extern void unlock_page(struct page *page); +void unlock_folio(struct folio *folio); + +/** + * unlock_page - Unlock a locked page. + * @page: The page. + * + * Unlocks the page and wakes up any thread sleeping on the page lock. + * + * Context: May be called from interrupt or process context. May not be + * called from NMI context. + */ +static inline void unlock_page(struct page *page) +{ + return unlock_folio(page_folio(page)); +} /* * Return true if the page was successfully locked diff --git a/mm/filemap.c b/mm/filemap.c index bb28dd6d9e22..31470b36ac89 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1438,29 +1438,22 @@ static inline bool clear_bit_unlock_is_negative_byte(long nr, volatile void *mem #endif /** - * unlock_page - unlock a locked page - * @page: the page + * unlock_folio - Unlock a locked folio. + * @folio: The folio. * - * Unlocks the page and wakes up sleepers in wait_on_page_locked(). - * Also wakes sleepers in wait_on_page_writeback() because the wakeup - * mechanism between PageLocked pages and PageWriteback pages is shared. - * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep. + * Unlocks the folio and wakes up any thread sleeping on the page lock. * - * Note that this depends on PG_waiters being the sign bit in the byte - * that contains PG_locked - thus the BUILD_BUG_ON(). That allows us to - * clear the PG_locked bit and test PG_waiters at the same time fairly - * portably (architectures that do LL/SC can test any bit, while x86 can - * test the sign bit). + * Context: May be called from interrupt or process context. 
May not be + * called from NMI context. */ -void unlock_page(struct page *page) +void unlock_folio(struct folio *folio) { BUILD_BUG_ON(PG_waiters != 7); - page = compound_head(page); - VM_BUG_ON_PAGE(!PageLocked(page), page); - if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags)) - wake_up_page_bit(page, PG_locked); + VM_BUG_ON_FOLIO(!FolioLocked(folio), folio); + if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio))) + wake_up_page_bit(&folio->page, PG_locked); } -EXPORT_SYMBOL(unlock_page); +EXPORT_SYMBOL(unlock_folio); /** * end_page_writeback - end writeback against a page From patchwork Mon Jan 18 17:01:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4666C433DB for ; Mon, 18 Jan 2021 17:02:42 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 527D4222BB for ; Mon, 18 Jan 2021 17:02:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 527D4222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7BB3C6B00F6; Mon, 18 Jan 2021 12:02:37 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 76C766B00F8; Mon, 18 Jan 2021 12:02:37 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6A85D6B00F9; Mon, 18 Jan 2021 12:02:37 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0066.hostedemail.com [216.40.44.66]) by kanga.kvack.org (Postfix) with ESMTP id 570836B00F8 for ; Mon, 18 Jan 2021 12:02:37 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 1A83B3652 for ; Mon, 18 Jan 2021 17:02:37 +0000 (UTC) X-FDA: 77719514754.22.cast07_1b05d3d2754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id EEEED18038E60 for ; Mon, 18 Jan 2021 17:02:36 +0000 (UTC) X-HE-Tag: cast07_1b05d3d2754a X-Filterd-Recvd-Size: 6281 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf10.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=OGxJBD+EGZ8LUoa8kIajxEh7bdJutIWgtVED3sb4XOc=; b=Jxh1C4kkAM388NqB+0T+q7yVDi ZN4q3vcsEmq6TfpEGzU/8bcI5nv2rDuVsejyG23ktGfe7OvwPmnHm+90AoItpyC8xHhObEnaVoFVt NBCPGr1E/jH8+zZeSralkjTMCEqOmk8gz04nEVYVLt9ndDMjkF2h1wIKmKx8JcdPdM3+ALypgbr4x 
iZpz5SwL8SgambrI8xMh8l14s91yRtVo/TVrPyvfcItvQlfk0NAA/Q2boOaeUd7ScBzMBSHv716LN 3BsCPoczw94xrFwunVwdlK7MjdBF53VW3Y/XDoxt0AApi0wkOv2E3wTKw2/io+rEKJ4g+dHYHAwvT EiTziwmQ==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xv3-00D7LY-5e; Mon, 18 Jan 2021 17:02:23 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 14/27] mm: Add lock_folio Date: Mon, 18 Jan 2021 17:01:35 +0000 Message-Id: <20210118170148.3126186-15-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is like lock_page() but for use by callers who know they have a folio. Convert __lock_page() to be __lock_folio(). This saves one call to compound_head() per contended call to lock_page(). Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 21 +++++++++++++++------ mm/filemap.c | 29 +++++++++++++++-------------- 2 files changed, 30 insertions(+), 20 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 44675104008b..03b67915b0f7 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -618,7 +618,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, return true; } -extern void __lock_page(struct page *page); +void __lock_folio(struct folio *folio); extern int __lock_page_killable(struct page *page); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, @@ -639,13 +639,24 @@ static inline void unlock_page(struct page *page) return unlock_folio(page_folio(page)); } +static inline bool trylock_folio(struct folio *folio) +{ + return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio))); +} + /* * Return true if the page was successfully locked */ static inline int trylock_page(struct page *page) { - page = compound_head(page); - return (likely(!test_and_set_bit_lock(PG_locked, &page->flags))); + return trylock_folio(page_folio(page)); +} + +static inline void lock_folio(struct folio *folio) +{ + might_sleep(); + if (!trylock_folio(folio)) + __lock_folio(folio); } /* @@ -653,9 +664,7 @@ static inline int trylock_page(struct page *page) */ static inline void lock_page(struct page *page) { - might_sleep(); - if (!trylock_page(page)) - __lock_page(page); + lock_folio(page_folio(page)); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 31470b36ac89..167552fad5a6 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1155,7 +1155,7 @@ static void wake_up_page(struct page *page, int bit) */ enum behavior { EXCLUSIVE, /* Hold ref to page and take the bit when woken, like - * __lock_page() waiting on then setting PG_locked. + * __lock_folio() waiting on then setting PG_locked. */ SHARED, /* Hold ref to page and check the bit when woken, like * wait_on_page_writeback() waiting on PG_writeback. @@ -1518,17 +1518,16 @@ void page_endio(struct page *page, bool is_write, int err) EXPORT_SYMBOL_GPL(page_endio); /** - * __lock_page - get a lock on the page, assuming we need to sleep to get it - * @__page: the page to lock + * __lock_folio - Get a lock on the folio, assuming we need to sleep to get it. 
+ * @folio: The folio to lock */ -void __lock_page(struct page *__page) +void __lock_folio(struct folio *folio) { - struct page *page = compound_head(__page); - wait_queue_head_t *q = page_waitqueue(page); - wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, + wait_queue_head_t *q = page_waitqueue(&folio->page); + wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE, EXCLUSIVE); } -EXPORT_SYMBOL(__lock_page); +EXPORT_SYMBOL(__lock_folio); int __lock_page_killable(struct page *__page) { @@ -1582,10 +1581,10 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm, return 0; } } else { - __lock_page(page); + __lock_folio(page_folio(page)); } - return 1; + return 1; } /** @@ -2763,7 +2762,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start, static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, struct file **fpin) { - if (trylock_page(page)) + struct folio *folio = page_folio(page); + + if (trylock_folio(folio)) return 1; /* @@ -2776,7 +2777,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, *fpin = maybe_unlock_mmap_for_io(vmf, *fpin); if (vmf->flags & FAULT_FLAG_KILLABLE) { - if (__lock_page_killable(page)) { + if (__lock_page_killable(&folio->page)) { /* * We didn't have the right flags to drop the mmap_lock, * but all fault_handlers only check for fatal signals @@ -2788,11 +2789,11 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, return 0; } } else - __lock_page(page); + __lock_folio(folio); + return 1; } - /* * Synchronous readahead happens when we don't even find a page in the page * cache at all. We don't want to perform IO under the mmap sem, so if we have From patchwork Mon Jan 18 17:01:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027849 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A6AEEC433E0 for ; Mon, 18 Jan 2021 17:02:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5F132222BB for ; Mon, 18 Jan 2021 17:02:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5F132222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5247D6B00F8; Mon, 18 Jan 2021 12:02:38 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 485696B00FB; Mon, 18 Jan 2021 12:02:38 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 34BD56B00FC; Mon, 18 Jan 2021 12:02:38 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0169.hostedemail.com [216.40.44.169]) by kanga.kvack.org (Postfix) with ESMTP id 1D7396B00FB for ; Mon, 18 Jan 2021 12:02:38 -0500 (EST) Received: from smtpin23.hostedemail.com 
(10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id CE1D42491 for ; Mon, 18 Jan 2021 17:02:37 +0000 (UTC) X-FDA: 77719514754.23.face39_4f16c8f2754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id A604E37608 for ; Mon, 18 Jan 2021 17:02:37 +0000 (UTC) X-HE-Tag: face39_4f16c8f2754a X-Filterd-Recvd-Size: 5323 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=jARN5G1znGLwfSj55+Daqu/+UcQBxuyDpa50QC0pHA4=; b=TelTRaIX5dLsHy2wHOo1Yge7T1 uQOyBAt7fjvJ588wTJqQfIwjCLPbPN/aoVqFJMJD/CFzxf+ClOtNDsX/mirVX9wdzpWj9xBd0NFQR NhOyttxEy87q402zm1sLXz+2XoG2gNPRRTWnrdMxervs00pws/xoZL2H8QtAK0TaudBzrbjvKJ2S9 8NmeFAz2Z82bSgEwxv0o2bR8TSSsktpKvWh7xnTs5lt/WCQsivMTkarGrG8Og0/DY3STwX+8BCToy 8Kl16B8SL9zGib8+DgHNWFl1LKUI6KQyRKSQpzJKpOyQbQXGNea7Qmv2YOTepoTQ23exhtkEM5Cr9 m7y2XHkw==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xv7-00D7Lx-6E; Mon, 18 Jan 2021 17:02:26 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 15/27] mm: Add lock_folio_killable Date: Mon, 18 Jan 2021 17:01:36 +0000 Message-Id: <20210118170148.3126186-16-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is like lock_page_killable() but for use by callers who know they have a folio. Convert __lock_page_killable() to be __lock_folio_killable(). This saves one call to compound_head() per contended call to lock_page_killable(). Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 15 ++++++++++----- mm/filemap.c | 17 +++++++++-------- 2 files changed, 19 insertions(+), 13 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 03b67915b0f7..5260ae7d9196 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -619,7 +619,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, } void __lock_folio(struct folio *folio); -extern int __lock_page_killable(struct page *page); +int __lock_folio_killable(struct folio *folio); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags); @@ -667,6 +667,14 @@ static inline void lock_page(struct page *page) lock_folio(page_folio(page)); } +static inline int lock_folio_killable(struct folio *folio) +{ + might_sleep(); + if (!trylock_folio(folio)) + return __lock_folio_killable(folio); + return 0; +} + /* * lock_page_killable is like lock_page but can be interrupted by fatal * signals. 
It returns 0 if it locked the page and -EINTR if it was @@ -674,10 +682,7 @@ static inline void lock_page(struct page *page) */ static inline int lock_page_killable(struct page *page) { - might_sleep(); - if (!trylock_page(page)) - return __lock_page_killable(page); - return 0; + return lock_folio_killable(page_folio(page)); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 167552fad5a6..31b90b878eba 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1529,14 +1529,13 @@ void __lock_folio(struct folio *folio) } EXPORT_SYMBOL(__lock_folio); -int __lock_page_killable(struct page *__page) +int __lock_folio_killable(struct folio *folio) { - struct page *page = compound_head(__page); - wait_queue_head_t *q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, + wait_queue_head_t *q = page_waitqueue(&folio->page); + return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE, EXCLUSIVE); } -EXPORT_SYMBOL_GPL(__lock_page_killable); +EXPORT_SYMBOL_GPL(__lock_folio_killable); int __lock_page_async(struct page *page, struct wait_page_queue *wait) { @@ -1557,6 +1556,8 @@ int __lock_page_async(struct page *page, struct wait_page_queue *wait) int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags) { + struct folio *folio = page_folio(page); + if (fault_flag_allow_retry_first(flags)) { /* * CAUTION! In this case, mmap_lock is not released @@ -1575,13 +1576,13 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm, if (flags & FAULT_FLAG_KILLABLE) { int ret; - ret = __lock_page_killable(page); + ret = __lock_folio_killable(folio); if (ret) { mmap_read_unlock(mm); return 0; } } else { - __lock_folio(page_folio(page)); + __lock_folio(folio); } return 1; @@ -2777,7 +2778,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, *fpin = maybe_unlock_mmap_for_io(vmf, *fpin); if (vmf->flags & FAULT_FLAG_KILLABLE) { - if (__lock_page_killable(&folio->page)) { + if (__lock_folio_killable(folio)) { /* * We didn't have the right flags to drop the mmap_lock, * but all fault_handlers only check for fatal signals From patchwork Mon Jan 18 17:01:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027851 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1DAAC433DB for ; Mon, 18 Jan 2021 17:02:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7B0C222C7E for ; Mon, 18 Jan 2021 17:02:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7B0C222C7E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4AFFB6B00FB; Mon, 18 Jan 2021 12:02:41 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 411C06B00FD; Mon, 18 Jan 2021 12:02:41 -0500 (EST) X-Delivered-To: 
int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2D7486B00FE; Mon, 18 Jan 2021 12:02:41 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0229.hostedemail.com [216.40.44.229]) by kanga.kvack.org (Postfix) with ESMTP id 1501E6B00FB for ; Mon, 18 Jan 2021 12:02:41 -0500 (EST) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C4B40181AF5C6 for ; Mon, 18 Jan 2021 17:02:40 +0000 (UTC) X-FDA: 77719514880.06.legs01_4f10fed2754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin06.hostedemail.com (Postfix) with ESMTP id 79DC31003583E for ; Mon, 18 Jan 2021 17:02:40 +0000 (UTC) X-HE-Tag: legs01_4f10fed2754a X-Filterd-Recvd-Size: 5231 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf46.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:02:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Rst7vvQbDk2J8HAFzqRmEilF+OmTa2AVu5W9fc8h5jM=; b=DC8KqrAfhlxMJSvebGIfGEU9yj 06tmYxMqCVH4lBWqJC7UejllurrAWZtTTw1y+6cS2KyQCVj6XbJeOnChObnNoYLG7pBo6D0R8KjH/ lIFwYUJxTmcJqNwwR5Zky/6EVefuzC2uxa1X9fds5Ck/hjsjzDazTWyAIys1a9Ke3MtBD//7u9KxC gKHnmpLop6DCAyEkfTOkPXPXp01VNIyZHSi5XT9tMVnTFNgRSfw6in3xPowRQ4exB/nhd9jUPp+cm xOOHlegeSo/bdK0kprTaifwkDwO2fnr6eOBcWP9vDUGwmafyBr32kKPGugE06bByzfMhA9jffsd8r CCRbWVvQ==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1XvH-00D7Me-5u; Mon, 18 Jan 2021 17:02:35 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 16/27] mm: Convert lock_page_async to lock_folio_async Date: Mon, 18 Jan 2021 17:01:37 +0000 Message-Id: <20210118170148.3126186-17-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When the caller already has a folio, this saves a call to compound_head(). If not, the call to compound_head() is merely moved. Signed-off-by: Matthew Wilcox (Oracle) --- fs/io_uring.c | 2 +- include/linux/pagemap.h | 14 +++++++------- mm/filemap.c | 6 +++--- 3 files changed, 11 insertions(+), 11 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 7c48b667954f..52f35e69467f 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -3388,7 +3388,7 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) } /* - * This is our waitqueue callback handler, registered through lock_page_async() + * This is our waitqueue callback handler, registered through lock_folio_async() * when we initially tried to do the IO with the iocb armed our waitqueue. * This gets called when the page is unlocked, and we generally expect that to * happen when the page IO is completed and the page is now uptodate. 
This will diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 5260ae7d9196..44fa7d974aa4 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -620,7 +620,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, void __lock_folio(struct folio *folio); int __lock_folio_killable(struct folio *folio); -extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); +int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags); void unlock_folio(struct folio *folio); @@ -686,18 +686,18 @@ static inline int lock_page_killable(struct page *page) } /* - * lock_page_async - Lock the page, unless this would block. If the page - * is already locked, then queue a callback when the page becomes unlocked. + * lock_folio_async - Lock the folio, unless this would block. If the folio + * is already locked, then queue a callback when the folio becomes unlocked. * This callback can then retry the operation. * - * Returns 0 if the page is locked successfully, or -EIOCBQUEUED if the page + * Returns 0 if the folio is locked successfully, or -EIOCBQUEUED if the folio * was already locked and the callback defined in 'wait' was queued. */ -static inline int lock_page_async(struct page *page, +static inline int lock_folio_async(struct folio *folio, struct wait_page_queue *wait) { - if (!trylock_page(page)) - return __lock_page_async(page, wait); + if (!trylock_folio(folio)) + return __lock_folio_async(folio, wait); return 0; } diff --git a/mm/filemap.c b/mm/filemap.c index 31b90b878eba..95015bc57bb7 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1537,9 +1537,9 @@ int __lock_folio_killable(struct folio *folio) } EXPORT_SYMBOL_GPL(__lock_folio_killable); -int __lock_page_async(struct page *page, struct wait_page_queue *wait) +int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait) { - return __wait_on_page_locked_async(page, wait, true); + return __wait_on_page_locked_async(&folio->page, wait, true); } /* @@ -2177,7 +2177,7 @@ static void shrink_readahead_size_eio(struct file_ra_state *ra) static int lock_page_for_iocb(struct kiocb *iocb, struct page *page) { if (iocb->ki_flags & IOCB_WAITQ) - return lock_page_async(page, iocb->ki_waitq); + return lock_folio_async(page_folio(page), iocb->ki_waitq); else if (iocb->ki_flags & IOCB_NOWAIT) return trylock_page(page) ? 
0 : -EAGAIN; else From patchwork Mon Jan 18 17:01:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027859 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14F3DC433E6 for ; Mon, 18 Jan 2021 17:03:23 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B1ACE222BB for ; Mon, 18 Jan 2021 17:03:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B1ACE222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3E9756B015B; Mon, 18 Jan 2021 12:03:21 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 399386B015C; Mon, 18 Jan 2021 12:03:21 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 28B0F6B0160; Mon, 18 Jan 2021 12:03:21 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id EFE956B015C for ; Mon, 18 Jan 2021 12:03:20 -0500 (EST) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B82FC1DE9 for ; Mon, 18 Jan 2021 17:03:20 +0000 (UTC) X-FDA: 77719516560.15.year71_1c076372754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin15.hostedemail.com (Postfix) with ESMTP id 895A11814B0D6 for ; Mon, 18 Jan 2021 17:03:20 +0000 (UTC) X-HE-Tag: year71_1c076372754a X-Filterd-Recvd-Size: 3289 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf25.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:03:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=+jM+Ji5ilcdBl6ok4GHMueETlpEeNTh2fPWleKHuK0I=; b=Q7siTO+kk9YmPISJopwDDjO/p9 IBLCo461dZLxLXOmbLlhndotjf4rFYGN+SC2vKnnkoXZ1dRveFXlsI1bEaH+2olNlapxkwdGN5h5R wj5zLl/eW7HlnLT+z+nPiM3BO4v3F5M5UMn6ewod9rxUfY4tQXyRnPs+58fSGXLiXKM4KyBaRWOFI A+I/oSklVRWsLH7jQAXst7lon0j+tiK8EU4L/7SJ9l7pAKfBokXbA+SJ2MAqm9a9DwIpA5nm+z79e zUFSP9lerdxp6hRAPIQWcv9Irr4hZAPDbwzhlxYT65QUvbWwDHRFlN35FAru8btGJuXgg3Tv018g6 q8H8Ea8w==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1XvI-00D7NI-91; Mon, 18 Jan 2021 17:02:36 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 17/27] mm/filemap: Convert lock_page_for_iocb to lock_folio_for_iocb Date: Mon, 18 Jan 2021 17:01:38 +0000 Message-Id: <20210118170148.3126186-18-willy@infradead.org> 
X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The callers will eventually all have a folio, but for now do the conversion at the call sites. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 95015bc57bb7..648f78577ab7 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2174,14 +2174,14 @@ static void shrink_readahead_size_eio(struct file_ra_state *ra) ra->ra_pages /= 4; } -static int lock_page_for_iocb(struct kiocb *iocb, struct page *page) +static int lock_folio_for_iocb(struct kiocb *iocb, struct folio *folio) { if (iocb->ki_flags & IOCB_WAITQ) - return lock_folio_async(page_folio(page), iocb->ki_waitq); + return lock_folio_async(folio, iocb->ki_waitq); else if (iocb->ki_flags & IOCB_NOWAIT) - return trylock_page(page) ? 0 : -EAGAIN; + return trylock_folio(folio) ? 0 : -EAGAIN; else - return lock_page_killable(page); + return lock_folio_killable(folio); } static struct page * @@ -2214,7 +2214,7 @@ generic_file_buffered_read_readpage(struct kiocb *iocb, } if (!PageUptodate(page)) { - error = lock_page_for_iocb(iocb, page); + error = lock_folio_for_iocb(iocb, page_folio(page)); if (unlikely(error)) { put_page(page); return ERR_PTR(error); @@ -2287,7 +2287,7 @@ generic_file_buffered_read_pagenotuptodate(struct kiocb *iocb, page_not_up_to_date: /* Get exclusive access to the page ... */ - error = lock_page_for_iocb(iocb, page); + error = lock_folio_for_iocb(iocb, page_folio(page)); if (unlikely(error)) { put_page(page); return ERR_PTR(error); From patchwork Mon Jan 18 17:01:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027857 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8EA1C433DB for ; Mon, 18 Jan 2021 17:03:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 578FB222BB for ; Mon, 18 Jan 2021 17:03:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 578FB222BB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D10936B0103; Mon, 18 Jan 2021 12:03:20 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id CC0D16B015B; Mon, 18 Jan 2021 12:03:20 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BD6676B015C; Mon, 18 Jan 2021 12:03:20 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0195.hostedemail.com [216.40.44.195]) by kanga.kvack.org (Postfix) 
with ESMTP id A7C816B0103 for ; Mon, 18 Jan 2021 12:03:20 -0500 (EST) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 5B0D03642 for ; Mon, 18 Jan 2021 17:03:20 +0000 (UTC) X-FDA: 77719516560.12.tax57_540adc72754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin12.hostedemail.com (Postfix) with ESMTP id 33F9C180078A0 for ; Mon, 18 Jan 2021 17:03:20 +0000 (UTC) X-HE-Tag: tax57_540adc72754a X-Filterd-Recvd-Size: 4029 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf05.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:03:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=XYfT3Eibv2mij2dwdpauz6iOhPLmXhgJe381ycV9MHM=; b=HTw5XSDxVNLp17ewBpi3x/wlyU 59OEvE+ztbRz0pz0yEbv3VMWcu289ZuznLzqrszYwY7WYMUDEfOo8lDGbX1ZsGq9ky25QkeXHXcOL Zj9gu4RQ+hc7IEFd/7x9YAb2kwO7fGMtvsDPZCUGedlVX6+ywqUIFz06bYjiPWAYG+2s3KnNcQyqz BkBDFtDPvInn2XzPVMocXv+fBx6i9a5Aaq230SH/azVRbo3Fa7gaLjJ+WX6DGh3jDL8hNrOLf9gza ZHoG/fmajBbAKsE+jxPW4Vu5VwibIGxajOIvlh0OY63rhW7m3CSoPVN64fdljcSrrr5aGnmxoGeVh zLK3QAGw==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1XvL-00D7NY-Fj; Mon, 18 Jan 2021 17:02:40 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 18/27] mm/filemap: Convert wait_on_page_locked_async to wait_on_folio_locked_async Date: Mon, 18 Jan 2021 17:01:39 +0000 Message-Id: <20210118170148.3126186-19-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This saves a few calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 648f78577ab7..e997f4424ed9 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1338,22 +1338,22 @@ int wait_on_page_bit_killable(struct page *page, int bit_nr) } EXPORT_SYMBOL(wait_on_page_bit_killable); -static int __wait_on_page_locked_async(struct page *page, +static int __wait_on_folio_locked_async(struct folio *folio, struct wait_page_queue *wait, bool set) { - struct wait_queue_head *q = page_waitqueue(page); + struct wait_queue_head *q = page_waitqueue(&folio->page); int ret = 0; - wait->page = page; + wait->page = &folio->page; wait->bit_nr = PG_locked; spin_lock_irq(&q->lock); __add_wait_queue_entry_tail(q, &wait->wait); - SetPageWaiters(page); + SetFolioWaiters(folio); if (set) - ret = !trylock_page(page); + ret = !trylock_folio(folio); else - ret = PageLocked(page); + ret = FolioLocked(folio); /* * If we were successful now, we know we're still on the * waitqueue as we're still under the lock. 
This means it's @@ -1368,12 +1368,12 @@ static int __wait_on_page_locked_async(struct page *page, return ret; } -static int wait_on_page_locked_async(struct page *page, +static int wait_on_folio_locked_async(struct folio *folio, struct wait_page_queue *wait) { - if (!PageLocked(page)) + if (!FolioLocked(folio)) return 0; - return __wait_on_page_locked_async(compound_head(page), wait, false); + return __wait_on_folio_locked_async(folio, wait, false); } /** @@ -1539,7 +1539,7 @@ EXPORT_SYMBOL_GPL(__lock_folio_killable); int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait) { - return __wait_on_page_locked_async(&folio->page, wait, true); + return __wait_on_folio_locked_async(folio, wait, true); } /* @@ -2256,7 +2256,7 @@ generic_file_buffered_read_pagenotuptodate(struct kiocb *iocb, * serialisations and why it's safe. */ if (iocb->ki_flags & IOCB_WAITQ) { - error = wait_on_page_locked_async(page, + error = wait_on_folio_locked_async(page_folio(page), iocb->ki_waitq); } else { error = wait_on_page_locked_killable(page); From patchwork Mon Jan 18 17:01:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DBE6EC433DB for ; Mon, 18 Jan 2021 17:03:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8C48422C9D for ; Mon, 18 Jan 2021 17:03:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8C48422C9D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 12ED86B015C; Mon, 18 Jan 2021 12:03:25 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 03E516B0162; Mon, 18 Jan 2021 12:03:24 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EBCBA6B0164; Mon, 18 Jan 2021 12:03:24 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0234.hostedemail.com [216.40.44.234]) by kanga.kvack.org (Postfix) with ESMTP id D742A6B015C for ; Mon, 18 Jan 2021 12:03:24 -0500 (EST) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 9B928181AEF32 for ; Mon, 18 Jan 2021 17:03:24 +0000 (UTC) X-FDA: 77719516728.19.voice36_5f0d9202754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin19.hostedemail.com (Postfix) with ESMTP id 4529A1ACC27 for ; Mon, 18 Jan 2021 17:03:24 +0000 (UTC) X-HE-Tag: voice36_5f0d9202754a X-Filterd-Recvd-Size: 4818 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf11.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:03:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; 
h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=uqJ8K0YMB9W8U/LhIs0PpD/u/oUu+JAwDIVdGmLFoDY=; b=NfKyfgzBwEydTE3gXbtb31IXe5 LXpuRBDgfzp/cyUX0ia9X9rsq3YOfK8YaIaszR2N0Y8VVyiF9JTd5Yhpf3+Qml05lpji5mnEZn3aa I2m3CivFy9kOANoQLlEG8+IcB1KHAljtYXnf6hsmaVD+6Cr//zJYPT4gcg9WmBn/HrN3xaciWaJjM d0U14EgHXlsakKsu2FzPsKZ4+LkrO5KvraCVLP5TDZhJ9c3HbP1LlXX6iQ3kLmsRe2Xas/6mUvCEC c/VIMrlhBY7WE1+6fc5lUr+R4xOKOFVxjTj63vPlv3S3sYeNNeegiZRLoBRI/qdMnl6XbacwMLnOE HjxpRIvA==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1XvM-00D7No-T1; Mon, 18 Jan 2021 17:02:41 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 19/27] mm/filemap: Convert end_page_writeback to end_folio_writeback Date: Mon, 18 Jan 2021 17:01:40 +0000 Message-Id: <20210118170148.3126186-20-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add a wrapper function for users that are not yet converted to folios. With a distro config, this function shrinks from 213 bytes to 105 bytes due to elimination of repeated calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 6 +++++- mm/filemap.c | 30 +++++++++++++++--------------- 2 files changed, 20 insertions(+), 16 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 44fa7d974aa4..7a79e159307c 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -745,7 +745,11 @@ static inline int wait_on_page_locked_killable(struct page *page) extern void put_and_wait_on_page_locked(struct page *page); void wait_on_page_writeback(struct page *page); -extern void end_page_writeback(struct page *page); +void end_folio_writeback(struct folio *folio); +static inline void end_page_writeback(struct page *page) +{ + return end_folio_writeback(page_folio(page)); +} void wait_for_stable_page(struct page *page); void page_endio(struct page *page, bool is_write, int err); diff --git a/mm/filemap.c b/mm/filemap.c index e997f4424ed9..952457071630 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1143,11 +1143,11 @@ static void wake_up_page_bit(struct page *page, int bit_nr) spin_unlock_irqrestore(&q->lock, flags); } -static void wake_up_page(struct page *page, int bit) +static void wake_up_folio(struct folio *folio, int bit) { - if (!PageWaiters(page)) + if (!FolioWaiters(folio)) return; - wake_up_page_bit(page, bit); + wake_up_page_bit(&folio->page, bit); } /* @@ -1456,10 +1456,10 @@ void unlock_folio(struct folio *folio) EXPORT_SYMBOL(unlock_folio); /** - * end_page_writeback - end writeback against a page - * @page: the page + * end_folio_writeback - End writeback against a page. + * @folio: The page. */ -void end_page_writeback(struct page *page) +void end_folio_writeback(struct folio *folio) { /* * TestClearPageReclaim could be used here but it is an atomic @@ -1468,26 +1468,26 @@ void end_page_writeback(struct page *page) * justify taking an atomic operation penalty at the end of * ever page writeback. 
*/ - if (PageReclaim(page)) { - ClearPageReclaim(page); - rotate_reclaimable_page(page); + if (FolioReclaim(folio)) { + ClearFolioReclaim(folio); + rotate_reclaimable_page(&folio->page); } /* * Writeback does not hold a page reference of its own, relying * on truncation to wait for the clearing of PG_writeback. * But here we must make sure that the page is not freed and - * reused before the wake_up_page(). + * reused before the wake_up_folio(). */ - get_page(page); - if (!test_clear_page_writeback(page)) + get_folio(folio); + if (!test_clear_page_writeback(&folio->page)) BUG(); smp_mb__after_atomic(); - wake_up_page(page, PG_writeback); - put_page(page); + wake_up_folio(folio, PG_writeback); + put_folio(folio); } -EXPORT_SYMBOL(end_page_writeback); +EXPORT_SYMBOL(end_folio_writeback); /* * After completing I/O on a page, call this routine to update the page From patchwork Mon Jan 18 17:01:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027869 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01388C433E6 for ; Mon, 18 Jan 2021 17:03:49 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 93F8E22C9C for ; Mon, 18 Jan 2021 17:03:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 93F8E22C9C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2AFE96B024F; Mon, 18 Jan 2021 12:03:48 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 23BCC8D0018; Mon, 18 Jan 2021 12:03:48 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0B35E6B0257; Mon, 18 Jan 2021 12:03:48 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0121.hostedemail.com [216.40.44.121]) by kanga.kvack.org (Postfix) with ESMTP id EAF276B024F for ; Mon, 18 Jan 2021 12:03:47 -0500 (EST) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id AC130181AF5C7 for ; Mon, 18 Jan 2021 17:03:47 +0000 (UTC) X-FDA: 77719517694.25.crack41_0d0d3e52754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin25.hostedemail.com (Postfix) with ESMTP id ED2D81804E3AF for ; Mon, 18 Jan 2021 17:03:39 +0000 (UTC) X-HE-Tag: crack41_0d0d3e52754a X-Filterd-Recvd-Size: 10502 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf31.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:03:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; 
bh=oKL/0AT1Nju7i0tcqHvT0oZa3hDLpZ7KrfoZxjjiYIw=; b=kWfJepuJxOa/1cruoHb0LdtDkz dwMpHh18tMci0Acgcomcsy/u0vqcknsHxpZzT5IJA6zi6ktj3pt6Dosyb/r3fg6ZZi2sJ2Zi7bYxt r8mwYWlJ9i6yHzt7rP4L+d+UFqDy4UDh6c+5aof+7ZMPb72w5lhqh52YwnCKYh/0tMTWgtVKop2U8 h4UeFpwLv9jU67PuRIKGRFAcqIxbibcHVvMjS8hODdG/PWCrSIDBWGUpO50LmTloLL8uHEteYThxM oZoode8AGbscIzfe456CkFWMYAy6F+Eve4slkPS5z5XNLG9PE06LV9twr3qCCXgNDTYgy8tzmJlUb yODOsHZA==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1Xvs-00D7Tr-BJ; Mon, 18 Jan 2021 17:03:17 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 20/27] mm: Convert wait_on_page_bit to wait_on_folio_bit Date: Mon, 18 Jan 2021 17:01:41 +0000 Message-Id: <20210118170148.3126186-21-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We must deal with folios here otherwise we'll get the wrong waitqueue and fail to receive wakeups. Signed-off-by: Matthew Wilcox (Oracle) --- fs/afs/write.c | 2 +- include/linux/pagemap.h | 14 ++++++----- mm/filemap.c | 54 ++++++++++++++++++----------------------- mm/page-writeback.c | 7 +++--- 4 files changed, 37 insertions(+), 40 deletions(-) diff --git a/fs/afs/write.c b/fs/afs/write.c index c9195fc67fd8..b58e7a69a464 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -852,7 +852,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) #endif if (PageWriteback(vmf->page) && - wait_on_page_bit_killable(vmf->page, PG_writeback) < 0) + wait_on_folio_bit_killable(page_folio(vmf->page), PG_writeback) < 0) return VM_FAULT_RETRY; if (lock_page_killable(vmf->page) < 0) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 7a79e159307c..685f1b394629 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -719,8 +719,8 @@ static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm, * This is exported only for wait_on_page_locked/wait_on_page_writeback, etc., * and should not be used directly. */ -extern void wait_on_page_bit(struct page *page, int bit_nr); -extern int wait_on_page_bit_killable(struct page *page, int bit_nr); +extern void wait_on_folio_bit(struct folio *folio, int bit_nr); +extern int wait_on_folio_bit_killable(struct folio *folio, int bit_nr); /* * Wait for a page to be unlocked. 
@@ -731,15 +731,17 @@ extern int wait_on_page_bit_killable(struct page *page, int bit_nr); */ static inline void wait_on_page_locked(struct page *page) { - if (PageLocked(page)) - wait_on_page_bit(compound_head(page), PG_locked); + struct folio *folio = page_folio(page); + if (FolioLocked(folio)) + wait_on_folio_bit(folio, PG_locked); } static inline int wait_on_page_locked_killable(struct page *page) { - if (!PageLocked(page)) + struct folio *folio = page_folio(page); + if (!FolioLocked(folio)) return 0; - return wait_on_page_bit_killable(compound_head(page), PG_locked); + return wait_on_folio_bit_killable(folio, PG_locked); } extern void put_and_wait_on_page_locked(struct page *page); diff --git a/mm/filemap.c b/mm/filemap.c index 952457071630..7670e0c7dd97 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1070,7 +1070,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, * * So update the flags atomically, and wake up the waiter * afterwards to avoid any races. This store-release pairs - * with the load-acquire in wait_on_page_bit_common(). + * with the load-acquire in wait_on_folio_bit_common(). */ smp_store_release(&wait->flags, flags | WQ_FLAG_WOKEN); wake_up_state(wait->private, mode); @@ -1151,7 +1151,7 @@ static void wake_up_folio(struct folio *folio, int bit) } /* - * A choice of three behaviors for wait_on_page_bit_common(): + * A choice of three behaviors for wait_on_folio_bit_common(): */ enum behavior { EXCLUSIVE, /* Hold ref to page and take the bit when woken, like @@ -1185,9 +1185,10 @@ static inline bool trylock_page_bit_common(struct page *page, int bit_nr, /* How many times do we accept lock stealing from under a waiter? */ int sysctl_page_lock_unfairness = 5; -static inline int wait_on_page_bit_common(wait_queue_head_t *q, - struct page *page, int bit_nr, int state, enum behavior behavior) +static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr, + int state, enum behavior behavior) { + wait_queue_head_t *q = page_waitqueue(&folio->page); int unfairness = sysctl_page_lock_unfairness; struct wait_page_queue wait_page; wait_queue_entry_t *wait = &wait_page.wait; @@ -1196,8 +1197,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, unsigned long pflags; if (bit_nr == PG_locked && - !PageUptodate(page) && PageWorkingset(page)) { - if (!PageSwapBacked(page)) { + !FolioUptodate(folio) && FolioWorkingset(folio)) { + if (!FolioSwapBacked(folio)) { delayacct_thrashing_start(); delayacct = true; } @@ -1207,7 +1208,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, init_wait(wait); wait->func = wake_page_function; - wait_page.page = page; + wait_page.page = &folio->page; wait_page.bit_nr = bit_nr; repeat: @@ -1222,7 +1223,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * Do one last check whether we can get the * page bit synchronously. * - * Do the SetPageWaiters() marking before that + * Do the SetFolioWaiters() marking before that * to let any waker we _just_ missed know they * need to wake us up (otherwise they'll never * even go to the slow case that looks at the @@ -1233,8 +1234,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * lock to avoid races. 
*/ spin_lock_irq(&q->lock); - SetPageWaiters(page); - if (!trylock_page_bit_common(page, bit_nr, wait)) + SetFolioWaiters(folio); + if (!trylock_page_bit_common(&folio->page, bit_nr, wait)) __add_wait_queue_entry_tail(q, wait); spin_unlock_irq(&q->lock); @@ -1244,10 +1245,10 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * see whether the page bit testing has already * been done by the wake function. * - * We can drop our reference to the page. + * We can drop our reference to the folio. */ if (behavior == DROP) - put_page(page); + put_folio(folio); /* * Note that until the "finish_wait()", or until @@ -1284,7 +1285,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * * And if that fails, we'll have to retry this all. */ - if (unlikely(test_and_set_bit(bit_nr, &page->flags))) + if (unlikely(test_and_set_bit(bit_nr, folio_flags(folio)))) goto repeat; wait->flags |= WQ_FLAG_DONE; @@ -1324,19 +1325,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, return wait->flags & WQ_FLAG_WOKEN ? 0 : -EINTR; } -void wait_on_page_bit(struct page *page, int bit_nr) +void wait_on_folio_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); - wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); + wait_on_folio_bit_common(folio, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); } -EXPORT_SYMBOL(wait_on_page_bit); +EXPORT_SYMBOL(wait_on_folio_bit); -int wait_on_page_bit_killable(struct page *page, int bit_nr) +int wait_on_folio_bit_killable(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED); + return wait_on_folio_bit_common(folio, bit_nr, TASK_KILLABLE, SHARED); } -EXPORT_SYMBOL(wait_on_page_bit_killable); +EXPORT_SYMBOL(wait_on_folio_bit_killable); static int __wait_on_folio_locked_async(struct folio *folio, struct wait_page_queue *wait, bool set) @@ -1388,11 +1387,8 @@ static int wait_on_folio_locked_async(struct folio *folio, */ void put_and_wait_on_page_locked(struct page *page) { - wait_queue_head_t *q; - - page = compound_head(page); - q = page_waitqueue(page); - wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, DROP); + wait_on_folio_bit_common(page_folio(page), PG_locked, + TASK_UNINTERRUPTIBLE, DROP); } /** @@ -1523,16 +1519,14 @@ EXPORT_SYMBOL_GPL(page_endio); */ void __lock_folio(struct folio *folio) { - wait_queue_head_t *q = page_waitqueue(&folio->page); - wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE, + wait_on_folio_bit_common(folio, PG_locked, TASK_UNINTERRUPTIBLE, EXCLUSIVE); } EXPORT_SYMBOL(__lock_folio); int __lock_folio_killable(struct folio *folio) { - wait_queue_head_t *q = page_waitqueue(&folio->page); - return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE, + return wait_on_folio_bit_common(folio, PG_locked, TASK_KILLABLE, EXCLUSIVE); } EXPORT_SYMBOL_GPL(__lock_folio_killable); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index eb34d204d4ee..51b4326f0aaa 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2826,9 +2826,10 @@ EXPORT_SYMBOL(__test_set_page_writeback); */ void wait_on_page_writeback(struct page *page) { - while (PageWriteback(page)) { - trace_wait_on_page_writeback(page, page_mapping(page)); - wait_on_page_bit(page, PG_writeback); + struct folio *folio = page_folio(page); + while (FolioWriteback(folio)) { + trace_wait_on_page_writeback(page, folio_mapping(folio)); + wait_on_folio_bit(folio, 
PG_writeback); } } EXPORT_SYMBOL_GPL(wait_on_page_writeback); From patchwork Mon Jan 18 17:01:42 2021 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 21/27] mm: Add wait_for_stable_folio and wait_on_folio_writeback Date: Mon, 18 Jan 2021 17:01:42 +0000 Message-Id:
<20210118170148.3126186-22-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add compatibility wrappers for code which has not yet been converted to use folios. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 12 ++++++++++-- mm/page-writeback.c | 27 +++++++++++++-------------- 2 files changed, 23 insertions(+), 16 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 685f1b394629..619bfc6ea1ff 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -746,13 +746,21 @@ static inline int wait_on_page_locked_killable(struct page *page) extern void put_and_wait_on_page_locked(struct page *page); -void wait_on_page_writeback(struct page *page); +void wait_on_folio_writeback(struct folio *folio); +static inline void wait_on_page_writeback(struct page *page) +{ + return wait_on_folio_writeback(page_folio(page)); +} void end_folio_writeback(struct folio *folio); static inline void end_page_writeback(struct page *page) { return end_folio_writeback(page_folio(page)); } -void wait_for_stable_page(struct page *page); +void wait_for_stable_folio(struct folio *folio); +static inline void wait_for_stable_page(struct page *page) +{ + return wait_for_stable_folio(page_folio(page)); +} void page_endio(struct page *page, bool is_write, int err); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 51b4326f0aaa..908fc7f60ae7 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2822,30 +2822,29 @@ int __test_set_page_writeback(struct page *page, bool keep_write) EXPORT_SYMBOL(__test_set_page_writeback); /* - * Wait for a page to complete writeback + * Wait for a folio to complete writeback */ -void wait_on_page_writeback(struct page *page) +void wait_on_folio_writeback(struct folio *folio) { - struct folio *folio = page_folio(page); while (FolioWriteback(folio)) { - trace_wait_on_page_writeback(page, folio_mapping(folio)); + trace_wait_on_page_writeback(&folio->page, + folio_mapping(folio)); wait_on_folio_bit(folio, PG_writeback); } } -EXPORT_SYMBOL_GPL(wait_on_page_writeback); +EXPORT_SYMBOL_GPL(wait_on_folio_writeback); /** - * wait_for_stable_page() - wait for writeback to finish, if necessary. - * @page: The page to wait on. + * wait_for_stable_folio() - wait for writeback to finish, if necessary. + * @folio: The folio to wait on. * - * This function determines if the given page is related to a backing device - * that requires page contents to be held stable during writeback. If so, then + * This function determines if the given folio is related to a backing device + * that requires folio contents to be held stable during writeback. If so, then * it will wait for any pending writeback to complete. 
*/ -void wait_for_stable_page(struct page *page) +void wait_for_stable_folio(struct folio *folio) { - page = thp_head(page); - if (page->mapping->host->i_sb->s_iflags & SB_I_STABLE_WRITES) - wait_on_page_writeback(page); + if (folio->page.mapping->host->i_sb->s_iflags & SB_I_STABLE_WRITES) + wait_on_folio_writeback(folio); } -EXPORT_SYMBOL_GPL(wait_for_stable_page); +EXPORT_SYMBOL_GPL(wait_for_stable_folio); From patchwork Mon Jan 18 17:01:43 2021 Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat
Linux)) id 1l1Xw0-00D7Vn-TS; Mon, 18 Jan 2021 17:03:24 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 22/27] mm: Add wait_on_folio_locked & wait_on_folio_locked_killable Date: Mon, 18 Jan 2021 17:01:43 +0000 Message-Id: <20210118170148.3126186-23-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Turn wait_on_page_locked() and wait_on_page_locked_killable() into wrappers. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 619bfc6ea1ff..d28b53f91275 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -729,21 +729,29 @@ extern int wait_on_folio_bit_killable(struct folio *folio, int bit_nr); * ie with increased "page->count" so that the page won't * go away during the wait.. */ -static inline void wait_on_page_locked(struct page *page) +static inline void wait_on_folio_locked(struct folio *folio) { - struct folio *folio = page_folio(page); if (FolioLocked(folio)) wait_on_folio_bit(folio, PG_locked); } -static inline int wait_on_page_locked_killable(struct page *page) +static inline int wait_on_folio_locked_killable(struct folio *folio) { - struct folio *folio = page_folio(page); if (!FolioLocked(folio)) return 0; return wait_on_folio_bit_killable(folio, PG_locked); } +static inline void wait_on_page_locked(struct page *page) +{ + wait_on_folio_locked(page_folio(page)); +} + +static inline int wait_on_page_locked_killable(struct page *page) +{ + return wait_on_folio_locked_killable(page_folio(page)); +} + extern void put_and_wait_on_page_locked(struct page *page); void wait_on_folio_writeback(struct folio *folio); From patchwork Mon Jan 18 17:01:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027871 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 77BFBC433DB for ; Mon, 18 Jan 2021 17:03:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1EAB522C9C for ; Mon, 18 Jan 2021 17:03:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1EAB522C9C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 106146B0257; Mon, 18 Jan 2021 12:03:50 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 015E06B0258; Mon, 18 Jan 2021 12:03:49 -0500 (EST) X-Delivered-To: 
int-list-linux-mm@kvack.org From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 23/27] mm: Convert lock_page_or_retry to lock_folio_or_retry Date: Mon, 18 Jan 2021 17:01:44 +0000 Message-Id: <20210118170148.3126186-24-willy@infradead.org> In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> There's already a hidden compound_head() call in trylock_page(), so just make it explicit in the caller, which may later have a folio for its own reasons. This saves a call to compound_head() inside __lock_page_or_retry().
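[Editorial note, not part of the patch: the point made in the commit message above is easier to see with a tiny stand-alone C sketch. The underscore-suffixed names below (compound_head_(), page_folio_(), trylock_page_(), trylock_folio_()) are simplified toy stand-ins for the kernel helpers, not their real implementations; the sketch only models why resolving the head page once in the caller lets a folio-taking callee drop the defensive head lookup that a page-taking callee has to repeat.]

#include <stdio.h>

/* Toy model: a page may be a tail of a compound page; a folio is,
 * by construction, always the head page. */
struct page { struct page *head; int locked; };
struct folio { struct page page; };

/* Stand-in for compound_head(): map any page to its head page. */
static struct page *compound_head_(struct page *page)
{
        return page->head;
}

/* Stand-in for page_folio(): the caller resolves the head once. */
static struct folio *page_folio_(struct page *page)
{
        return (struct folio *)compound_head_(page);
}

/* Old style: the callee must re-derive the head itself. */
static int trylock_page_(struct page *page)
{
        struct page *head = compound_head_(page); /* hidden extra lookup */

        if (head->locked)
                return 0;
        head->locked = 1;
        return 1;
}

/* New style: a folio is already a head page, so no lookup is needed. */
static int trylock_folio_(struct folio *folio)
{
        if (folio->page.locked)
                return 0;
        folio->page.locked = 1;
        return 1;
}

int main(void)
{
        struct folio f = { .page = { .head = &f.page, .locked = 0 } };
        struct page tail = { .head = &f.page, .locked = 0 };
        struct folio *folio = page_folio_(&tail); /* resolve the head once */

        printf("first trylock:  %d\n", trylock_folio_(folio)); /* 1: acquired */
        printf("second trylock: %d\n", trylock_folio_(folio)); /* 0: already held */
        printf("old-style tail: %d\n", trylock_page_(&tail));  /* 0: head is locked */
        return 0;
}

[The real conversion makes the same shift at the type level: once the caller passes page_folio(page), the folio-taking lock helpers can trust they were handed a head page, which is how the call to compound_head() disappears from inside the old __lock_page_or_retry() path.]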
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 10 +++++----- mm/filemap.c | 16 +++++++--------- mm/memory.c | 10 +++++----- 3 files changed, 17 insertions(+), 19 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index d28b53f91275..d287e1a680e8 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -621,7 +621,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, void __lock_folio(struct folio *folio); int __lock_folio_killable(struct folio *folio); int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait); -extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, +int __lock_folio_or_retry(struct folio *folio, struct mm_struct *mm, unsigned int flags); void unlock_folio(struct folio *folio); @@ -702,17 +702,17 @@ static inline int lock_folio_async(struct folio *folio, } /* - * lock_page_or_retry - Lock the page, unless this would block and the + * lock_folio_or_retry - Lock the folio, unless this would block and the * caller indicated that it can handle a retry. * * Return value and mmap_lock implications depend on flags; see - * __lock_page_or_retry(). + * __lock_folio_or_retry(). */ -static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm, +static inline int lock_folio_or_retry(struct folio *folio, struct mm_struct *mm, unsigned int flags) { might_sleep(); - return trylock_page(page) || __lock_page_or_retry(page, mm, flags); + return trylock_folio(folio) || __lock_folio_or_retry(folio, mm, flags); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 7670e0c7dd97..4ece44f694f6 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1538,20 +1538,18 @@ int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait) /* * Return values: - * 1 - page is locked; mmap_lock is still held. - * 0 - page is not locked. + * 1 - folio is locked; mmap_lock is still held. + * 0 - folio is not locked. * mmap_lock has been released (mmap_read_unlock(), unless flags had both * FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in * which case mmap_lock is still held. * * If neither ALLOW_RETRY nor KILLABLE are set, will always return 1 - * with the page locked and the mmap_lock unperturbed. + * with the folio locked and the mmap_lock unperturbed. */ -int __lock_page_or_retry(struct page *page, struct mm_struct *mm, +int __lock_folio_or_retry(struct folio *folio, struct mm_struct *mm, unsigned int flags) { - struct folio *folio = page_folio(page); - if (fault_flag_allow_retry_first(flags)) { /* * CAUTION! In this case, mmap_lock is not released @@ -1562,9 +1560,9 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm, mmap_read_unlock(mm); if (flags & FAULT_FLAG_KILLABLE) - wait_on_page_locked_killable(page); + wait_on_folio_locked_killable(folio); else - wait_on_page_locked(page); + wait_on_folio_locked(folio); return 0; } if (flags & FAULT_FLAG_KILLABLE) { @@ -2749,7 +2747,7 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start, * @page - the page to lock. * @fpin - the pointer to the file we may pin (or is already pinned). * - * This works similar to lock_page_or_retry in that it can drop the mmap_lock. + * This works similar to lock_folio_or_retry in that it can drop the mmap_lock. * It differs in that it actually returns the page locked if it returns 1 and 0 * if it couldn't lock the page. If we did have to drop the mmap_lock then fpin * will point to the pinned file and needs to be fput()'ed at a later point. 
diff --git a/mm/memory.c b/mm/memory.c index 8079c181efff..f5490cafe531 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3352,7 +3352,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) goto out_release; } - locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags); + locked = lock_folio_or_retry(page_folio(page), vma->vm_mm, vmf->flags); delayacct_clear_flag(DELAYACCT_PF_SWAPIN); if (!locked) { @@ -4104,7 +4104,7 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf) * We enter with non-exclusive mmap_lock (to exclude vma changes, * but allow concurrent faults). * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __lock_folio_or_retry(). * If mmap_lock is released, vma may become invalid (for example * by other thread calling munmap()). */ @@ -4338,7 +4338,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) * concurrent faults). * * The mmap_lock may have been released depending on flags and our return value. - * See filemap_fault() and __lock_page_or_retry(). + * See filemap_fault() and __lock_folio_or_retry(). */ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) { @@ -4431,7 +4431,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) * By the time we get here, we already hold the mm semaphore * * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __lock_folio_or_retry(). */ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags) @@ -4587,7 +4587,7 @@ static inline void mm_account_fault(struct pt_regs *regs, * By the time we get here, we already hold the mm semaphore * * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __lock_folio_or_retry(). 
*/ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct pt_regs *regs) From patchwork Mon Jan 18 17:01:45 2021 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 24/27] mm/filemap: Convert wake_up_page_bit to wake_up_folio_bit
Date: Mon, 18 Jan 2021 17:01:45 +0000 Message-Id: <20210118170148.3126186-25-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: All callers have a folio, so use it directly. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 4ece44f694f6..a2d9ee6e78ae 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1089,14 +1089,14 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, return (flags & WQ_FLAG_EXCLUSIVE) != 0; } -static void wake_up_page_bit(struct page *page, int bit_nr) +static void wake_up_folio_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); + wait_queue_head_t *q = page_waitqueue(&folio->page); struct wait_page_key key; unsigned long flags; wait_queue_entry_t bookmark; - key.page = page; + key.page = &folio->page; key.bit_nr = bit_nr; key.page_match = 0; @@ -1131,7 +1131,7 @@ static void wake_up_page_bit(struct page *page, int bit_nr) * page waiters. */ if (!waitqueue_active(q) || !key.page_match) { - ClearPageWaiters(page); + ClearFolioWaiters(folio); /* * It's possible to miss clearing Waiters here, when we woke * our page waiters, but the hashed waitqueue has waiters for @@ -1147,7 +1147,7 @@ static void wake_up_folio(struct folio *folio, int bit) { if (!FolioWaiters(folio)) return; - wake_up_page_bit(&folio->page, bit); + wake_up_folio_bit(folio, bit); } /* @@ -1447,7 +1447,7 @@ void unlock_folio(struct folio *folio) BUILD_BUG_ON(PG_waiters != 7); VM_BUG_ON_FOLIO(!FolioLocked(folio), folio); if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio))) - wake_up_page_bit(&folio->page, PG_locked); + wake_up_folio_bit(folio, PG_locked); } EXPORT_SYMBOL(unlock_folio); From patchwork Mon Jan 18 17:01:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027863 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02357C433E0 for ; Mon, 18 Jan 2021 17:03:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9E9F222C9C for ; Mon, 18 Jan 2021 17:03:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9E9F222C9C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2D56B6B0162; Mon, 18 Jan 2021 12:03:37 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 286F16B020B; Mon, 18 Jan 2021 12:03:37 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org 
(Postfix, from userid 63042) id 19D066B024A; Mon, 18 Jan 2021 12:03:37 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0023.hostedemail.com [216.40.44.23]) by kanga.kvack.org (Postfix) with ESMTP id 04C036B0162 for ; Mon, 18 Jan 2021 12:03:37 -0500 (EST) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id A7E9A180AD801 for ; Mon, 18 Jan 2021 17:03:36 +0000 (UTC) X-FDA: 77719517232.04.tin31_27141032754a Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 6F5AC8002246 for ; Mon, 18 Jan 2021 17:03:36 +0000 (UTC) X-HE-Tag: tin31_27141032754a X-Filterd-Recvd-Size: 4841 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf41.hostedemail.com (Postfix) with ESMTP for ; Mon, 18 Jan 2021 17:03:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=2qoQOefQjZT+2cBU0HKwV7AQGtpSse4IDI44+wTaQRY=; b=CFHhG92YpHczS4DCop1lhWMPQB IV6stWUPh0doznALnfwn3pJa7bsBbZV7RnCEZ/Sp9Tg8qhZrqCdI6eCWziLkerfGF/JA5qw7X9A72 ficE7Yf/jZoPPm/w/uH00ULSV5c+jARgeJN1syeDe5DDbsTcG3pDuz/omofPeyNM7R5h79Vqq0kW9 +ObHUmhJWC1QOY5KMvMp49fmWC7FRTBQw4N4fr4BJbTIgThKoWop2//EqQ9WjDOzEAZYEPJU81psI vJwJQ4GqJfMMKpTTmOEZCpIcKQSQ78GOD5PWwmsH7wh32ZYeGHzQQxEO1k4XGmblkA297BnIH6nUF 2IE43X5Q==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1l1XwD-00D7XC-0o; Mon, 18 Jan 2021 17:03:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v2 25/27] mm: Convert test_clear_page_writeback to test_clear_folio_writeback Date: Mon, 18 Jan 2021 17:01:46 +0000 Message-Id: <20210118170148.3126186-26-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210118170148.3126186-1-willy@infradead.org> References: <20210118170148.3126186-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The one caller of test_clear_page_writeback() already has a folio, so make it clear that test_clear_page_writeback() operates on the entire folio. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/page-flags.h | 2 +- mm/filemap.c | 2 +- mm/page-writeback.c | 18 +++++++++--------- 3 files changed, 11 insertions(+), 11 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index ef0f68320917..b43601f5c338 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -594,7 +594,7 @@ static __always_inline void SetPageUptodate(struct page *page) CLEARPAGEFLAG(Uptodate, uptodate, PF_NO_TAIL) -int test_clear_page_writeback(struct page *page); +int test_clear_folio_writeback(struct folio *folio); int __test_set_page_writeback(struct page *page, bool keep_write); #define test_set_page_writeback(page) \ diff --git a/mm/filemap.c b/mm/filemap.c index a2d9ee6e78ae..2b6caa0f9f93 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1476,7 +1476,7 @@ void end_folio_writeback(struct folio *folio) * reused before the wake_up_folio(). 
*/ get_folio(folio); - if (!test_clear_page_writeback(&folio->page)) + if (!test_clear_folio_writeback(folio)) BUG(); smp_mb__after_atomic(); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 908fc7f60ae7..db8a99e4a3d2 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2719,24 +2719,24 @@ int clear_page_dirty_for_io(struct page *page) } EXPORT_SYMBOL(clear_page_dirty_for_io); -int test_clear_page_writeback(struct page *page) +int test_clear_folio_writeback(struct folio *folio) { - struct address_space *mapping = page_mapping(page); + struct address_space *mapping = folio_mapping(folio); struct mem_cgroup *memcg; struct lruvec *lruvec; int ret; - memcg = lock_page_memcg(page); - lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); + memcg = lock_folio_memcg(folio); + lruvec = mem_cgroup_folio_lruvec(folio, folio_pgdat(folio)); if (mapping && mapping_use_writeback_tags(mapping)) { struct inode *inode = mapping->host; struct backing_dev_info *bdi = inode_to_bdi(inode); unsigned long flags; xa_lock_irqsave(&mapping->i_pages, flags); - ret = TestClearPageWriteback(page); + ret = TestClearFolioWriteback(folio); if (ret) { - __xa_clear_mark(&mapping->i_pages, page_index(page), + __xa_clear_mark(&mapping->i_pages, folio_index(folio), PAGECACHE_TAG_WRITEBACK); if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) { struct bdi_writeback *wb = inode_to_wb(inode); @@ -2752,12 +2752,12 @@ int test_clear_page_writeback(struct page *page) xa_unlock_irqrestore(&mapping->i_pages, flags); } else { - ret = TestClearPageWriteback(page); + ret = TestClearFolioWriteback(folio); } if (ret) { dec_lruvec_state(lruvec, NR_WRITEBACK); - dec_zone_page_state(page, NR_ZONE_WRITE_PENDING); - inc_node_page_state(page, NR_WRITTEN); + dec_zone_folio_stat(folio, NR_ZONE_WRITE_PENDING); + inc_node_folio_stat(folio, NR_WRITTEN); } __unlock_page_memcg(memcg); return ret; From patchwork Mon Jan 18 17:01:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12027875 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6002C433DB for ; Mon, 18 Jan 2021 17:03:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 94B4122C9C for ; Mon, 18 Jan 2021 17:03:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 94B4122C9C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 609ED6B025A; Mon, 18 Jan 2021 12:03:55 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5BC616B025C; Mon, 18 Jan 2021 12:03:55 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4D3B16B025E; Mon, 18 Jan 2021 12:03:55 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by 
From patchwork Mon Jan 18 17:01:47 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12027875
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 26/27] mm/filemap: Convert page wait queues to be folios
Date: Mon, 18 Jan 2021 17:01:47 +0000
Message-Id: <20210118170148.3126186-27-willy@infradead.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>
References: <20210118170148.3126186-1-willy@infradead.org>

Reinforce that if we're waiting for a bit in a struct page, that bit is
actually in the head page, by changing the type from struct page to
struct folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  6 +++---
 mm/filemap.c            | 30 ++++++++++++++++--------------
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d287e1a680e8..cf235fd60478 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -594,13 +594,13 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
 
 /* This has the same layout as wait_bit_key - see fs/cachefiles/rdwr.c */
 struct wait_page_key {
-	struct page *page;
+	struct folio *folio;
 	int bit_nr;
 	int page_match;
 };
 
 struct wait_page_queue {
-	struct page *page;
+	struct folio *folio;
 	int bit_nr;
 	wait_queue_entry_t wait;
 };
@@ -608,7 +608,7 @@ struct wait_page_queue {
 static inline bool wake_page_match(struct wait_page_queue *wait_page,
 					struct wait_page_key *key)
 {
-	if (wait_page->page != key->page)
+	if (wait_page->folio != key->folio)
 		return false;
 	key->page_match = 1;
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 2b6caa0f9f93..803a28f7f718 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -987,11 +987,11 @@ EXPORT_SYMBOL(__page_cache_alloc);
  */
 #define PAGE_WAIT_TABLE_BITS 8
 #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
-static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;
+static wait_queue_head_t folio_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;
 
-static wait_queue_head_t *page_waitqueue(struct page *page)
+static wait_queue_head_t *folio_waitqueue(struct folio *folio)
 {
-	return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
+	return &folio_wait_table[hash_ptr(folio, PAGE_WAIT_TABLE_BITS)];
 }
 
 void __init pagecache_init(void)
@@ -999,7 +999,7 @@ void __init pagecache_init(void)
 	int i;
 
 	for (i = 0; i < PAGE_WAIT_TABLE_SIZE; i++)
-		init_waitqueue_head(&page_wait_table[i]);
+		init_waitqueue_head(&folio_wait_table[i]);
 
 	page_writeback_init();
 }
@@ -1054,10 +1054,11 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 	 */
 	flags = wait->flags;
 	if (flags & WQ_FLAG_EXCLUSIVE) {
-		if (test_bit(key->bit_nr, &key->page->flags))
+		if (test_bit(key->bit_nr, &key->folio->page.flags))
 			return -1;
 		if (flags & WQ_FLAG_CUSTOM) {
-			if (test_and_set_bit(key->bit_nr, &key->page->flags))
+			if (test_and_set_bit(key->bit_nr,
+						&key->folio->page.flags))
 				return -1;
 			flags |= WQ_FLAG_DONE;
 		}
@@ -1091,12 +1092,12 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 
 static void wake_up_folio_bit(struct folio *folio, int bit_nr)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_queue_head_t *q = folio_waitqueue(folio);
 	struct wait_page_key key;
 	unsigned long flags;
 	wait_queue_entry_t bookmark;
 
-	key.page = &folio->page;
+	key.folio = folio;
 	key.bit_nr = bit_nr;
 	key.page_match = 0;
 
@@ -1188,7 +1189,7 @@ int sysctl_page_lock_unfairness = 5;
 static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr,
 	int state, enum behavior behavior)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_queue_head_t *q = folio_waitqueue(folio);
 	int unfairness = sysctl_page_lock_unfairness;
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
@@ -1208,7 +1209,7 @@ static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr,
 
 	init_wait(wait);
 	wait->func = wake_page_function;
-	wait_page.page = &folio->page;
+	wait_page.folio = folio;
 	wait_page.bit_nr = bit_nr;
 
 repeat:
@@ -1340,10 +1341,10 @@ EXPORT_SYMBOL(wait_on_folio_bit_killable);
 static int __wait_on_folio_locked_async(struct folio *folio,
 				struct wait_page_queue *wait, bool set)
 {
-	struct wait_queue_head *q = page_waitqueue(&folio->page);
+	struct wait_queue_head *q = folio_waitqueue(folio);
 	int ret = 0;
 
-	wait->page = &folio->page;
+	wait->folio = folio;
 	wait->bit_nr = PG_locked;
 
 	spin_lock_irq(&q->lock);
@@ -1400,12 +1401,13 @@ void put_and_wait_on_page_locked(struct page *page)
  */
 void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter)
 {
-	wait_queue_head_t *q = page_waitqueue(page);
+	struct folio *folio = page_folio(page);
+	wait_queue_head_t *q = folio_waitqueue(folio);
 	unsigned long flags;
 
 	spin_lock_irqsave(&q->lock, flags);
 	__add_wait_queue_entry_tail(q, waiter);
-	SetPageWaiters(page);
+	SetFolioWaiters(folio);
 	spin_unlock_irqrestore(&q->lock, flags);
 }
 EXPORT_SYMBOL_GPL(add_page_wait_queue);
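
The wait-queue side of this conversion rests on two ideas: waiters are
bucketed by a hash of the folio pointer into a small fixed table, and a
wake-up is only delivered when both the folio and the bit number in the key
agree. The userspace sketch below models just those two ideas under
simplified assumptions: hash_folio() is a toy stand-in for the kernel's
hash_ptr(), and the waiter's side is flattened into the same wait_page_key
struct for brevity rather than using a wait_page_queue.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_WAIT_TABLE_BITS	8
#define PAGE_WAIT_TABLE_SIZE	(1 << PAGE_WAIT_TABLE_BITS)

struct folio {
	unsigned long flags;
};

struct wait_page_key {
	struct folio *folio;
	int bit_nr;
	int page_match;
};

/* Toy pointer hash picking one of the 256 buckets (stand-in for hash_ptr()). */
static unsigned int hash_folio(const struct folio *folio)
{
	uintptr_t val = (uintptr_t)folio;

	return (unsigned int)((val >> 4) * 2654435761u) >> (32 - PAGE_WAIT_TABLE_BITS);
}

/*
 * Loosely mirrors wake_page_match(): a wake-up for the wrong folio is not
 * for us at all; the right folio but the wrong bit still records page_match.
 */
static bool wake_matches(const struct wait_page_key *waiting,
			 struct wait_page_key *key)
{
	if (waiting->folio != key->folio)
		return false;
	key->page_match = 1;
	if (waiting->bit_nr != key->bit_nr)
		return false;
	return true;
}

int main(void)
{
	struct folio a, b;
	struct wait_page_key waiter = { .folio = &a, .bit_nr = 0 };
	struct wait_page_key wake_a = { .folio = &a, .bit_nr = 0 };
	struct wait_page_key wake_b = { .folio = &b, .bit_nr = 0 };

	printf("folio a waits in bucket %u of %d\n",
	       hash_folio(&a), PAGE_WAIT_TABLE_SIZE);
	printf("wake-up keyed on folio a matches: %d\n",
	       wake_matches(&waiter, &wake_a));
	printf("wake-up keyed on folio b matches: %d\n",
	       wake_matches(&waiter, &wake_b));
	return 0;
}

Keying both the table and the match on the folio means every waiter for a
given compound page lands in the same bucket and compares equal, which is
what the type change above reinforces.
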
From patchwork Mon Jan 18 17:01:48 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12027867
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 27/27] cachefiles: Switch to wait_page_key
Date: Mon, 18 Jan 2021 17:01:48 +0000
Message-Id: <20210118170148.3126186-28-willy@infradead.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210118170148.3126186-1-willy@infradead.org>
References: <20210118170148.3126186-1-willy@infradead.org>

Cachefiles was relying on wait_page_key and wait_bit_key having the same
layout, which is fragile.  Now that wait_page_key is exposed in the
pagemap.h header, we can remove that fragility.  Also switch it to use
the folio directly instead of the page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/cachefiles/rdwr.c    | 13 ++++++-------
 include/linux/pagemap.h |  1 -
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index 8bda092e60c5..f64b2d01578b 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -24,22 +24,21 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode,
 		container_of(wait, struct cachefiles_one_read, monitor);
 	struct cachefiles_object *object;
 	struct fscache_retrieval *op = monitor->op;
-	struct wait_bit_key *key = _key;
-	struct page *page = wait->private;
+	struct wait_page_key *key = _key;
+	struct folio *folio = wait->private;
 
 	ASSERT(key);
 
 	_enter("{%lu},%u,%d,{%p,%u}",
 	       monitor->netfs_page->index, mode, sync,
-	       key->flags, key->bit_nr);
+	       key->folio, key->bit_nr);
 
-	if (key->flags != &page->flags ||
-	    key->bit_nr != PG_locked)
+	if (key->folio != folio || key->bit_nr != PG_locked)
 		return 0;
 
-	_debug("--- monitor %p %lx ---", page, page->flags);
+	_debug("--- monitor %p %lx ---", folio, folio->page.flags);
 
-	if (!PageUptodate(page) && !PageError(page)) {
+	if (!FolioUptodate(folio) && !FolioError(folio)) {
 		/* unlocked, not uptodate and not erronous? */
 		_debug("page probably truncated");
 	}
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index cf235fd60478..a0c5345041be 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -592,7 +592,6 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
 	return pgoff;
 }
 
-/* This has the same layout as wait_bit_key - see fs/cachefiles/rdwr.c */
 struct wait_page_key {
 	struct folio *folio;
 	int bit_nr;