From patchwork Tue Dec 8 19:46:43 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 01/11] mm: Introduce struct folio
Date: Tue, 8 Dec 2020 19:46:43 +0000
Message-Id: <20201208194653.19180-2-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

We have trouble keeping track of whether we've already called
compound_head() to ensure we're not operating on a tail page.  Further,
it's never clear whether we intend a struct page to refer to PAGE_SIZE
bytes or page_size(compound_head(page)).

Introduce a new type 'struct folio' that always refers to an entire
(possibly compound) page, and points to the head page (or base page).

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h       |  5 +++++
 include/linux/mm_types.h | 17 +++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d1f64744ace2..7db9a10f084b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -916,6 +916,11 @@ static inline unsigned int compound_order(struct page *page)
 	return page[1].compound_order;
 }
 
+static inline unsigned int folio_order(struct folio *folio)
+{
+	return compound_order(&folio->page);
+}
+
 static inline bool hpage_pincount_available(struct page *page)
 {
 	/*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 65df8abd90bd..d7e487d9998f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -223,6 +223,23 @@ struct page {
 #endif
 } _struct_page_alignment;
 
+/*
+ * A struct folio is either a base (order-0) page or the head page of
+ * a compound page.
+ */
+struct folio {
+	struct page page;
+};
+
+static inline struct folio *page_folio(struct page *page)
+{
+	unsigned long head = READ_ONCE(page->compound_head);
+
+	if (unlikely(head & 1))
+		return (struct folio *)(head - 1);
+	return (struct folio *)page;
+}
+
 static inline atomic_t *compound_mapcount_ptr(struct page *page)
 {
 	return &page[1].compound_mapcount;
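
[Illustration, not part of the patch: assuming the definitions above are in
scope, a caller holding an arbitrary struct page (head or tail) can hop to
the enclosing folio before asking questions that only make sense for a head
page.  The helper name below is invented.]

	static inline unsigned int example_order_of(struct page *page)
	{
		/* page_folio() resolves any page, head or tail, to its folio. */
		struct folio *folio = page_folio(page);

		/* A folio is never a tail page, so this is always well defined. */
		return folio_order(folio);
	}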

From patchwork Tue Dec 8 19:46:44 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 02/11] mm: Add put_folio
Date: Tue, 8 Dec 2020 19:46:44 +0000
Message-Id: <20201208194653.19180-3-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

If we know we have a folio, we can call put_folio() instead of put_page()
and save the overhead of calling compound_head().  Also skips the devmap
checks.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7db9a10f084b..80d38cc9561c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1189,9 +1189,15 @@ static inline __must_check bool try_get_page(struct page *page)
 	return true;
 }
 
+static inline void put_folio(struct folio *folio)
+{
+	if (put_page_testzero(&folio->page))
+		__put_page(&folio->page);
+}
+
 static inline void put_page(struct page *page)
 {
-	page = compound_head(page);
+	struct folio *folio = page_folio(page);
 
 	/*
 	 * For devmap managed pages we need to catch refcount transition from
@@ -1199,13 +1205,12 @@ static inline void put_page(struct page *page)
 	 * need to inform the device driver through callback. See
 	 * include/linux/memremap.h and HMM for details.
 	 */
-	if (page_is_devmap_managed(page)) {
-		put_devmap_managed_page(page);
+	if (page_is_devmap_managed(&folio->page)) {
+		put_devmap_managed_page(&folio->page);
 		return;
 	}
 
-	if (put_page_testzero(page))
-		__put_page(page);
+	put_folio(folio);
 }
 
 /*
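
[Illustration, not part of the patch: a sketch of the intended calling
convention.  The helper name is invented; per the patch description the
folio path deliberately omits the devmap check that put_page() performs.]

	static void example_put(struct page *page)
	{
		struct folio *folio = page_folio(page);

		/* Drops the reference on the head page; no compound_head(),
		 * no devmap check. */
		put_folio(folio);
	}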

From patchwork Tue Dec 8 19:46:45 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 03/11] mm: Add get_folio
Date: Tue, 8 Dec 2020 19:46:45 +0000
Message-Id: <20201208194653.19180-4-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80d38cc9561c..32ac5c14097d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1167,15 +1167,17 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 #define page_ref_zero_or_close_to_overflow(page) \
 	((unsigned int) page_ref_count(page) + 127u <= 127u)
 
+static inline void get_folio(struct folio *folio)
+{
+	/* Getting a page requires an already elevated page->_refcount. */
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(&folio->page),
+			&folio->page);
+	page_ref_inc(&folio->page);
+}
+
 static inline void get_page(struct page *page)
 {
-	page = compound_head(page);
-	/*
-	 * Getting a normal page or the head of a compound page
-	 * requires to already have an elevated page->_refcount.
-	 */
-	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
-	page_ref_inc(page);
+	get_folio(page_folio(page));
 }
 
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
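
[Illustration, not part of the patch: get_folio()/put_folio() pair up the
same way get_page()/put_page() do, but operate on the head page directly.
The helper name is invented.]

	static void example_hold(struct folio *folio)
	{
		get_folio(folio);	/* take a reference on the head page */
		/* ... use the folio ... */
		put_folio(folio);	/* and drop it again */
	}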

From patchwork Tue Dec 8 19:46:46 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 04/11] mm: Create FolioFlags
Date: Tue, 8 Dec 2020 19:46:46 +0000
Message-Id: <20201208194653.19180-5-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

These new functions are the folio analogues of the PageFlags functions.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/page-flags.h | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index ec5d0290e0ee..2c51cd4b3630 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -212,6 +212,12 @@ static inline void page_init_poison(struct page *page, size_t size)
 }
 #endif
 
+static unsigned long *folio_flags(struct folio *folio)
+{
+	VM_BUG_ON_PGFLAGS(PagePoisoned(&folio->page), &folio->page);
+	return &folio->page.flags;
+}
+
 /*
  * Page flags policies wrt compound pages
  *
@@ -260,30 +266,44 @@ static inline void page_init_poison(struct page *page, size_t size)
  * Macros to create function definitions for page flags
  */
 #define TESTPAGEFLAG(uname, lname, policy)				\
+static __always_inline int Folio##uname(struct folio *folio)		\
+	{ return test_bit(PG_##lname, folio_flags(folio)); }		\
 static __always_inline int Page##uname(struct page *page)		\
 	{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
 
 #define SETPAGEFLAG(uname, lname, policy)				\
+static __always_inline void SetFolio##uname(struct folio *folio)	\
+	{ set_bit(PG_##lname, folio_flags(folio)); }			\
 static __always_inline void SetPage##uname(struct page *page)		\
 	{ set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define CLEARPAGEFLAG(uname, lname, policy)				\
+static __always_inline void ClearFolio##uname(struct folio *folio)	\
+	{ clear_bit(PG_##lname, folio_flags(folio)); }			\
 static __always_inline void ClearPage##uname(struct page *page)	\
	{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define __SETPAGEFLAG(uname, lname, policy)				\
+static __always_inline void __SetFolio##uname(struct folio *folio)	\
+	{ __set_bit(PG_##lname, folio_flags(folio)); }			\
 static __always_inline void __SetPage##uname(struct page *page)	\
 	{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define __CLEARPAGEFLAG(uname, lname, policy)				\
+static __always_inline void __ClearFolio##uname(struct folio *folio)	\
+	{ __clear_bit(PG_##lname, folio_flags(folio)); }		\
 static __always_inline void __ClearPage##uname(struct page *page)	\
 	{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define TESTSETFLAG(uname, lname, policy)				\
+static __always_inline int TestSetFolio##uname(struct folio *folio)	\
+	{ return test_and_set_bit(PG_##lname, folio_flags(folio)); }	\
 static __always_inline int TestSetPage##uname(struct page *page)	\
 	{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define TESTCLEARFLAG(uname, lname, policy)				\
+static __always_inline int TestClearFolio##uname(struct folio *folio)	\
+	{ return test_and_clear_bit(PG_##lname, folio_flags(folio)); }	\
 static __always_inline int TestClearPage##uname(struct page *page)	\
 	{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
@@ -302,21 +322,27 @@ static __always_inline int TestClearPage##uname(struct page *page)	\
 	TESTCLEARFLAG(uname, lname, policy)
 
 #define TESTPAGEFLAG_FALSE(uname)					\
+static inline int Folio##uname(const struct folio *folio) { return 0; } \
 static inline int Page##uname(const struct page *page) { return 0; }
 
 #define SETPAGEFLAG_NOOP(uname)						\
+static inline void SetFolio##uname(struct folio *folio) { }		\
 static inline void SetPage##uname(struct page *page) { }
 
 #define CLEARPAGEFLAG_NOOP(uname)					\
+static inline void ClearFolio##uname(struct folio *folio) { }		\
 static inline void ClearPage##uname(struct page *page) { }
 
 #define __CLEARPAGEFLAG_NOOP(uname)					\
+static inline void __ClearFolio##uname(struct folio *folio) { }	\
 static inline void __ClearPage##uname(struct page *page) { }
 
 #define TESTSETFLAG_FALSE(uname)					\
+static inline int TestSetFolio##uname(struct folio *folio) { return 0; } \
 static inline int TestSetPage##uname(struct page *page) { return 0; }
 
 #define TESTCLEARFLAG_FALSE(uname)					\
+static inline int TestClearFolio##uname(struct folio *folio) { return 0; } \
 static inline int TestClearPage##uname(struct page *page) { return 0; }
 
 #define PAGEFLAG_FALSE(uname) TESTPAGEFLAG_FALSE(uname)			\
@@ -509,11 +535,10 @@ TESTPAGEFLAG_FALSE(Ksm)
 
 u64 stable_page_flags(struct page *page);
 
-static inline int PageUptodate(struct page *page)
+static inline int FolioUptodate(struct folio *folio)
 {
 	int ret;
-	page = compound_head(page);
-	ret = test_bit(PG_uptodate, &(page)->flags);
+	ret = test_bit(PG_uptodate, folio_flags(folio));
 	/*
 	 * Must ensure that the data we read out of the page is loaded
 	 * _after_ we've loaded page->flags to check for PageUptodate.
@@ -528,6 +553,11 @@ static inline int PageUptodate(struct page *page)
 	return ret;
 }
 
+static inline int PageUptodate(struct page *page)
+{
+	return FolioUptodate(page_folio(page));
+}
+
 static __always_inline void __SetPageUptodate(struct page *page)
 {
 	VM_BUG_ON_PAGE(PageTail(page), page);
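
[Illustration, not part of the patch: with the macros above, every existing
page-flag declaration also emits a Folio variant, so a call site that holds
a folio can test flags without compound_head().  FolioUptodate() is defined
in this patch and FolioLocked() is generated from PG_locked; the helper name
is invented.]

	static bool example_ready(struct folio *folio)
	{
		/* Folio analogues of PageUptodate()/PageLocked(). */
		return FolioUptodate(folio) && !FolioLocked(folio);
	}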

From patchwork Tue Dec 8 19:46:47 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 05/11] mm: Add unlock_folio
Date: Tue, 8 Dec 2020 19:46:47 +0000
Message-Id: <20201208194653.19180-6-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

Convert unlock_page() to call unlock_folio().  By using a folio we avoid
doing a repeated compound_head().  This shortens the function from 120
bytes to 76 bytes.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 16 +++++++++++++++-
 mm/filemap.c            | 27 ++++++++++-----------------
 2 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 46d4b1704770..64ae1bb62765 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -588,7 +588,21 @@ extern int __lock_page_killable(struct page *page);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
-extern void unlock_page(struct page *page);
+extern void unlock_folio(struct folio *folio);
+
+/**
+ * unlock_page - Unlock a locked page.
+ * @page: The page.
+ *
+ * Unlocks the page and wakes up any thread sleeping on the page lock.
+ *
+ * Context: May be called from interrupt or process context.  May not be
+ * called from NMI context.
+ */
+static inline void unlock_page(struct page *page)
+{
+	return unlock_folio(page_folio(page));
+}
 
 /*
  * Return true if the page was successfully locked
diff --git a/mm/filemap.c b/mm/filemap.c
index 78090ee08ac2..de8372307b33 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1443,29 +1443,22 @@ static inline bool clear_bit_unlock_is_negative_byte(long nr, volatile void *mem
 #endif
 
 /**
- * unlock_page - unlock a locked page
- * @page: the page
+ * unlock_folio - Unlock a locked folio.
+ * @folio: The folio.
 *
- * Unlocks the page and wakes up sleepers in wait_on_page_locked().
- * Also wakes sleepers in wait_on_page_writeback() because the wakeup
- * mechanism between PageLocked pages and PageWriteback pages is shared.
- * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
+ * Unlocks the folio and wakes up any thread sleeping on the page lock.
 *
- * Note that this depends on PG_waiters being the sign bit in the byte
- * that contains PG_locked - thus the BUILD_BUG_ON(). That allows us to
- * clear the PG_locked bit and test PG_waiters at the same time fairly
- * portably (architectures that do LL/SC can test any bit, while x86 can
- * test the sign bit).
+ * Context: May be called from interrupt or process context.  May not be
+ * called from NMI context.
 */
-void unlock_page(struct page *page)
+void unlock_folio(struct folio *folio)
 {
 	BUILD_BUG_ON(PG_waiters != 7);
-	page = compound_head(page);
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags))
-		wake_up_page_bit(page, PG_locked);
+	VM_BUG_ON_PAGE(!FolioLocked(folio), &folio->page);
+	if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio)))
+		wake_up_page_bit(&folio->page, PG_locked);
 }
-EXPORT_SYMBOL(unlock_page);
+EXPORT_SYMBOL(unlock_folio);
 
 /**
  * end_page_writeback - end writeback against a page
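
[Illustration, not part of the patch: a caller that finished I/O on a folio
it had locked can release the lock directly.  The helper name is invented.]

	static void example_end_io(struct folio *folio)
	{
		/* ... I/O on the locked folio has completed ... */
		unlock_folio(folio);	/* clears PG_locked, wakes any waiters */
	}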

From patchwork Tue Dec 8 19:46:48 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 06/11] mm: Add lock_folio
Date: Tue, 8 Dec 2020 19:46:48 +0000
Message-Id: <20201208194653.19180-7-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

This is like lock_page() but for use by callers who know they have a folio.
Convert __lock_page() to be __lock_folio().  This saves one call to
compound_head() per contended call to lock_page().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 21 +++++++++++++++------
 mm/filemap.c            | 29 +++++++++++++++--------------
 2 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 64ae1bb62765..1d4a1828a434 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -583,7 +583,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 	return true;
 }
 
-extern void __lock_page(struct page *page);
+extern void __lock_folio(struct folio *folio);
 extern int __lock_page_killable(struct page *page);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
@@ -604,13 +604,24 @@ static inline void unlock_page(struct page *page)
 	return unlock_folio(page_folio(page));
 }
 
+static inline bool trylock_folio(struct folio *folio)
+{
+	return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio)));
+}
+
 /*
  * Return true if the page was successfully locked
 */
 static inline int trylock_page(struct page *page)
 {
-	page = compound_head(page);
-	return (likely(!test_and_set_bit_lock(PG_locked, &page->flags)));
+	return trylock_folio(page_folio(page));
+}
+
+static inline void lock_folio(struct folio *folio)
+{
+	might_sleep();
+	if (!trylock_folio(folio))
+		__lock_folio(folio);
 }
 
 /*
@@ -618,9 +629,7 @@ static inline int trylock_page(struct page *page)
 */
 static inline void lock_page(struct page *page)
 {
-	might_sleep();
-	if (!trylock_page(page))
-		__lock_page(page);
+	lock_folio(page_folio(page));
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index de8372307b33..8e87906f5dd6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1160,7 +1160,7 @@ static void wake_up_page(struct page *page, int bit)
 */
 enum behavior {
 	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
-			 * __lock_page() waiting on then setting PG_locked.
+			 * __lock_folio() waiting on then setting PG_locked.
 			 */
 	SHARED,		/* Hold ref to page and check the bit when woken, like
 			 * wait_on_page_writeback() waiting on PG_writeback.
@@ -1523,17 +1523,16 @@ void page_endio(struct page *page, bool is_write, int err)
 EXPORT_SYMBOL_GPL(page_endio);
 
 /**
- * __lock_page - get a lock on the page, assuming we need to sleep to get it
- * @__page: the page to lock
+ * __lock_folio - Get a lock on the folio, assuming we need to sleep to get it.
+ * @folio: The folio to lock
 */
-void __lock_page(struct page *__page)
+void __lock_folio(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE,
 				EXCLUSIVE);
 }
-EXPORT_SYMBOL(__lock_page);
+EXPORT_SYMBOL(__lock_folio);
 
 int __lock_page_killable(struct page *__page)
 {
@@ -1587,10 +1586,10 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			return 0;
 		}
 	} else {
-		__lock_page(page);
+		__lock_folio(page_folio(page));
 	}
-	return 1;
 
+	return 1;
 }
 
 /**
@@ -2764,7 +2763,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 				     struct file **fpin)
 {
-	if (trylock_page(page))
+	struct folio *folio = page_folio(page);
+
+	if (trylock_folio(folio))
 		return 1;
 
 	/*
@@ -2777,7 +2778,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(page)) {
+		if (__lock_page_killable(&folio->page)) {
 			/*
 			 * We didn't have the right flags to drop the mmap_lock,
 			 * but all fault_handlers only check for fatal signals
@@ -2789,11 +2790,11 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 			return 0;
 		}
 	} else
-		__lock_page(page);
+		__lock_folio(folio);
+
 	return 1;
 }
-
 /*
 * Synchronous readahead happens when we don't even find a page in the page
 * cache at all.  We don't want to perform IO under the mmap sem, so if we have
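
[Illustration, not part of the patch: the uncontended path is the inline
trylock_folio(); only contended callers sleep in __lock_folio().  The helper
name is invented.]

	static void example_with_lock(struct folio *folio)
	{
		lock_folio(folio);	/* sleeps in __lock_folio() if contended */
		/* ... work on the locked folio ... */
		unlock_folio(folio);
	}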

From patchwork Tue Dec 8 19:46:49 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 07/11] mm: Add lock_folio_killable
Date: Tue, 8 Dec 2020 19:46:49 +0000
Message-Id: <20201208194653.19180-8-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

This is like lock_page_killable() but for use by callers who know they
have a folio.  Convert __lock_page_killable() to be __lock_folio_killable().
This saves one call to compound_head() per contended call to
lock_page_killable().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 15 ++++++++++-----
 mm/filemap.c            | 17 +++++++++--------
 2 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 1d4a1828a434..060faeb8d701 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -584,7 +584,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 }
 
 extern void __lock_folio(struct folio *folio);
-extern int __lock_page_killable(struct page *page);
+extern int __lock_folio_killable(struct folio *folio);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
@@ -632,6 +632,14 @@ static inline void lock_page(struct page *page)
 	lock_folio(page_folio(page));
 }
 
+static inline int lock_folio_killable(struct folio *folio)
+{
+	might_sleep();
+	if (!trylock_folio(folio))
+		return __lock_folio_killable(folio);
+	return 0;
+}
+
 /*
  * lock_page_killable is like lock_page but can be interrupted by fatal
  * signals.  It returns 0 if it locked the page and -EINTR if it was
@@ -639,10 +647,7 @@ static inline void lock_page(struct page *page)
 */
 static inline int lock_page_killable(struct page *page)
 {
-	might_sleep();
-	if (!trylock_page(page))
-		return __lock_page_killable(page);
-	return 0;
+	return lock_folio_killable(page_folio(page));
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 8e87906f5dd6..50535b21b452 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1534,14 +1534,13 @@ void __lock_folio(struct folio *folio)
 }
 EXPORT_SYMBOL(__lock_folio);
 
-int __lock_page_killable(struct page *__page)
+int __lock_folio_killable(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE,
 					EXCLUSIVE);
 }
-EXPORT_SYMBOL_GPL(__lock_page_killable);
+EXPORT_SYMBOL_GPL(__lock_folio_killable);
 
 int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 {
@@ -1562,6 +1561,8 @@ int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			 unsigned int flags)
 {
+	struct folio *folio = page_folio(page);
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
@@ -1580,13 +1581,13 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 	if (flags & FAULT_FLAG_KILLABLE) {
 		int ret;
 
-		ret = __lock_page_killable(page);
+		ret = __lock_folio_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
 			return 0;
 		}
 	} else {
-		__lock_folio(page_folio(page));
+		__lock_folio(folio);
 	}
 
 	return 1;
@@ -2778,7 +2779,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(&folio->page)) {
+		if (__lock_folio_killable(folio)) {
 			/*
 			 * We didn't have the right flags to drop the mmap_lock,
 			 * but all fault_handlers only check for fatal signals
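
[Illustration, not part of the patch: lock_folio_killable() keeps the same
return convention as lock_page_killable(), 0 on success and -EINTR when a
fatal signal interrupts the wait.  The helper name is invented.]

	static int example_lock_or_bail(struct folio *folio)
	{
		int err = lock_folio_killable(folio);

		if (err)
			return err;	/* -EINTR: a fatal signal arrived */
		/* ... work on the locked folio ... */
		unlock_folio(folio);
		return 0;
	}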

From patchwork Tue Dec 8 19:46:50 2020
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [RFC PATCH 08/11] mm/filemap: Convert end_page_writeback to use a folio
Date: Tue, 8 Dec 2020 19:46:50 +0000
Message-Id: <20201208194653.19180-9-willy@infradead.org>
In-Reply-To: <20201208194653.19180-1-willy@infradead.org>
References: <20201208194653.19180-1-willy@infradead.org>

With my config, this function shrinks from 480 bytes to 240 bytes due to
elimination of repeated calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 50535b21b452..f1b65f777539 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1148,11 +1148,11 @@ static void wake_up_page_bit(struct page *page, int bit_nr)
 	spin_unlock_irqrestore(&q->lock, flags);
 }
 
-static void wake_up_page(struct page *page, int bit)
+static void wake_up_folio(struct folio *folio, int bit)
 {
-	if (!PageWaiters(page))
+	if (!FolioWaiters(folio))
 		return;
-	wake_up_page_bit(page, bit);
+	wake_up_page_bit(&folio->page, bit);
 }
 
 /*
@@ -1466,6 +1466,8 @@ EXPORT_SYMBOL(unlock_folio);
 */
 void end_page_writeback(struct page *page)
 {
+	struct folio *folio = page_folio(page);
+
 	/*
 	 * TestClearPageReclaim could be used here but it is an atomic
 	 * operation and overkill in this particular case. Failing to
 	 * justify taking an atomic operation penalty at the end of
 	 * ever page writeback.
 	 */
-	if (PageReclaim(page)) {
-		ClearPageReclaim(page);
-		rotate_reclaimable_page(page);
+	if (FolioReclaim(folio)) {
+		ClearFolioReclaim(folio);
+		rotate_reclaimable_page(&folio->page);
 	}
 
 	/*
@@ -1484,13 +1486,13 @@
 	 * But here we must make sure that the page is not freed and
 	 * reused before the wake_up_page().
 	 */
-	get_page(page);
-	if (!test_clear_page_writeback(page))
+	get_folio(folio);
+	if (!test_clear_page_writeback(&folio->page))
 		BUG();
 
 	smp_mb__after_atomic();
-	wake_up_page(page, PG_writeback);
-	put_page(page);
+	wake_up_folio(folio, PG_writeback);
+	put_folio(folio);
 }
 EXPORT_SYMBOL(end_page_writeback);
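
[Illustration, not part of the patch: a condensed sketch of the pattern the
conversion preserves, pinning the folio across the writeback-clear and the
wakeup so it cannot be freed and reused in between.  wake_up_folio() is
local to mm/filemap.c, so this shows only the shape of the code; the helper
name is invented.]

	static void example_end_writeback(struct folio *folio)
	{
		get_folio(folio);		/* keep the folio alive ... */
		/* ... clear PG_writeback here ... */
		smp_mb__after_atomic();
		wake_up_folio(folio, PG_writeback);
		put_folio(folio);		/* ... until waiters are woken */
	}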
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [RFC PATCH 09/11] mm/filemap: Convert mapping_get_entry and pagecache_get_page to folio Date: Tue, 8 Dec 2020 19:46:51 +0000 Message-Id: <20201208194653.19180-10-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20201208194653.19180-1-willy@infradead.org> References: <20201208194653.19180-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert mapping_get_entry() to return a folio and convert pagecache_get_page() to use the folio where possible. The seemingly dangerous cast of a page pointer to a folio pointer is safe because __page_cache_alloc() allocates an order-0 page, which is a folio by definition. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 45 ++++++++++++++++++++++++--------------------- 1 file changed, 24 insertions(+), 21 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index f1b65f777539..56ff6aa24265 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1673,33 +1673,33 @@ EXPORT_SYMBOL(page_cache_prev_miss); * @index: The page cache index. * * Looks up the page cache slot at @mapping & @offset. If there is a - * page cache page, the head page is returned with an increased refcount. + * page cache page, the folio is returned with an increased refcount. * * If the slot holds a shadow entry of a previously evicted page, or a * swap entry from shmem/tmpfs, it is returned. * - * Return: The head page or shadow entry, %NULL if nothing is found. + * Return: The folio or shadow entry, %NULL if nothing is found. */ -static struct page *mapping_get_entry(struct address_space *mapping, +static struct folio *mapping_get_entry(struct address_space *mapping, pgoff_t index) { XA_STATE(xas, &mapping->i_pages, index); - struct page *page; + struct folio *folio; rcu_read_lock(); repeat: xas_reset(&xas); - page = xas_load(&xas); - if (xas_retry(&xas, page)) + folio = xas_load(&xas); + if (xas_retry(&xas, folio)) goto repeat; /* * A shadow entry of a recently evicted page, or a swap entry from * shmem/tmpfs. Return it without attempting to raise page count. */ - if (!page || xa_is_value(page)) + if (!folio || xa_is_value(folio)) goto out; - if (!page_cache_get_speculative(page)) + if (!page_cache_get_speculative(&folio->page)) goto repeat; /* @@ -1707,14 +1707,14 @@ static struct page *mapping_get_entry(struct address_space *mapping, * This is part of the lockless pagecache protocol. See * include/linux/pagemap.h for details. 
*/ - if (unlikely(page != xas_reload(&xas))) { - put_page(page); + if (unlikely(folio != xas_reload(&xas))) { + put_folio(folio); goto repeat; } out: rcu_read_unlock(); - return page; + return folio; } /** @@ -1754,11 +1754,13 @@ static struct page *mapping_get_entry(struct address_space *mapping, struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, int fgp_flags, gfp_t gfp_mask) { + struct folio *folio; struct page *page; repeat: - page = mapping_get_entry(mapping, index); - if (xa_is_value(page)) { + folio = mapping_get_entry(mapping, index); + page = &folio->page; + if (xa_is_value(folio)) { if (fgp_flags & FGP_ENTRY) return page; page = NULL; @@ -1768,18 +1770,18 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, if (fgp_flags & FGP_LOCK) { if (fgp_flags & FGP_NOWAIT) { - if (!trylock_page(page)) { - put_page(page); + if (!trylock_folio(folio)) { + put_folio(folio); return NULL; } } else { - lock_page(page); + lock_folio(folio); } /* Has the page been truncated? */ if (unlikely(page->mapping != mapping)) { - unlock_page(page); - put_page(page); + unlock_folio(folio); + put_folio(folio); goto repeat; } VM_BUG_ON_PAGE(!thp_contains(page, index), page); @@ -1806,17 +1808,18 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, page = __page_cache_alloc(gfp_mask); if (!page) return NULL; + folio = (struct folio *)page; if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP)))) fgp_flags |= FGP_LOCK; /* Init accessed so avoid atomic mark_page_accessed later */ if (fgp_flags & FGP_ACCESSED) - __SetPageReferenced(page); + __SetFolioReferenced(folio); err = add_to_page_cache_lru(page, mapping, index, gfp_mask); if (unlikely(err)) { - put_page(page); + put_folio(folio); page = NULL; if (err == -EEXIST) goto repeat; @@ -1827,7 +1830,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, * an unlocked page. 
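For context on how the FGP flags above are usually exercised, here is a hedged caller sketch. The flag combination and gfp choice are illustrative rather than taken from this series; FGP_CREAT, mapping_gfp_mask() and the unlock/put calls are the existing page-based helpers, not something this patch introduces.

	/*
	 * Sketch of a caller: look up or create the cache page at @index,
	 * returned locked.  Illustrative only.
	 */
	struct page *page;

	page = pagecache_get_page(mapping, index,
				  FGP_LOCK | FGP_CREAT | FGP_ACCESSED,
				  mapping_gfp_mask(mapping));
	if (!page)
		return -ENOMEM;
	/* ... operate on the locked page ... */
	unlock_page(page);
	put_page(page);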
*/ if (page && (fgp_flags & FGP_FOR_MMAP)) - unlock_page(page); + unlock_folio(folio); } return page; From patchwork Tue Dec 8 19:46:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11959481 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B884C4361B for ; Tue, 8 Dec 2020 20:11:54 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5A36523A02 for ; Tue, 8 Dec 2020 20:11:53 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5A36523A02 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9CD2F6B005C; Tue, 8 Dec 2020 15:11:52 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 97F946B005D; Tue, 8 Dec 2020 15:11:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 86CE26B0068; Tue, 8 Dec 2020 15:11:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0108.hostedemail.com [216.40.44.108]) by kanga.kvack.org (Postfix) with ESMTP id 6E21D6B005C for ; Tue, 8 Dec 2020 15:11:52 -0500 (EST) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 14AA63628 for ; Tue, 8 Dec 2020 20:11:52 +0000 (UTC) X-FDA: 77571210864.25.news48_6105ecc273e9 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin25.hostedemail.com (Postfix) with ESMTP id E23281804E3A9 for ; Tue, 8 Dec 2020 20:11:51 +0000 (UTC) X-HE-Tag: news48_6105ecc273e9 X-Filterd-Recvd-Size: 8746 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf44.hostedemail.com (Postfix) with ESMTP for ; Tue, 8 Dec 2020 20:11:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ClESjqaT/WyKY8JgQJAVIdt0CjT/HSTWaZz8AcFWYAo=; b=o9K3XD65ndELUjgwtRI6X78kCn pgPJ2JzAu/M2M4XfJZEicfFjbBbMsExdSvcyXP9VUDROiDLkNk3hEQoxjajuDI4G6gzT3X9NcwPiv gEFmsu4+irsx2SWC0pJaLT8Mt8QsunlAwOgplqvsvm7/Fyw/2gh+SyRoeEAyolC5XjWAyhe/hQG7o MaByImw+nH8vb/UeAFKlVhUZXtBG2ft0tVkvWZoaUg+RVLG7NbeDBurtzxuW43qjqMHlK/PVXSss2 G3e1v6BvHnwxeOOFk4/2i3l7flmSVDizy6gwjPrKhPGsl6lebH2j4Y0FTA+z44QIkz6XOC8k9y5pi +hSwXGKw==; Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1kmiwu-00051C-0T; Tue, 08 Dec 2020 19:47:00 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [RFC PATCH 10/11] mm/filemap: Add folio_add_to_page_cache Date: Tue, 8 Dec 2020 19:46:52 +0000 
Message-Id: <20201208194653.19180-11-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20201208194653.19180-1-willy@infradead.org> References: <20201208194653.19180-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pages being added to the page cache should already be folios, so turn add_to_page_cache_lru() into a wrapper. Saves hundreds of bytes of text. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 13 +++++++-- mm/filemap.c | 62 ++++++++++++++++++++--------------------- 2 files changed, 41 insertions(+), 34 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 060faeb8d701..3bc56b3aa384 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -778,9 +778,9 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size) } int add_to_page_cache_locked(struct page *page, struct address_space *mapping, - pgoff_t index, gfp_t gfp_mask); -int add_to_page_cache_lru(struct page *page, struct address_space *mapping, - pgoff_t index, gfp_t gfp_mask); + pgoff_t index, gfp_t gfp); +int folio_add_to_page_cache(struct folio *folio, struct address_space *mapping, + pgoff_t index, gfp_t gfp); extern void delete_from_page_cache(struct page *page); extern void __delete_from_page_cache(struct page *page, void *shadow); int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask); @@ -805,6 +805,13 @@ static inline int add_to_page_cache(struct page *page, return error; } +static inline int add_to_page_cache_lru(struct page *page, + struct address_space *mapping, pgoff_t index, gfp_t gfp) +{ + return folio_add_to_page_cache((struct folio *)page, mapping, + index, gfp); +} + /** * struct readahead_control - Describes a readahead request. 
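The wrapper above is what makes the conversion incremental: the old entry point stays source-compatible for unconverted callers while only one out-of-line body remains, which is where the text savings come from. A minimal stand-alone illustration of that pattern follows; it is not the kernel code itself, just the shape of it.

/*
 * Stand-alone illustration of the conversion pattern: the new function
 * takes the new type, and the old entry point becomes a trivial inline
 * wrapper, so existing callers keep compiling while only one
 * out-of-line body exists.
 */
#include <stdio.h>

struct page { int id; };

/* The new type wraps the old one, as struct folio wraps struct page. */
struct folio { struct page page; };

/* New interface: one shared out-of-line implementation. */
static int folio_insert(struct folio *folio)
{
	printf("inserting %d\n", folio->page.id);
	return 0;
}

/* Old interface preserved as an inline wrapper around the new one. */
static inline int page_insert(struct page *page)
{
	return folio_insert((struct folio *)page);
}

int main(void)
{
	struct folio f = { .page = { .id = 1 } };

	folio_insert(&f);		/* converted caller */
	page_insert(&f.page);		/* legacy caller, same code path */
	return 0;
}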
* diff --git a/mm/filemap.c b/mm/filemap.c index 56ff6aa24265..297144524f58 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -828,25 +828,25 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask) } EXPORT_SYMBOL_GPL(replace_page_cache_page); -static noinline int __add_to_page_cache_locked(struct page *page, +static noinline int __add_to_page_cache_locked(struct folio *folio, struct address_space *mapping, - pgoff_t offset, gfp_t gfp, + pgoff_t index, gfp_t gfp, void **shadowp) { - XA_STATE(xas, &mapping->i_pages, offset); - int huge = PageHuge(page); + XA_STATE(xas, &mapping->i_pages, index); + int huge = PageHuge(&folio->page); int error; - VM_BUG_ON_PAGE(!PageLocked(page), page); - VM_BUG_ON_PAGE(PageSwapBacked(page), page); + VM_BUG_ON_PAGE(!FolioLocked(folio), &folio->page); + VM_BUG_ON_PAGE(FolioSwapBacked(folio), &folio->page); mapping_set_update(&xas, mapping); - get_page(page); - page->mapping = mapping; - page->index = offset; + get_folio(folio); + folio->page.mapping = mapping; + folio->page.index = index; - if (!huge && !page_is_secretmem(page)) { - error = mem_cgroup_charge(page, current->mm, gfp); + if (!huge && !page_is_secretmem(&folio->page)) { + error = mem_cgroup_charge(&folio->page, current->mm, gfp); if (error) goto error; } @@ -857,7 +857,7 @@ static noinline int __add_to_page_cache_locked(struct page *page, unsigned int order = xa_get_order(xas.xa, xas.xa_index); void *entry, *old = NULL; - if (order > thp_order(page)) + if (order > folio_order(folio)) xas_split_alloc(&xas, xa_load(xas.xa, xas.xa_index), order, gfp); xas_lock_irq(&xas); @@ -874,13 +874,13 @@ static noinline int __add_to_page_cache_locked(struct page *page, *shadowp = old; /* entry may have been split before we acquired lock */ order = xa_get_order(xas.xa, xas.xa_index); - if (order > thp_order(page)) { + if (order > folio_order(folio)) { xas_split(&xas, old, order); xas_reset(&xas); } } - xas_store(&xas, page); + xas_store(&xas, folio); if (xas_error(&xas)) goto unlock; @@ -890,7 +890,7 @@ static noinline int __add_to_page_cache_locked(struct page *page, /* hugetlb pages do not participate in page cache accounting */ if (!huge) - __inc_lruvec_page_state(page, NR_FILE_PAGES); + __inc_lruvec_page_state(&folio->page, NR_FILE_PAGES); unlock: xas_unlock_irq(&xas); } while (xas_nomem(&xas, gfp)); @@ -900,12 +900,12 @@ static noinline int __add_to_page_cache_locked(struct page *page, goto error; } - trace_mm_filemap_add_to_page_cache(page); + trace_mm_filemap_add_to_page_cache(&folio->page); return 0; error: - page->mapping = NULL; + folio->page.mapping = NULL; /* Leave page->index set: truncation relies upon it */ - put_page(page); + put_folio(folio); return error; } ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO); @@ -925,22 +925,22 @@ ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO); int add_to_page_cache_locked(struct page *page, struct address_space *mapping, pgoff_t offset, gfp_t gfp_mask) { - return __add_to_page_cache_locked(page, mapping, offset, + return __add_to_page_cache_locked(page_folio(page), mapping, offset, gfp_mask, NULL); } EXPORT_SYMBOL(add_to_page_cache_locked); -int add_to_page_cache_lru(struct page *page, struct address_space *mapping, - pgoff_t offset, gfp_t gfp_mask) +int folio_add_to_page_cache(struct folio *folio, struct address_space *mapping, + pgoff_t index, gfp_t gfp_mask) { void *shadow = NULL; int ret; - __SetPageLocked(page); - ret = __add_to_page_cache_locked(page, mapping, offset, + __SetFolioLocked(folio); + ret = 
__add_to_page_cache_locked(folio, mapping, index, gfp_mask, &shadow); if (unlikely(ret)) - __ClearPageLocked(page); + __ClearFolioLocked(folio); else { /* * The page might have been evicted from cache only @@ -950,14 +950,14 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping, * data from the working set, only to cache data that will * get overwritten with something else, is a waste of memory. */ - WARN_ON_ONCE(PageActive(page)); + WARN_ON_ONCE(FolioActive(folio)); if (!(gfp_mask & __GFP_WRITE) && shadow) - workingset_refault(page, shadow); - lru_cache_add(page); + workingset_refault(&folio->page, shadow); + lru_cache_add(&folio->page); } return ret; } -EXPORT_SYMBOL_GPL(add_to_page_cache_lru); +EXPORT_SYMBOL_GPL(folio_add_to_page_cache); #ifdef CONFIG_NUMA struct page *__page_cache_alloc(gfp_t gfp) @@ -1817,7 +1817,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, if (fgp_flags & FGP_ACCESSED) __SetFolioReferenced(folio); - err = add_to_page_cache_lru(page, mapping, index, gfp_mask); + err = folio_add_to_page_cache(folio, mapping, index, gfp_mask); if (unlikely(err)) { put_folio(folio); page = NULL; @@ -1826,8 +1826,8 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index, } /* - * add_to_page_cache_lru locks the page, and for mmap we expect - * an unlocked page. + * folio_add_to_page_cache locks the page, and for mmap we + * expect an unlocked page. */ if (page && (fgp_flags & FGP_FOR_MMAP)) unlock_folio(folio); From patchwork Tue Dec 8 19:46:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11959483 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 689FAC4361B for ; Tue, 8 Dec 2020 20:11:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F010223A02 for ; Tue, 8 Dec 2020 20:11:56 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F010223A02 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 895256B005D; Tue, 8 Dec 2020 15:11:56 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 845CC6B0068; Tue, 8 Dec 2020 15:11:56 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 734856B006C; Tue, 8 Dec 2020 15:11:56 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0094.hostedemail.com [216.40.44.94]) by kanga.kvack.org (Postfix) with ESMTP id 5D6886B005D for ; Tue, 8 Dec 2020 15:11:56 -0500 (EST) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 18475180AD80F for ; Tue, 8 Dec 2020 20:11:56 +0000 (UTC) X-FDA: 77571211032.29.sheep97_550a71a273e9 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com 
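The shadow value threaded back out of __add_to_page_cache_locked() is what lets folio_add_to_page_cache() call workingset_refault() for pages that were recently evicted. A small userspace sketch of that handshake follows; the tag bit and helper names are invented for the example and stand in for XArray value entries.

/*
 * Userspace sketch of the shadow-entry handshake: inserting into a
 * slot reports whatever was stored there, and if that was an
 * "evicted recently" marker the caller records a refault.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SHADOW_TAG	1UL		/* low bit marks a shadow entry */

static uintptr_t cache_slot;		/* one page-cache slot */

static bool is_shadow(uintptr_t entry)
{
	return entry & SHADOW_TAG;
}

/* Store @obj in the slot and report the previous contents via @oldp. */
static int slot_insert(void *obj, uintptr_t *oldp)
{
	*oldp = cache_slot;
	cache_slot = (uintptr_t)obj;
	return 0;
}

static void record_refault(uintptr_t shadow)
{
	printf("refault, eviction info %#lx\n", (unsigned long)(shadow >> 1));
}

int main(void)
{
	int object = 42;
	uintptr_t old = 0;

	cache_slot = (5UL << 1) | SHADOW_TAG;	/* left behind by an eviction */

	slot_insert(&object, &old);
	if (is_shadow(old))
		record_refault(old);	/* the page was recently evicted */
	return 0;
}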
[10.5.16.251]) by smtpin29.hostedemail.com (Postfix) with ESMTP id EB31F180868D8 for ; Tue, 8 Dec 2020 20:11:55 +0000 (UTC) X-HE-Tag: sheep97_550a71a273e9 X-Filterd-Recvd-Size: 5550 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf46.hostedemail.com (Postfix) with ESMTP for ; Tue, 8 Dec 2020 20:11:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=T2NKufslsV4zHnMND/Rek7wwwX3gR2vSveo5C0g9xCM=; b=N1NRhNaSnVk8G1XUcYN1U+ABvW ztPmgw8KbK64ZoLwyIZcu50+YxkTDV2EINrc8kV972LWHAmL6B5TJ5WQWY6lZ4EacCQg8k0v1MV8d fey5ruQ58/6rzMsSq6XjNzSJgCfvQ/sMAySZdMAo6D4IWdYR+i80AfGCGlpM1cRAGXE62Spsen4KH 2N83yIBvl9/X443/Er6X+hlIs7sBqtUtOA5QWHLvMFI/NT3arW/U5mwnHoumbqcFuz0aGF6z9H6Zd RDqqlfOmSvrfmslN2HUISUlYfn/1E/UEb4/u+x04b+sPYOYWLsBDdhWz7+TH6mDZNMvvafEsgTDuN AefuqMaA==; Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1kmiwu-00051J-9O; Tue, 08 Dec 2020 19:47:00 +0000 From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [RFC PATCH 11/11] mm/swap: Convert rotate_reclaimable_page to folio Date: Tue, 8 Dec 2020 19:46:53 +0000 Message-Id: <20201208194653.19180-12-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20201208194653.19180-1-willy@infradead.org> References: <20201208194653.19180-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Move the declaration into mm/internal.h and rename the function to rotate_reclaimable_folio(). This eliminates all five of the calls to compound_head() in this function. 
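The five calls come from the head-page lookup hidden inside the page-flag tests and get_page(): each one has to resolve a possible tail page before touching the flags. The folio variants can skip that step because a folio is never a tail page. A rough stand-alone model of the difference, with invented definitions rather than the kernel ones:

/*
 * Model of why folio flag tests are cheaper: the page variant must
 * resolve a possible tail page first, the folio variant does not.
 */
#include <stdbool.h>

struct page {
	unsigned long flags;
	unsigned long compound_head;	/* head pointer | 1 for tail pages */
};

struct folio { struct page page; };

static struct page *compound_head(struct page *page)
{
	unsigned long head = page->compound_head;

	if (head & 1)
		return (struct page *)(head - 1);
	return page;
}

#define PG_lru 0

/* page variant: one hidden compound_head() per test */
static bool PageLRU(struct page *page)
{
	return compound_head(page)->flags & (1UL << PG_lru);
}

/* folio variant: the head lookup already happened when the folio was made */
static bool FolioLRU(struct folio *folio)
{
	return folio->page.flags & (1UL << PG_lru);
}

int main(void)
{
	struct folio folio = { .page = { .flags = 1UL << PG_lru } };

	return PageLRU(&folio.page) && FolioLRU(&folio) ? 0 : 1;
}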
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/swap.h | 1 - mm/filemap.c | 2 +- mm/internal.h | 1 + mm/page_io.c | 4 ++-- mm/swap.c | 12 ++++++------ 5 files changed, 10 insertions(+), 10 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 5bba15ac5a2e..5aaca35ce887 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -343,7 +343,6 @@ extern void lru_add_drain(void); extern void lru_add_drain_cpu(int cpu); extern void lru_add_drain_cpu_zone(struct zone *zone); extern void lru_add_drain_all(void); -extern void rotate_reclaimable_page(struct page *page); extern void deactivate_file_page(struct page *page); extern void deactivate_page(struct page *page); extern void mark_page_lazyfree(struct page *page); diff --git a/mm/filemap.c b/mm/filemap.c index 297144524f58..93e40e9ac357 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1477,7 +1477,7 @@ void end_page_writeback(struct page *page) */ if (FolioReclaim(folio)) { ClearFolioReclaim(folio); - rotate_reclaimable_page(&folio->page); + rotate_reclaimable_folio(folio); } /* diff --git a/mm/internal.h b/mm/internal.h index 8e9c660f33ca..f089535b5d86 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -35,6 +35,7 @@ void page_writeback_init(void); vm_fault_t do_swap_page(struct vm_fault *vmf); +void rotate_reclaimable_folio(struct folio *folio); void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma, unsigned long floor, unsigned long ceiling); diff --git a/mm/page_io.c b/mm/page_io.c index 9bca17ecc4df..1fc0a579da58 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -57,7 +57,7 @@ void end_swap_bio_write(struct bio *bio) * Also print a dire warning that things will go BAD (tm) * very quickly. * - * Also clear PG_reclaim to avoid rotate_reclaimable_page() + * Also clear PG_reclaim to avoid rotate_reclaimable_folio() */ set_page_dirty(page); pr_alert("Write-error on swap-device (%u:%u:%llu)\n", @@ -341,7 +341,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc, * temporary failure if the system has limited * memory for allocating transmit buffers. * Mark the page dirty and avoid - * rotate_reclaimable_page but rate-limit the + * rotate_reclaimable_folio but rate-limit the * messages but do not flag PageError like * the normal direct-to-bio case as it could * be temporary. diff --git a/mm/swap.c b/mm/swap.c index 5022dfe388ad..9aadde8aea9b 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -241,19 +241,19 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec) * reclaim. If it still appears to be reclaimable, move it to the tail of the * inactive list. * - * rotate_reclaimable_page() must disable IRQs, to prevent nasty races. + * rotate_reclaimable_folio() must disable IRQs, to prevent nasty races. */ -void rotate_reclaimable_page(struct page *page) +void rotate_reclaimable_folio(struct folio *folio) { - if (!PageLocked(page) && !PageDirty(page) && - !PageUnevictable(page) && PageLRU(page)) { + if (!FolioLocked(folio) && !FolioDirty(folio) && + !FolioUnevictable(folio) && FolioLRU(folio)) { struct pagevec *pvec; unsigned long flags; - get_page(page); + get_folio(folio); local_lock_irqsave(&lru_rotate.lock, flags); pvec = this_cpu_ptr(&lru_rotate.pvec); - if (!pagevec_add(pvec, page) || PageCompound(page)) + if (!pagevec_add(pvec, &folio->page) || FolioHead(folio)) pagevec_lru_move_fn(pvec, pagevec_move_tail_fn); local_unlock_irqrestore(&lru_rotate.lock, flags); }
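The conversion keeps the existing batching strategy: entries accumulate in a per-CPU pagevec under the local lock, and the LRU move only runs when the vector fills up or a compound folio is queued. A stand-alone sketch of that batch-then-flush pattern, with invented names and sizes (PAGEVEC_SIZE is similar in spirit to the constant used here):

/*
 * Userspace sketch of the batching pattern in rotate_reclaimable_folio():
 * stage entries in a small vector and run the expensive operation only
 * when it fills up or the caller forces a flush.
 */
#include <stdbool.h>
#include <stdio.h>

#define VEC_SIZE 15

struct vec {
	unsigned int nr;
	void *items[VEC_SIZE];
};

/* Add an item; report whether there is still room for more. */
static bool vec_add(struct vec *v, void *item)
{
	v->items[v->nr++] = item;
	return v->nr < VEC_SIZE;
}

/* The expensive part: process everything staged so far. */
static void vec_flush(struct vec *v)
{
	printf("moving %u entries\n", v->nr);
	v->nr = 0;
}

static struct vec rotate_vec;		/* stands in for the per-CPU pagevec */

static void stage_for_rotation(void *item, bool flush_now)
{
	/* in the kernel this runs under a local lock with IRQs disabled */
	if (!vec_add(&rotate_vec, item) || flush_now)
		vec_flush(&rotate_vec);
}

int main(void)
{
	int dummy;

	for (int i = 0; i < 40; i++)
		stage_for_rotation(&dummy, false);
	stage_for_rotation(&dummy, true);	/* e.g. a large folio: flush now */
	return 0;
}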