From patchwork Sat Mar 20 05:40:38 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152249
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org
Subject: [PATCH v5 01/27] fs/cachefiles: Remove wait_bit_key layout dependency
Date: Sat, 20 Mar 2021 05:40:38 +0000
Message-Id: <20210320054104.1300774-2-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

Cachefiles was relying on wait_page_key and wait_bit_key being the same
layout, which is fragile.  Now that wait_page_key is exposed in the
pagemap.h header, we can remove that fragility.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/cachefiles/rdwr.c    | 7 +++----
 include/linux/pagemap.h | 1 -
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index e027c718ca01..8ffc40e84a59 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -24,17 +24,16 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode,
 		container_of(wait, struct cachefiles_one_read, monitor);
 	struct cachefiles_object *object;
 	struct fscache_retrieval *op = monitor->op;
-	struct wait_bit_key *key = _key;
+	struct wait_page_key *key = _key;
 	struct page *page = wait->private;
 
 	ASSERT(key);
 
 	_enter("{%lu},%u,%d,{%p,%u}",
 	       monitor->netfs_page->index, mode, sync,
-	       key->flags, key->bit_nr);
+	       key->page, key->bit_nr);
 
-	if (key->flags != &page->flags ||
-	    key->bit_nr != PG_locked)
+	if (key->page != page || key->bit_nr != PG_locked)
 		return 0;
 
 	_debug("--- monitor %p %lx ---", page, page->flags);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index f68fe61c1dec..139678f382ff 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -574,7 +574,6 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
 	return pgoff;
 }
 
-/* This has the same layout as wait_bit_key - see fs/cachefiles/rdwr.c */
 struct wait_page_key {
 	struct page *page;
 	int bit_nr;
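For orientation, here is a minimal sketch -- not part of the patch -- of how
any wake function sitting on a page's wait queue sees a wait_page_key after
this change.  The my_read_done() name is hypothetical; the shape follows
cachefiles_read_waiter() above.

static int my_read_done(wait_queue_entry_t *wait, unsigned mode,
			int sync, void *_key)
{
	struct wait_page_key *key = _key;
	struct page *page = wait->private;

	/* Ignore wakeups for anything but our page's PG_locked bit. */
	if (key->page != page || key->bit_nr != PG_locked)
		return 0;

	/* The page was unlocked; dequeue ourselves and report handled. */
	list_del_init(&wait->entry);
	return 1;
}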
From patchwork Sat Mar 20 05:40:39 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152247
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org
Subject: [PATCH v5 02/27] mm/writeback: Add wait_on_page_writeback_killable
Date: Sat, 20 Mar 2021 05:40:39 +0000
Message-Id: <20210320054104.1300774-3-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

This is the killable version of wait_on_page_writeback.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/pagemap.h |  1 +
 mm/page-writeback.c     | 16 ++++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 139678f382ff..8c844ba67785 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -698,6 +698,7 @@ static inline int wait_on_page_locked_killable(struct page *page)
 
 int put_and_wait_on_page_locked(struct page *page, int state);
 void wait_on_page_writeback(struct page *page);
+int wait_on_page_writeback_killable(struct page *page);
 extern void end_page_writeback(struct page *page);
 void wait_for_stable_page(struct page *page);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index f6c2c3165d4d..5e761fb62800 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2830,6 +2830,22 @@ void wait_on_page_writeback(struct page *page)
 }
 EXPORT_SYMBOL_GPL(wait_on_page_writeback);
 
+/*
+ * Wait for a page to complete writeback.  Returns -EINTR if we get a
+ * fatal signal while waiting.
+ */
+int wait_on_page_writeback_killable(struct page *page)
+{
+	while (PageWriteback(page)) {
+		trace_wait_on_page_writeback(page, page_mapping(page));
+		if (wait_on_page_bit_killable(page, PG_writeback))
+			return -EINTR;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(wait_on_page_writeback_killable);
+
 /**
  * wait_for_stable_page() - wait for writeback to finish, if necessary.
  * @page:	The page to wait on.
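A minimal usage sketch, not part of the patch: a hypothetical filesystem
->page_mkwrite() handler shaped like the AFS one converted in the next
patch.  A fatal signal turns the wait into -EINTR, and the fault is
retried instead of sleeping uninterruptibly.

static vm_fault_t my_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;

	/* Non-zero return means a fatal signal arrived while waiting. */
	if (wait_on_page_writeback_killable(page))
		return VM_FAULT_RETRY;

	if (lock_page_killable(page) < 0)
		return VM_FAULT_RETRY;

	/* ... dirty the page and return with it locked ... */
	return VM_FAULT_LOCKED;
}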
From patchwork Sat Mar 20 05:40:40 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152253
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org
Subject: [PATCH v5 03/27] afs: Use wait_on_page_writeback_killable
Date: Sat, 20 Mar 2021 05:40:40 +0000
Message-Id: <20210320054104.1300774-4-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

Open-coding this function meant it missed out on the recent bugfix
for waiters being woken by a delayed wake event from a previous
instantiation of the page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/afs/write.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index b2e03de09c24..106a864b6a93 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -850,8 +850,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
 		return VM_FAULT_RETRY;
 #endif
 
-	if (PageWriteback(page) &&
-	    wait_on_page_bit_killable(page, PG_writeback) < 0)
+	if (wait_on_page_writeback_killable(page))
 		return VM_FAULT_RETRY;
 
 	if (lock_page_killable(page) < 0)
From patchwork Sat Mar 20 05:40:41 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152255
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org
Subject: [PATCH v5 04/27] mm: Introduce struct folio
Date: Sat, 20 Mar 2021 05:40:41 +0000
Message-Id: <20210320054104.1300774-5-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

A struct folio is a new abstraction for a head-or-single page.  A
function which takes a struct folio argument declares that it will
operate on the entire (possibly compound) page, not just PAGE_SIZE bytes.
In return, the caller guarantees that the pointer it is passing does
not point to a tail page.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h       | 78 ++++++++++++++++++++++++++++++++++++++++
 include/linux/mm_types.h | 36 +++++++++++++++++++
 2 files changed, 114 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cb1e191da319..9b7e3fa12fd3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -934,6 +934,20 @@ static inline unsigned int compound_order(struct page *page)
 	return page[1].compound_order;
 }
 
+/**
+ * folio_order - The allocation order of a folio.
+ * @folio: The folio.
+ *
+ * A folio is composed of 2^order pages.  See get_order() for the
+ * definition of order.
+ *
+ * Return: The order of the folio.
+ */
+static inline unsigned int folio_order(struct folio *folio)
+{
+	return compound_order(&folio->page);
+}
+
 static inline bool hpage_pincount_available(struct page *page)
 {
 	/*
@@ -1579,6 +1593,69 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 #endif
 }
 
+/**
+ * folio_nr_pages - The number of pages in the folio.
+ * @folio: The folio.
+ *
+ * Return: A number which is a power of two.
+ */
+static inline unsigned long folio_nr_pages(struct folio *folio)
+{
+	return compound_nr(&folio->page);
+}
+
+/**
+ * folio_next - Move to the next physical folio.
+ * @folio: The folio we're currently operating on.
+ *
+ * If you have physically contiguous memory which may span more than
+ * one folio (eg a &struct bio_vec), use this function to move from one
+ * folio to the next.  Do not use it if the memory is only virtually
+ * contiguous as the folios are almost certainly not adjacent to each
+ * other.  This is the folio equivalent to writing ``page++``.
+ *
+ * Context: We assume that the folios are refcounted and/or locked at a
+ * higher level and do not adjust the reference counts.
+ * Return: The next struct folio.
+ */
+static inline struct folio *folio_next(struct folio *folio)
+{
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+	return (struct folio *)nth_page(&folio->page, folio_nr_pages(folio));
+#else
+	return folio + folio_nr_pages(folio);
+#endif
+}
+
+/**
+ * folio_shift - The number of bits covered by this folio.
+ * @folio: The folio.
+ *
+ * A folio contains a number of bytes which is a power-of-two in size.
+ * This function tells you which power-of-two the folio is.
+ *
+ * Context: The caller should have a reference on the folio to prevent
+ * it from being split.  It is not necessary for the folio to be locked.
+ * Return: The base-2 logarithm of the size of this folio.
+ */
+static inline unsigned int folio_shift(struct folio *folio)
+{
+	return PAGE_SHIFT + folio_order(folio);
+}
+
+/**
+ * folio_size - The number of bytes in a folio.
+ * @folio: The folio.
+ *
+ * Context: The caller should have a reference on the folio to prevent
+ * it from being split.  It is not necessary for the folio to be locked.
+ * Return: The number of bytes in this folio.
+ */
+static inline size_t folio_size(struct folio *folio)
+{
+	return PAGE_SIZE << folio_order(folio);
+}
+
 /*
  * Some inline functions in vmstat.h depend on page_zone()
  */
@@ -1683,6 +1760,7 @@ extern void pagefault_out_of_memory(void);
 
 #define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)
 #define offset_in_thp(page, p)	((unsigned long)(p) & (thp_size(page) - 1))
+#define offset_in_folio(folio, p) ((unsigned long)(p) & (folio_size(folio) - 1))
 
 /*
  * Flags passed to show_mem() and show_free_areas() to suppress output in
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6613b26a8894..4fc0b230d3ea 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -224,6 +224,42 @@ struct page {
 #endif
 } _struct_page_alignment;
 
+/**
+ * struct folio - Represents a contiguous set of bytes.
+ * @page: Either a base (order-0) page or the head page of a compound page.
+ *
+ * A folio is a physically, virtually and logically contiguous set
+ * of bytes.  It is a power-of-two in size, and it is aligned to that
+ * same power-of-two.  If it is found in the page cache, it is at a file
+ * offset which is a multiple of that power-of-two.  It is at least as
+ * large as PAGE_SIZE.
+ */
+struct folio {
+	struct page page;
+};
+
+/**
+ * page_folio - Converts from page to folio.
+ * @page: The page.
+ *
+ * Every page is part of a folio.  This function cannot be called on a
+ * NULL pointer.
+ *
+ * Context: No reference, nor lock is required on @page.  If the caller
+ * does not hold a reference, this call may race with a folio split, so
+ * it should re-check the folio still contains this page after gaining
+ * a reference on the folio.
+ * Return: The folio which contains this page.
+ */
+static inline struct folio *page_folio(struct page *page)
+{
+	unsigned long head = READ_ONCE(page->compound_head);
+
+	if (unlikely(head & 1))
+		return (struct folio *)(head - 1);
+	return (struct folio *)page;
+}
+
 static inline atomic_t *compound_mapcount_ptr(struct page *page)
 {
 	return &page[1].compound_mapcount;
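A minimal sketch, not part of the patch, of the new type in use.  Given
any page -- head or tail -- page_folio() returns the folio describing
the whole allocation, and the size helpers then apply to all of it.  The
my_describe() name is hypothetical.

static void my_describe(struct page *page)
{
	struct folio *folio = page_folio(page);

	/* For an order-2 compound page: order 2, 4 pages, 16KiB on 4KiB-page systems. */
	pr_debug("order %u, %lu pages, %zu bytes\n",
		 folio_order(folio), folio_nr_pages(folio),
		 folio_size(folio));
}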
From patchwork Sat Mar 20 05:40:42 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152257
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org, Zi Yan
Subject: [PATCH v5 05/27] mm: Add folio_pgdat and folio_zone
Date: Sat, 20 Mar 2021 05:40:42 +0000
Message-Id: <20210320054104.1300774-6-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

These are just convenience wrappers for callers with folios; pgdat and
zone can be reached from tail pages as well as head pages.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
 include/linux/mm.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9b7e3fa12fd3..e176e9c9990f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1544,6 +1544,16 @@ static inline pg_data_t *page_pgdat(const struct page *page)
 	return NODE_DATA(page_to_nid(page));
 }
 
+static inline struct zone *folio_zone(const struct folio *folio)
+{
+	return page_zone(&folio->page);
+}
+
+static inline pg_data_t *folio_pgdat(const struct folio *folio)
+{
+	return page_pgdat(&folio->page);
+}
+
 #ifdef SECTION_IN_PAGE_FLAGS
 static inline void set_page_section(struct page *page, unsigned long section)
 {
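A small sketch of a hypothetical helper built on the new wrappers;
pg_data_t exposes node_id, so a NUMA placement check needs no explicit
head-page lookup.

static inline bool my_folio_on_node(struct folio *folio, int nid)
{
	/* Works for any folio; tail pages never reach here by contract. */
	return folio_pgdat(folio)->node_id == nid;
}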
From patchwork Sat Mar 20 05:40:43 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152263
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org
Subject: [PATCH v5 06/27] mm/vmstat: Add functions to account folio statistics
Date: Sat, 20 Mar 2021 05:40:43 +0000
Message-Id: <20210320054104.1300774-7-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

Allow page counters to be more readily modified by callers which have
a folio.  Name these wrappers with 'stat' instead of 'state' as requested
by Linus here:
https://lore.kernel.org/linux-mm/CAHk-=wj847SudR-kt+46fT3+xFFgiwpgThvm7DJWGdi4cVrbnQ@mail.gmail.com/

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/vmstat.h | 107 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 3299cd69e4ca..d287d7c31b8f 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -402,6 +402,78 @@ static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_pageset *pset) { }
 #endif		/* CONFIG_SMP */
 
+static inline void __zone_stat_mod_folio(struct folio *folio,
+		enum zone_stat_item item, long nr)
+{
+	__mod_zone_page_state(folio_zone(folio), item, nr);
+}
+
+static inline void __zone_stat_add_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	__mod_zone_page_state(folio_zone(folio), item, folio_nr_pages(folio));
+}
+
+static inline void __zone_stat_sub_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	__mod_zone_page_state(folio_zone(folio), item, -folio_nr_pages(folio));
+}
+
+static inline void zone_stat_mod_folio(struct folio *folio,
+		enum zone_stat_item item, long nr)
+{
+	mod_zone_page_state(folio_zone(folio), item, nr);
+}
+
+static inline void zone_stat_add_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	mod_zone_page_state(folio_zone(folio), item, folio_nr_pages(folio));
+}
+
+static inline void zone_stat_sub_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	mod_zone_page_state(folio_zone(folio), item, -folio_nr_pages(folio));
+}
+
+static inline void __node_stat_mod_folio(struct folio *folio,
+		enum node_stat_item item, long nr)
+{
+	__mod_node_page_state(folio_pgdat(folio), item, nr);
+}
+
+static inline void __node_stat_add_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	__mod_node_page_state(folio_pgdat(folio), item, folio_nr_pages(folio));
+}
+
+static inline void __node_stat_sub_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	__mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
+}
+
+static inline void node_stat_mod_folio(struct folio *folio,
+		enum node_stat_item item, long nr)
+{
+	mod_node_page_state(folio_pgdat(folio), item, nr);
+}
+
+static inline void node_stat_add_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	mod_node_page_state(folio_pgdat(folio), item, folio_nr_pages(folio));
+}
+
+static inline void node_stat_sub_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
+}
+
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 					     int migratetype)
 {
@@ -530,6 +602,24 @@ static inline void __dec_lruvec_page_state(struct page *page,
 	__mod_lruvec_page_state(page, idx, -1);
 }
 
+static inline void __lruvec_stat_mod_folio(struct folio *folio,
+					   enum node_stat_item idx, int val)
+{
+	__mod_lruvec_page_state(&folio->page, idx, val);
+}
+
+static inline void __lruvec_stat_add_folio(struct folio *folio,
+					   enum node_stat_item idx)
+{
+	__lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
+}
+
+static inline void __lruvec_stat_sub_folio(struct folio *folio,
+					   enum node_stat_item idx)
+{
+	__lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
+}
+
 static inline void inc_lruvec_page_state(struct page *page,
 					 enum node_stat_item idx)
 {
@@ -542,4 +632,21 @@ static inline void dec_lruvec_page_state(struct page *page,
 	mod_lruvec_page_state(page, idx, -1);
 }
 
+static inline void lruvec_stat_mod_folio(struct folio *folio,
+					 enum node_stat_item idx, int val)
+{
+	mod_lruvec_page_state(&folio->page, idx, val);
+}
+
+static inline void lruvec_stat_add_folio(struct folio *folio,
+					 enum node_stat_item idx)
+{
+	lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
+}
+
+static inline void lruvec_stat_sub_folio(struct folio *folio,
+					 enum node_stat_item idx)
+{
+	lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
+}
 #endif /* _LINUX_VMSTAT_H */
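A sketch of a hypothetical caller: the add/sub variants scale the counter
by folio_nr_pages(), so accounting a compound page becomes a single call
rather than a mod_node_page_state(page_pgdat(page), ...) with an explicit
page count.

static void my_account_file_cache(struct folio *folio)
{
	/* Count every page of the folio as file cache on its node. */
	node_stat_add_folio(folio, NR_FILE_PAGES);
}

static void my_unaccount_file_cache(struct folio *folio)
{
	node_stat_sub_folio(folio, NR_FILE_PAGES);
}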
From patchwork Sat Mar 20 05:40:44 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152259
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org, Zi Yan
Subject: [PATCH v5 07/27] mm/debug: Add VM_BUG_ON_FOLIO and VM_WARN_ON_ONCE_FOLIO
Date: Sat, 20 Mar 2021 05:40:44 +0000
Message-Id: <20210320054104.1300774-8-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

These are the folio equivalents of VM_BUG_ON_PAGE and
VM_WARN_ON_ONCE_PAGE.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
 include/linux/mmdebug.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 5d0767cb424a..77d24e1dcaec 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -23,6 +23,13 @@ void dump_mm(const struct mm_struct *mm);
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_BUG_ON_FOLIO(cond, folio)					\
+	do {								\
+		if (unlikely(cond)) {					\
+			dump_page(&folio->page, "VM_BUG_ON_FOLIO(" __stringify(cond)")");\
+			BUG();						\
+		}							\
+	} while (0)
 #define VM_BUG_ON_VMA(cond, vma)					\
 	do {								\
 		if (unlikely(cond)) {					\
@@ -48,6 +55,17 @@ void dump_mm(const struct mm_struct *mm);
 	}								\
 	unlikely(__ret_warn_once);					\
 })
+#define VM_WARN_ON_ONCE_FOLIO(cond, folio)	({			\
+	static bool __section(".data.once") __warned;			\
+	int __ret_warn_once = !!(cond);					\
+									\
+	if (unlikely(__ret_warn_once && !__warned)) {			\
+		dump_page(&folio->page, "VM_WARN_ON_ONCE_FOLIO(" __stringify(cond)")");\
+		__warned = true;					\
+		WARN_ON(1);						\
+	}								\
+	unlikely(__ret_warn_once);					\
+})
 
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
@@ -56,11 +74,13 @@ void dump_mm(const struct mm_struct *mm);
 #else
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
+#define VM_BUG_ON_FOLIO(cond, folio) VM_BUG_ON(cond)
 #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #endif
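A usage sketch (hypothetical caller): like their page counterparts, these
asserts dump the offending folio and then BUG or WARN, and compile away
entirely unless CONFIG_DEBUG_VM is enabled.

static void my_check_folio(struct folio *folio)
{
	/* A folio pointer must never reference a tail page. */
	VM_BUG_ON_FOLIO(PageTail(&folio->page), folio);
	VM_WARN_ON_ONCE_FOLIO(page_ref_count(&folio->page) == 0, folio);
}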
From patchwork Sat Mar 20 05:40:45 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152261
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org, Zi Yan
Subject: [PATCH v5 08/27] mm: Add put_folio
Date: Sat, 20 Mar 2021 05:40:45 +0000
Message-Id: <20210320054104.1300774-9-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

If we know we have a folio, we can call put_folio() instead of put_page()
and save the overhead of calling compound_head().  It also skips the
devmap checks.

This commit looks like it should be a no-op, but actually saves 1714
bytes of text with the distro-derived config that I'm testing.  Some
functions grow a little while others shrink.  I presume the compiler is
making different inlining decisions.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
 include/linux/mm.h | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e176e9c9990f..5052479febc7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1226,9 +1226,28 @@ static inline __must_check bool try_get_page(struct page *page)
 	return true;
 }
 
+/**
+ * put_folio - Decrement the reference count on a folio.
+ * @folio: The folio.
+ *
+ * If the folio's reference count reaches zero, the memory will be
+ * released back to the page allocator and may be used by another
+ * allocation immediately.  Do not access the memory or the struct folio
+ * after calling put_folio() unless you can be sure that it wasn't the
+ * last reference.
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void put_folio(struct folio *folio)
+{
+	if (put_page_testzero(&folio->page))
+		__put_page(&folio->page);
+}
+
 static inline void put_page(struct page *page)
 {
-	page = compound_head(page);
+	struct folio *folio = page_folio(page);
 
 	/*
 	 * For devmap managed pages we need to catch refcount transition from
@@ -1236,13 +1255,12 @@ static inline void put_page(struct page *page)
 	 * need to inform the device driver through callback. See
 	 * include/linux/memremap.h and HMM for details.
 	 */
-	if (page_is_devmap_managed(page)) {
-		put_devmap_managed_page(page);
+	if (page_is_devmap_managed(&folio->page)) {
+		put_devmap_managed_page(&folio->page);
 		return;
 	}
 
-	if (put_page_testzero(page))
-		__put_page(page);
+	put_folio(folio);
 }
From patchwork Sat Mar 20 05:40:46 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152265
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org, Zi Yan
Subject: [PATCH v5 09/27] mm: Add get_folio
Date: Sat, 20 Mar 2021 05:40:46 +0000
Message-Id: <20210320054104.1300774-10-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

If we know we have a folio, we can call get_folio() instead of get_page()
and save the overhead of calling compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
---
 include/linux/mm.h | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5052479febc7..8fc7b04a1438 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1198,18 +1198,26 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 }
 
 /* 127: arbitrary random number, small enough to assemble well */
-#define page_ref_zero_or_close_to_overflow(page) \
-	((unsigned int) page_ref_count(page) + 127u <= 127u)
+#define folio_ref_zero_or_close_to_overflow(folio) \
+	((unsigned int) page_ref_count(&folio->page) + 127u <= 127u)
+
+/**
+ * get_folio - Increment the reference count on a folio.
+ * @folio: The folio.
+ *
+ * Context: May be called in any context, as long as you know that
+ * you have a refcount on the folio.  If you do not already have one,
+ * try_grab_page() may be the right interface for you to use.
+ */
+static inline void get_folio(struct folio *folio)
+{
+	VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
+	page_ref_inc(&folio->page);
+}
 
 static inline void get_page(struct page *page)
 {
-	page = compound_head(page);
-	/*
-	 * Getting a normal page or the head of a compound page
-	 * requires to already have an elevated page->_refcount.
-	 */
-	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
-	page_ref_inc(page);
+	get_folio(page_folio(page));
 }
 
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
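A minimal sketch, not part of the series, of the calling convention the
two previous patches establish: once a caller holds a struct folio,
taking and dropping references no longer pays for compound_head() on
each operation.

static void my_borrow_folio(struct folio *folio)
{
	get_folio(folio);	/* no compound_head() lookup */
	/* ... operate on all folio_nr_pages(folio) pages ... */
	put_folio(folio);
}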
From patchwork Sat Mar 20 05:40:47 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12152277
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org
Subject: [PATCH v5 10/27] mm: Create FolioFlags
Date: Sat, 20 Mar 2021 05:40:47 +0000
Message-Id: <20210320054104.1300774-11-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>

These new functions are the folio analogues of the PageFlags functions.
If CONFIG_DEBUG_VM_PGFLAGS is enabled, we check the folio is not a tail
page at every invocation.  Note that this will also catch the
PagePoisoned case as a poisoned page has every bit set, which would
include PageTail.

This saves 1727 bytes of text with the distro-derived config that I'm
testing due to removing a double call to compound_head() in
PageSwapCache().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/page-flags.h | 120 ++++++++++++++++++++++++++++++-------
 1 file changed, 100 insertions(+), 20 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 04a34c08e0a6..ec0e3eb6b85a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -212,6 +212,15 @@ static inline void page_init_poison(struct page *page, size_t size)
 }
 #endif
 
+static unsigned long *folio_flags(struct folio *folio, unsigned n)
+{
+	struct page *page = &folio->page;
+
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
+	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
+	return &page[n].flags;
+}
+
 /*
  * Page flags policies wrt compound pages
  *
@@ -256,34 +265,56 @@ static inline void page_init_poison(struct page *page, size_t size)
 		VM_BUG_ON_PGFLAGS(!PageHead(page), page);		\
 		PF_POISONED_CHECK(&page[1]); })
 
+/* Which page is the flag stored in */
+#define FOLIO_PF_ANY		0
+#define FOLIO_PF_HEAD		0
+#define FOLIO_PF_ONLY_HEAD	0
+#define FOLIO_PF_NO_TAIL	0
+#define FOLIO_PF_NO_COMPOUND	0
+#define FOLIO_PF_SECOND		1
+
 /*
  * Macros to create function definitions for page flags
  */
 #define TESTPAGEFLAG(uname, lname, policy)				\
+static __always_inline int Folio##uname(struct folio *folio)		\
+	{ return test_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
 static __always_inline int Page##uname(struct page *page)		\
 	{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
 
 #define SETPAGEFLAG(uname, lname, policy)				\
+static __always_inline void SetFolio##uname(struct folio *folio)	\
+	{ set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }	\
 static __always_inline void SetPage##uname(struct page *page)		\
 	{ set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define CLEARPAGEFLAG(uname, lname, policy)				\
+static __always_inline void ClearFolio##uname(struct folio *folio)	\
+	{ clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }	\
 static __always_inline void ClearPage##uname(struct page *page)	\
 	{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define __SETPAGEFLAG(uname, lname, policy)				\
+static __always_inline void __SetFolio##uname(struct folio *folio)	\
+	{ __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }	\
 static __always_inline void __SetPage##uname(struct page *page)	\
 	{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define __CLEARPAGEFLAG(uname, lname, policy)				\
+static __always_inline void __ClearFolio##uname(struct folio *folio)	\
+	{ __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
 static __always_inline void __ClearPage##uname(struct page *page)	\
 	{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define TESTSETFLAG(uname, lname, policy)				\
+static __always_inline int TestSetFolio##uname(struct folio *folio)	\
+	{ return test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
 static __always_inline int TestSetPage##uname(struct page *page)	\
 	{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
 
 #define TESTCLEARFLAG(uname, lname, policy)				\
+static __always_inline int TestClearFolio##uname(struct folio *folio)	\
+	{ return test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
 static __always_inline int TestClearPage##uname(struct page *page)	\
 	{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
 
@@ -302,21 +333,27 @@ static __always_inline int TestClearPage##uname(struct page *page)	\
 	TESTCLEARFLAG(uname, lname, policy)
 
 #define TESTPAGEFLAG_FALSE(uname)					\
+static inline int Folio##uname(const struct folio *folio) { return 0; } \
 static inline int Page##uname(const struct page *page) { return 0; }
 
 #define SETPAGEFLAG_NOOP(uname)						\
+static inline void SetFolio##uname(struct folio *folio) { }		\
 static inline void SetPage##uname(struct page *page) {  }
 
 #define CLEARPAGEFLAG_NOOP(uname)					\
+static inline void ClearFolio##uname(struct folio *folio) { }		\
 static inline void ClearPage##uname(struct page *page) {  }
 
 #define __CLEARPAGEFLAG_NOOP(uname)					\
+static inline void __ClearFolio##uname(struct folio *folio) { }	\
 static inline void __ClearPage##uname(struct page *page) {  }
 
 #define TESTSETFLAG_FALSE(uname)					\
+static inline int TestSetFolio##uname(struct folio *folio) { return 0; } \
 static inline int TestSetPage##uname(struct page *page) { return 0; }
 
 #define TESTCLEARFLAG_FALSE(uname)					\
+static inline int TestClearFolio##uname(struct folio *folio) { return 0; } \
 static inline int TestClearPage##uname(struct page *page) { return 0; }
 
 #define PAGEFLAG_FALSE(uname) TESTPAGEFLAG_FALSE(uname)			\
@@ -393,14 +430,18 @@ PAGEFLAG_FALSE(HighMem)
 #endif
 
 #ifdef CONFIG_SWAP
-static __always_inline int PageSwapCache(struct page *page)
+static __always_inline bool FolioSwapCache(struct folio *folio)
 {
-#ifdef CONFIG_THP_SWAP
-	page = compound_head(page);
-#endif
-	return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags);
+	return FolioSwapBacked(folio) &&
+			test_bit(PG_swapcache, folio_flags(folio, 0));
+
+}
+
+static __always_inline bool PageSwapCache(struct page *page)
+{
+	return FolioSwapCache(page_folio(page));
 }
+
 SETPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL)
 CLEARPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL)
 #else
@@ -478,10 +519,14 @@ static __always_inline int PageMappingFlags(struct page *page)
 	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
 }
 
-static __always_inline int PageAnon(struct page *page)
+static __always_inline bool FolioAnon(struct folio *folio)
 {
-	page = compound_head(page);
-	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
+	return ((unsigned long)folio->page.mapping & PAGE_MAPPING_ANON) != 0;
+}
+
+static __always_inline bool PageAnon(struct page *page)
+{
+	return FolioAnon(page_folio(page));
 }
 
 static __always_inline int __PageMovable(struct page *page)
@@ -509,18 +554,16 @@ TESTPAGEFLAG_FALSE(Ksm)
 
 u64 stable_page_flags(struct page *page);
 
-static inline int PageUptodate(struct page *page)
+static inline int FolioUptodate(struct folio *folio)
 {
-	int ret;
-	page = compound_head(page);
-	ret = test_bit(PG_uptodate, &(page)->flags);
+	int ret = test_bit(PG_uptodate, folio_flags(folio, 0));
 	/*
 	 * Must ensure that the data we read out of the page is loaded
 	 * _after_ we've loaded page->flags to check for PageUptodate.
 	 * We can skip the barrier if the page is not uptodate, because
 	 * we wouldn't be reading anything from it.
 	 *
-	 * See SetPageUptodate() for the other side of the story.
+	 * See SetFolioUptodate() for the other side of the story.
 	 */
 	if (ret)
 		smp_rmb();
@@ -528,23 +571,36 @@ static inline int PageUptodate(struct page *page)
 	return ret;
 }
 
-static __always_inline void __SetPageUptodate(struct page *page)
+static inline int PageUptodate(struct page *page)
+{
+	return FolioUptodate(page_folio(page));
+}
+
+static __always_inline void __SetFolioUptodate(struct folio *folio)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
 	smp_wmb();
-	__set_bit(PG_uptodate, &page->flags);
+	__set_bit(PG_uptodate, folio_flags(folio, 0));
 }
 
-static __always_inline void SetPageUptodate(struct page *page)
+static __always_inline void SetFolioUptodate(struct folio *folio)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
 	/*
 	 * Memory barrier must be issued before setting the PG_uptodate bit,
 	 * so that all previous stores issued in order to bring the page
 	 * uptodate are actually visible before PageUptodate becomes true.
 	 */
 	smp_wmb();
-	set_bit(PG_uptodate, &page->flags);
+	set_bit(PG_uptodate, folio_flags(folio, 0));
+}
+
+static __always_inline void __SetPageUptodate(struct page *page)
+{
+	__SetFolioUptodate((struct folio *)page);
+}
+
+static __always_inline void SetPageUptodate(struct page *page)
+{
+	SetFolioUptodate((struct folio *)page);
 }
 
 CLEARPAGEFLAG(Uptodate, uptodate, PF_NO_TAIL)
@@ -569,6 +625,17 @@ static inline void set_page_writeback_keepwrite(struct page *page)
 
 __PAGEFLAG(Head, head, PF_ANY) CLEARPAGEFLAG(Head, head, PF_ANY)
 
+/* Whether there are one or multiple pages in a folio */
+static inline bool FolioSingle(struct folio *folio)
+{
+	return !FolioHead(folio);
+}
+
+static inline bool FolioMulti(struct folio *folio)
+{
+	return FolioHead(folio);
+}
+
 static __always_inline void set_compound_head(struct page *page, struct page *head)
 {
 	WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
@@ -592,12 +659,15 @@ static inline void ClearPageCompound(struct page *page)
 #ifdef CONFIG_HUGETLB_PAGE
 int PageHuge(struct page *page);
 int PageHeadHuge(struct page *page);
+static inline bool FolioHuge(struct folio *folio)
+{
+	return PageHeadHuge(&folio->page);
+}
 #else
 TESTPAGEFLAG_FALSE(Huge)
 TESTPAGEFLAG_FALSE(HeadHuge)
 #endif
 
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /*
  * PageHuge() only returns true for hugetlbfs pages, but not for
@@ -613,6 +683,11 @@ static inline int PageTransHuge(struct page *page)
 	return PageHead(page);
 }
 
+static inline bool FolioTransHuge(struct folio *folio)
+{
+	return FolioHead(folio);
+}
+
 /*
  * PageTransCompound returns true for both transparent huge pages
  * and hugetlbfs pages, so it should only be called when it's known
@@ -844,6 +919,11 @@ static inline int page_has_private(struct page *page)
 	return !!(page->flags & PAGE_FLAGS_PRIVATE);
 }
 
+static inline bool folio_has_private(struct folio *folio)
+{
+	return page_has_private(&folio->page);
+}
+
 #undef PF_ANY
 #undef PF_HEAD
 #undef PF_ONLY_HEAD
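A sketch of the generated accessors in use (hypothetical caller).  Each
macro invocation now emits a Folio variant alongside the Page one, so
for example the existing PAGEFLAG(Dirty, dirty, PF_HEAD) line should
also provide FolioDirty(), SetFolioDirty() and ClearFolioDirty().

static void my_finish_read(struct folio *folio)
{
	if (!FolioUptodate(folio))
		SetFolioUptodate(folio);	/* pairs its smp_wmb() with the reader */
}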
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 11/27] mm: Handle per-folio private data
Date: Sat, 20 Mar 2021 05:40:48 +0000
Message-Id: <20210320054104.1300774-12-willy@infradead.org>

Add folio_private() and set_folio_private() which mirror page_private()
and set_page_private() -- ie folio private data is the same as page
private data.  The only difference is that these return a void *
instead of an unsigned long, which matches the majority of users.

Turn attach_page_private() into attach_folio_private() and reimplement
attach_page_private() as a wrapper.  No filesystem which uses page
private data currently supports compound pages, so we're free to define
the rules.  attach_page_private() may only be called on a head page; if
you want to add private data to a tail page, you can call
set_page_private() directly (and shouldn't increment the page refcount!
That should be done when adding private data to the head page / folio).

This saves 597 bytes of text with the distro-derived config that I'm
testing due to removing the calls to compound_head() in get_page() &
put_page().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm_types.h | 16 ++++++++++++++
 include/linux/pagemap.h  | 48 ++++++++++++++++++++++++----------------
 2 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4fc0b230d3ea..90086f93e9de 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -278,6 +278,12 @@ static inline atomic_t *compound_pincount_ptr(struct page *page)
 #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)

+/*
+ * page_private can be used on tail pages.  However, PagePrivate is only
+ * checked by the VM on the head page.  So page_private on the tail pages
+ * should be used for data that's ancillary to the head page (eg attaching
+ * buffer heads to tail pages after attaching buffer heads to the head page)
+ */
 #define page_private(page)		((page)->private)

 static inline void set_page_private(struct page *page, unsigned long private)
@@ -285,6 +291,16 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }

+static inline void *folio_private(struct folio *folio)
+{
+	return (void *)folio->page.private;
+}
+
+static inline void set_folio_private(struct folio *folio, void *v)
+{
+	folio->page.private = (unsigned long)v;
+}
+
 struct page_frag_cache {
 	void * va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8c844ba67785..6676210addf6 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -260,42 +260,52 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 }

 /**
- * attach_page_private - Attach private data to a page.
- * @page: Page to attach data to.
- * @data: Data to attach to page.
+ * attach_folio_private - Attach private data to a folio.
+ * @folio: Folio to attach data to.
+ * @data: Data to attach to folio.
  *
- * Attaching private data to a page increments the page's reference count.
- * The data must be detached before the page will be freed.
+ * Attaching private data to a folio increments the page's reference count.
+ * The data must be detached before the folio will be freed.
  */
-static inline void attach_page_private(struct page *page, void *data)
+static inline void attach_folio_private(struct folio *folio, void *data)
 {
-	get_page(page);
-	set_page_private(page, (unsigned long)data);
-	SetPagePrivate(page);
+	get_folio(folio);
+	set_folio_private(folio, data);
+	SetFolioPrivate(folio);
 }

 /**
- * detach_page_private - Detach private data from a page.
- * @page: Page to detach data from.
+ * detach_folio_private - Detach private data from a folio.
+ * @folio: Folio to detach data from.
  *
- * Removes the data that was previously attached to the page and decrements
+ * Removes the data that was previously attached to the folio and decrements
  * the refcount on the page.
  *
- * Return: Data that was attached to the page.
+ * Return: Data that was attached to the folio.
 */
-static inline void *detach_page_private(struct page *page)
+static inline void *detach_folio_private(struct folio *folio)
 {
-	void *data = (void *)page_private(page);
+	void *data = folio_private(folio);

-	if (!PagePrivate(page))
+	if (!FolioPrivate(folio))
 		return NULL;
-	ClearPagePrivate(page);
-	set_page_private(page, 0);
-	put_page(page);
+	ClearFolioPrivate(folio);
+	set_folio_private(folio, NULL);
+	put_folio(folio);

 	return data;
 }

+static inline void attach_page_private(struct page *page, void *data)
+{
+	attach_folio_private(page_folio(page), data);
+}
+
+static inline void *detach_page_private(struct page *page)
+{
+	return detach_folio_private(page_folio(page));
+}
+
 #ifdef CONFIG_NUMA
 extern struct page *__page_cache_alloc(gfp_t gfp);
 #else
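A usage sketch for the new API (illustration only, not part of the patch;
the myfs_* names and structure are invented):

	struct myfs_head {
		unsigned long state;	/* whatever the filesystem tracks per folio */
	};

	static int myfs_attach_head(struct folio *folio)
	{
		struct myfs_head *head = kzalloc(sizeof(*head), GFP_KERNEL);

		if (!head)
			return -ENOMEM;
		/* attach_folio_private() takes a folio reference for us */
		attach_folio_private(folio, head);
		return 0;
	}

	static void myfs_release_head(struct folio *folio)
	{
		/* Returns NULL if no private data was attached */
		struct myfs_head *head = detach_folio_private(folio);

		kfree(head);	/* the folio reference has already been dropped */
	}

Note that the pointer is stored per folio, so a tail page that needs
ancillary data must still use set_page_private() directly, per the rules
above.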
From patchwork Sat Mar 20 05:40:49 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 12/27] mm: Add folio_index, folio_file_page and folio_contains
Date: Sat, 20 Mar 2021 05:40:49 +0000
Message-Id: <20210320054104.1300774-13-willy@infradead.org>

folio_index() is the equivalent of page_index() for folios.
folio_file_page() is the equivalent of find_subpage().
folio_contains() is the equivalent of thp_contains().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 53 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 6676210addf6..f29c96ed3721 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -462,6 +462,59 @@ static inline bool thp_contains(struct page *head, pgoff_t index)
 	return page_index(head) == (index & ~(thp_nr_pages(head) - 1UL));
 }

+#define swapcache_index(folio)	__page_file_index(&(folio)->page)
+
+/**
+ * folio_index - File index of a folio.
+ * @folio: The folio.
+ *
+ * For a folio which is either in the page cache or the swap cache,
+ * return its index within the address_space it belongs to.  If you know
+ * the page is definitely in the page cache, you can look at the folio's
+ * index directly.
+ *
+ * Return: The index (offset in units of pages) of a folio in its file.
+ */
+static inline pgoff_t folio_index(struct folio *folio)
+{
+	if (unlikely(FolioSwapCache(folio)))
+		return swapcache_index(folio);
+	return folio->page.index;
+}
+
+/**
+ * folio_file_page - The page for a particular index.
+ * @folio: The folio which contains this index.
+ * @index: The index we want to look up.
+ *
+ * Sometimes after looking up a folio in the page cache, we need to
+ * obtain the specific page for an index (eg a page fault).
+ *
+ * Return: The page containing the file data for this index.
+ */
+static inline struct page *folio_file_page(struct folio *folio, pgoff_t index)
+{
+	return &folio->page + (index & (folio_nr_pages(folio) - 1));
+}
+
+/**
+ * folio_contains - Does this folio contain this index?
+ * @folio: The folio.
+ * @index: The page index within the file.
+ *
+ * Context: The caller should have the page locked in order to prevent
+ * (eg) shmem from moving the page between the page cache and swap cache
+ * and changing its index in the middle of the operation.
+ * Return: true or false.
+ */
+static inline bool folio_contains(struct folio *folio, pgoff_t index)
+{
+	/* HugeTLBfs indexes the page cache in units of hpage_size */
+	if (PageHuge(&folio->page))
+		return folio->page.index == index;
+	return index - folio_index(folio) < folio_nr_pages(folio);
+}
+
 /*
  * Given the page we found in the page cache, return the page corresponding
  * to this index in the file
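A sketch of how the three helpers combine (illustration only; assumes a
locked folio obtained from the page cache, and the myfs_ name is invented):

	static struct page *myfs_page_for_index(struct folio *folio, pgoff_t index)
	{
		/* A multi-page folio covers a range of indices; check ours */
		if (!folio_contains(folio, index))
			return NULL;

		/* folio_index() transparently handles the swap cache case */
		VM_BUG_ON_FOLIO(folio_index(folio) > index, folio);

		/* Return the exact page within the folio for this index */
		return folio_file_page(folio, index);
	}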
From patchwork Sat Mar 20 05:40:50 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 13/27] mm/util: Add folio_mapping and folio_file_mapping
Date: Sat, 20 Mar 2021 05:40:50 +0000
Message-Id: <20210320054104.1300774-14-willy@infradead.org>

These are the folio equivalent of page_mapping() and page_file_mapping().
Add an out-of-line page_mapping() wrapper around folio_mapping() in
order to prevent the page_folio() call from bloating every caller of
page_mapping().  Adjust page_file_mapping() and page_mapping_file() to
use folios internally.  Rename __page_file_mapping() to
swapcache_mapping() and change it to take a folio.

This ends up saving 186 bytes of text overall.  folio_mapping() is
45 bytes shorter than page_mapping() was, but the new page_mapping()
wrapper is 30 bytes.  The major reduction is a few bytes less in dozens
of nfs functions (which call page_file_mapping()).  Most of these appear
to be a slight change in gcc's register allocation decisions, which allow:

   48 8b 56 08         mov    0x8(%rsi),%rdx
   48 8d 42 ff         lea    -0x1(%rdx),%rax
   83 e2 01            and    $0x1,%edx
   48 0f 44 c6         cmove  %rsi,%rax

to become:

   48 8b 46 08         mov    0x8(%rsi),%rax
   48 8d 78 ff         lea    -0x1(%rax),%rdi
   a8 01               test   $0x1,%al
   48 0f 44 fe         cmove  %rsi,%rdi

for a reduction of a single byte.  Once the NFS client is converted to
use folios, this entire sequence will disappear.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h      | 14 --------------
 include/linux/pagemap.h | 35 +++++++++++++++++++++++++++++++++--
 include/linux/swap.h    |  6 ++++++
 mm/Makefile             |  2 +-
 mm/folio-compat.c       | 13 +++++++++++++
 mm/swapfile.c           |  8 ++++----
 mm/util.c               | 30 ++++++++++++++++++------------
 7 files changed, 75 insertions(+), 33 deletions(-)
 create mode 100644 mm/folio-compat.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8fc7b04a1438..bc626c19f9f8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1732,19 +1732,6 @@ void page_address_init(void);
 extern void *page_rmapping(struct page *page);
 extern struct anon_vma *page_anon_vma(struct page *page);
-extern struct address_space *page_mapping(struct page *page);
-
-extern struct address_space *__page_file_mapping(struct page *);
-
-static inline
-struct address_space *page_file_mapping(struct page *page)
-{
-	if (unlikely(PageSwapCache(page)))
-		return __page_file_mapping(page);
-
-	return page->mapping;
-}
-
 extern pgoff_t __page_file_index(struct page *page);

 /*
@@ -1759,7 +1746,6 @@ static inline pgoff_t page_index(struct page *page)
 }

 bool page_mapped(struct page *page);
-struct address_space *page_mapping(struct page *page);

 /*
  * Return true only if the page has been allocated with
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index f29c96ed3721..90e970f48039 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -162,14 +162,45 @@ static inline void filemap_nr_thps_dec(struct address_space *mapping)

 void release_pages(struct page **pages, int nr);

+struct address_space *page_mapping(struct page *);
+struct address_space *folio_mapping(struct folio *);
+struct address_space *swapcache_mapping(struct folio *);
+
+/**
+ * folio_file_mapping - Find the mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Folios in the swap cache return the mapping of the
+ * swap file or swap device where the data is stored.  This is different
+ * from the mapping returned by folio_mapping().  The only reason to
+ * use it is if, like NFS, you return 0 from ->activate_swapfile.
+ *
+ * Do not call this for folios which aren't in the page cache or swap cache.
+ */
+static inline struct address_space *folio_file_mapping(struct folio *folio)
+{
+	if (unlikely(FolioSwapCache(folio)))
+		return swapcache_mapping(folio);
+
+	return folio->page.mapping;
+}
+
+static inline struct address_space *page_file_mapping(struct page *page)
+{
+	return folio_file_mapping(page_folio(page));
+}
+
 /*
  * For file cache pages, return the address_space, otherwise return NULL
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	if (unlikely(PageSwapCache(page)))
+	struct folio *folio = page_folio(page);
+
+	if (unlikely(FolioSwapCache(folio)))
 		return NULL;
-	return page_mapping(page);
+	return folio_mapping(folio);
 }

 /*
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4c3a844ac9b4..09316a5c33e9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -314,6 +314,12 @@ struct vma_swap_readahead {
 #endif
 };

+static inline swp_entry_t folio_swap_entry(struct folio *folio)
+{
+	swp_entry_t entry = { .val = page_private(&folio->page) };
+	return entry;
+}
+
 /* linux/mm/workingset.c */
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg);
diff --git a/mm/Makefile b/mm/Makefile
index 788c5ce5c0ef..9d6a7e8b5a3c 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -46,7 +46,7 @@ mmu-$(CONFIG_MMU)	+= process_vm_access.o
 endif

 obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
-			   maccess.o page-writeback.o \
+			   maccess.o page-writeback.o folio-compat.o \
			   readahead.o swap.o truncate.o vmscan.o shmem.o \
			   util.o mmzone.o vmstat.o backing-dev.o \
			   mm_init.o percpu.o slab_common.o \
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
new file mode 100644
index 000000000000..5e107aa30a62
--- /dev/null
+++ b/mm/folio-compat.c
@@ -0,0 +1,13 @@
+/*
+ * Compatibility functions which bloat the callers too much to make inline.
+ * All of the callers of these functions should be converted to use folios
+ * eventually.
+ */
+
+#include <linux/pagemap.h>
+
+struct address_space *page_mapping(struct page *page)
+{
+	return folio_mapping(page_folio(page));
+}
+EXPORT_SYMBOL(page_mapping);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 149e77454e3c..d0ee24239a83 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3533,13 +3533,13 @@ struct swap_info_struct *page_swap_info(struct page *page)
 }

 /*
- * out-of-line __page_file_ methods to avoid include hell.
+ * out-of-line methods to avoid include hell.
 */
-struct address_space *__page_file_mapping(struct page *page)
+struct address_space *swapcache_mapping(struct folio *folio)
 {
-	return page_swap_info(page)->swap_file->f_mapping;
+	return page_swap_info(&folio->page)->swap_file->f_mapping;
 }
-EXPORT_SYMBOL_GPL(__page_file_mapping);
+EXPORT_SYMBOL_GPL(swapcache_mapping);

 pgoff_t __page_file_index(struct page *page)
 {
diff --git a/mm/util.c b/mm/util.c
index 0b6dd9d81da7..c4ed5b919c7d 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -686,30 +686,36 @@ struct anon_vma *page_anon_vma(struct page *page)
 	return __page_rmapping(page);
 }

-struct address_space *page_mapping(struct page *page)
+/**
+ * folio_mapping - Find the mapping where this folio is stored.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Folios in the swap cache return the swap mapping
+ * this page is stored in (which is different from the mapping for the
+ * swap file or swap device where the data is stored).
+ *
+ * You can call this for folios which aren't in the swap cache or page
+ * cache and it will return NULL.
+ */
+struct address_space *folio_mapping(struct folio *folio)
 {
 	struct address_space *mapping;

-	page = compound_head(page);
-
 	/* This happens if someone calls flush_dcache_page on slab page */
-	if (unlikely(PageSlab(page)))
+	if (unlikely(FolioSlab(folio)))
 		return NULL;

-	if (unlikely(PageSwapCache(page))) {
-		swp_entry_t entry;
-
-		entry.val = page_private(page);
-		return swap_address_space(entry);
-	}
+	if (unlikely(FolioSwapCache(folio)))
+		return swap_address_space(folio_swap_entry(folio));

-	mapping = page->mapping;
+	mapping = folio->page.mapping;
 	if ((unsigned long)mapping & PAGE_MAPPING_ANON)
 		return NULL;

 	return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
 }
-EXPORT_SYMBOL(page_mapping);
+EXPORT_SYMBOL(folio_mapping);

 /* Slow path of page_mapcount() for compound pages */
 int __page_mapcount(struct page *page)
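To illustrate the difference between the two lookups (sketch only; the
myfs_ helper is invented):

	static void myfs_inspect_mappings(struct folio *folio)
	{
		/* Swap mapping for swap-cache folios, else the file's mapping */
		struct address_space *cache = folio_mapping(folio);
		/* Mapping of the backing file, even for swap-cache folios */
		struct address_space *host = folio_file_mapping(folio);

		if (cache != host)
			pr_debug("folio is in the swap cache\n");
	}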
From patchwork Sat Mar 20 05:40:51 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 14/27] mm/memcg: Add folio wrappers for various functions
Date: Sat, 20 Mar 2021 05:40:51 +0000
Message-Id: <20210320054104.1300774-15-willy@infradead.org>

Add new wrapper functions folio_memcg(), lock_folio_memcg(),
unlock_folio_memcg() and mem_cgroup_folio_lruvec().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/memcontrol.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4064c9dda534..493136f495b6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -397,6 +397,11 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }

+static inline struct mem_cgroup *folio_memcg(struct folio *folio)
+{
+	return page_memcg(&folio->page);
+}
+
 /*
  * page_memcg_rcu - locklessly get the memory cgroup associated with a page
  * @page: a pointer to the page struct
@@ -1400,6 +1405,22 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 }
 #endif /* CONFIG_MEMCG */

+static inline void lock_folio_memcg(struct folio *folio)
+{
+	lock_page_memcg(&folio->page);
+}
+
+static inline void unlock_folio_memcg(struct folio *folio)
+{
+	unlock_page_memcg(&folio->page);
+}
+
+static inline struct lruvec *mem_cgroup_folio_lruvec(struct folio *folio,
+						struct pglist_data *pgdat)
+{
+	return mem_cgroup_page_lruvec(&folio->page, pgdat);
+}
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
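As a sketch of the intended calling convention (illustration only;
assumes the usual lock_page_memcg() rules carry over to the folio
wrappers, and the myfs_ name is invented):

	static void myfs_note_memcg(struct folio *folio)
	{
		/* Stabilise the folio's memcg binding across the inspection */
		lock_folio_memcg(folio);
		if (folio_memcg(folio))
			pr_debug("folio is charged to a memcg\n");
		unlock_folio_memcg(folio);
	}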
From patchwork Sat Mar 20 05:40:52 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 15/27] mm/filemap: Add unlock_folio
Date: Sat, 20 Mar 2021 05:40:52 +0000
Message-Id: <20210320054104.1300774-16-willy@infradead.org>

Convert unlock_page() to call unlock_folio().  By using a folio we avoid
a call to compound_head().  This shortens the function from 39 bytes to
25 and removes 4 instructions on x86-64.  Because we still have
unlock_page(), it's a net increase of 24 bytes of text for the kernel as
a whole, but any path that uses unlock_folio() will execute 4 fewer
instructions.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h |  3 ++-
 mm/filemap.c            | 27 ++++++++++-----------------
 mm/folio-compat.c       |  6 ++++++
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 90e970f48039..c211868086e0 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -698,7 +698,8 @@ extern int __lock_page_killable(struct page *page);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
-extern void unlock_page(struct page *page);
+void unlock_page(struct page *page);
+void unlock_folio(struct folio *folio);
 void unlock_page_private_2(struct page *page);

 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index eeeb8e2cc36a..47ac8126a12e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1435,29 +1435,22 @@ static inline bool clear_bit_unlock_is_negative_byte(long nr, volatile void *mem
 #endif

 /**
- * unlock_page - unlock a locked page
- * @page: the page
+ * unlock_folio - Unlock a locked folio.
+ * @folio: The folio.
 *
- * Unlocks the page and wakes up sleepers in wait_on_page_locked().
- * Also wakes sleepers in wait_on_page_writeback() because the wakeup
- * mechanism between PageLocked pages and PageWriteback pages is shared.
- * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
+ * Unlocks the folio and wakes up any thread sleeping on the page lock.
 *
- * Note that this depends on PG_waiters being the sign bit in the byte
- * that contains PG_locked - thus the BUILD_BUG_ON(). That allows us to
- * clear the PG_locked bit and test PG_waiters at the same time fairly
- * portably (architectures that do LL/SC can test any bit, while x86 can
- * test the sign bit).
+ * Context: May be called from interrupt or process context.  May not be
+ * called from NMI context.
 */
-void unlock_page(struct page *page)
+void unlock_folio(struct folio *folio)
 {
 	BUILD_BUG_ON(PG_waiters != 7);
-	page = compound_head(page);
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags))
-		wake_up_page_bit(page, PG_locked);
+	VM_BUG_ON_FOLIO(!FolioLocked(folio), folio);
+	if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
+		wake_up_page_bit(&folio->page, PG_locked);
 }
-EXPORT_SYMBOL(unlock_page);
+EXPORT_SYMBOL(unlock_folio);

 /**
 * unlock_page_private_2 - Unlock a page that's locked with PG_private_2
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 5e107aa30a62..02798abf19a1 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -11,3 +11,9 @@ struct address_space *page_mapping(struct page *page)
 	return folio_mapping(page_folio(page));
 }
 EXPORT_SYMBOL(page_mapping);
+
+void unlock_page(struct page *page)
+{
+	return unlock_folio(page_folio(page));
+}
+EXPORT_SYMBOL(unlock_page);
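A sketch of the usual read-completion pairing (illustration only; the
myfs_ name is invented):

	static void myfs_read_done(struct folio *folio, int err)
	{
		if (!err)
			SetFolioUptodate(folio);
		/* Clears PG_locked and wakes anybody sleeping on the folio lock */
		unlock_folio(folio);
	}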
From patchwork Sat Mar 20 05:40:53 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 16/27] mm/filemap: Add lock_folio
Date: Sat, 20 Mar 2021 05:40:53 +0000
Message-Id: <20210320054104.1300774-17-willy@infradead.org>

This is like lock_page() but for use by callers who know they have a
folio.  Convert __lock_page() to be __lock_folio().  This saves one
call to compound_head() per contended call to lock_page().

Saves 362 bytes of text; mostly from improved register allocation and
inlining decisions.  __lock_folio is 59 bytes while __lock_page was 79.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 24 +++++++++++++++++++-----
 mm/filemap.c            | 29 +++++++++++++++--------------
 2 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c211868086e0..c96ba0dfe111 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -693,7 +693,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 	return true;
 }

-extern void __lock_page(struct page *page);
+void __lock_folio(struct folio *folio);
 extern int __lock_page_killable(struct page *page);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
@@ -702,13 +702,24 @@ void unlock_page(struct page *page);
 void unlock_folio(struct folio *folio);
 void unlock_page_private_2(struct page *page);

+static inline bool trylock_folio(struct folio *folio)
+{
+	return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)));
+}
+
 /*
 * Return true if the page was successfully locked
 */
 static inline int trylock_page(struct page *page)
 {
-	page = compound_head(page);
-	return (likely(!test_and_set_bit_lock(PG_locked, &page->flags)));
+	return trylock_folio(page_folio(page));
+}
+
+static inline void lock_folio(struct folio *folio)
+{
+	might_sleep();
+	if (!trylock_folio(folio))
+		__lock_folio(folio);
 }

 /*
@@ -716,9 +727,12 @@ static inline int trylock_page(struct page *page)
 */
 static inline void lock_page(struct page *page)
 {
+	struct folio *folio;
 	might_sleep();
-	if (!trylock_page(page))
-		__lock_page(page);
+
+	folio = page_folio(page);
+	if (!trylock_folio(folio))
+		__lock_folio(folio);
 }

 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 47ac8126a12e..99c05e2c0eea 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1187,7 +1187,7 @@ static void wake_up_page(struct page *page, int bit)
 */
 enum behavior {
 	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
-			 * __lock_page() waiting on then setting PG_locked.
+			 * __lock_folio() waiting on then setting PG_locked.
			 */
 	SHARED,		/* Hold ref to page and check the bit when woken, like
			 * wait_on_page_writeback() waiting on PG_writeback.
@@ -1535,17 +1535,16 @@ void page_endio(struct page *page, bool is_write, int err)
 EXPORT_SYMBOL_GPL(page_endio);

 /**
- * __lock_page - get a lock on the page, assuming we need to sleep to get it
- * @__page: the page to lock
+ * __lock_folio - Get a lock on the folio, assuming we need to sleep to get it.
+ * @folio: The folio to lock
 */
-void __lock_page(struct page *__page)
+void __lock_folio(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE,
 				EXCLUSIVE);
 }
-EXPORT_SYMBOL(__lock_page);
+EXPORT_SYMBOL(__lock_folio);

 int __lock_page_killable(struct page *__page)
 {
@@ -1620,10 +1619,10 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			return 0;
 		}
 	} else {
-		__lock_page(page);
+		__lock_folio(page_folio(page));
 	}
-	return 1;

+	return 1;
 }

 /**
@@ -2767,7 +2766,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
				     struct file **fpin)
 {
-	if (trylock_page(page))
+	struct folio *folio = page_folio(page);
+
+	if (trylock_folio(folio))
 		return 1;

 	/*
@@ -2780,7 +2781,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(page)) {
+		if (__lock_page_killable(&folio->page)) {
			/*
			 * We didn't have the right flags to drop the mmap_lock,
			 * but all fault_handlers only check for fatal signals
@@ -2792,11 +2793,11 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
			return 0;
		}
	} else
-		__lock_page(page);
+		__lock_folio(folio);
+
 	return 1;
 }

-
 /*
 * Synchronous readahead happens when we don't even find a page in the page
 * cache at all. We don't want to perform IO under the mmap sem, so if we have
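A sketch of a caller that already has the folio (illustration only; the
myfs_ name is invented):

	static void myfs_touch_folio(struct folio *folio)
	{
		/* Inline trylock fast path; sleeps in __lock_folio() if contended */
		lock_folio(folio);

		/* ... folio is locked: its mapping and uptodate state are stable ... */

		unlock_folio(folio);
	}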
From patchwork Sat Mar 20 05:40:54 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 17/27] mm/filemap: Add lock_folio_killable
Date: Sat, 20 Mar 2021 05:40:54 +0000
Message-Id: <20210320054104.1300774-18-willy@infradead.org>

This is like lock_page_killable() but for use by callers who know they
have a folio.  Convert __lock_page_killable() to be
__lock_folio_killable().  This saves one call to compound_head() per
contended call to lock_page_killable().

__lock_folio_killable() is 20 bytes smaller than __lock_page_killable()
was.  lock_page_maybe_drop_mmap() shrinks by 68 bytes and
__lock_page_or_retry() shrinks by 66 bytes.  That's a total of 154 bytes
of text saved.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 15 ++++++++++-----
 mm/filemap.c            | 17 +++++++++--------
 2 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c96ba0dfe111..aa7f564e5ecf 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -694,7 +694,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 }

 void __lock_folio(struct folio *folio);
-extern int __lock_page_killable(struct page *page);
+int __lock_folio_killable(struct folio *folio);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
@@ -735,6 +735,14 @@ static inline void lock_page(struct page *page)
 		__lock_folio(folio);
 }

+static inline int lock_folio_killable(struct folio *folio)
+{
+	might_sleep();
+	if (!trylock_folio(folio))
+		return __lock_folio_killable(folio);
+	return 0;
+}
+
 /*
 * lock_page_killable is like lock_page but can be interrupted by fatal
 * signals.  It returns 0 if it locked the page and -EINTR if it was
@@ -742,10 +750,7 @@ static inline void lock_page(struct page *page)
 */
 static inline int lock_page_killable(struct page *page)
 {
-	might_sleep();
-	if (!trylock_page(page))
-		return __lock_page_killable(page);
-	return 0;
+	return lock_folio_killable(page_folio(page));
 }

 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 99c05e2c0eea..7cac47db78a5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1546,14 +1546,13 @@ void __lock_folio(struct folio *folio)
 }
 EXPORT_SYMBOL(__lock_folio);

-int __lock_page_killable(struct page *__page)
+int __lock_folio_killable(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE,
					EXCLUSIVE);
 }
-EXPORT_SYMBOL_GPL(__lock_page_killable);
+EXPORT_SYMBOL_GPL(__lock_folio_killable);

 int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 {
@@ -1595,6 +1594,8 @@ int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
			 unsigned int flags)
 {
+	struct folio *folio = page_folio(page);
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
@@ -1613,13 +1614,13 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 	if (flags & FAULT_FLAG_KILLABLE) {
 		int ret;

-		ret = __lock_page_killable(page);
+		ret = __lock_folio_killable(folio);
 		if (ret) {
			mmap_read_unlock(mm);
			return 0;
		}
	} else {
-		__lock_folio(page_folio(page));
+		__lock_folio(folio);
	}

	return 1;
@@ -2781,7 +2782,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(&folio->page)) {
+		if (__lock_folio_killable(folio)) {
			/*
			 * We didn't have the right flags to drop the mmap_lock,
			 * but all fault_handlers only check for fatal signals
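A sketch of a killable-lock caller (illustration only; the myfs_ name is
invented):

	static int myfs_lock_for_user_io(struct folio *folio)
	{
		int err = lock_folio_killable(folio);

		if (err)
			return err;	/* -EINTR: a fatal signal arrived while waiting */

		/* ... do the work under the folio lock ... */

		unlock_folio(folio);
		return 0;
	}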
From patchwork Sat Mar 20 05:40:55 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 18/27] mm/filemap: Add __lock_folio_async
Date: Sat, 20 Mar 2021 05:40:55 +0000
Message-Id: <20210320054104.1300774-19-willy@infradead.org>

There aren't any actual callers of lock_page_async(), so remove it.
Convert filemap_update_page() to call __lock_folio_async().

__lock_folio_async() is 21 bytes smaller than __lock_page_async(), but
the real savings come from using a folio in filemap_update_page(),
shrinking it from 514 bytes to 403 bytes, saving 111 bytes.  The text
shrinks by 132 bytes in total.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/io_uring.c           |  2 +-
 include/linux/pagemap.h | 17 -----------------
 mm/filemap.c            | 31 ++++++++++++++++---------------
 3 files changed, 17 insertions(+), 33 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b882bc4c5af7..ad0dc9afd194 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3201,7 +3201,7 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 }

 /*
- * This is our waitqueue callback handler, registered through lock_page_async()
+ * This is our waitqueue callback handler, registered through lock_folio_async()
 * when we initially tried to do the IO with the iocb armed our waitqueue.
 * This gets called when the page is unlocked, and we generally expect that to
 * happen when the page IO is completed and the page is now uptodate. This will
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index aa7f564e5ecf..3cd1b5e28593 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -695,7 +695,6 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,

 void __lock_folio(struct folio *folio);
 int __lock_folio_killable(struct folio *folio);
-extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
 void unlock_page(struct page *page);
@@ -753,22 +752,6 @@ static inline int lock_page_killable(struct page *page)
 	return lock_folio_killable(page_folio(page));
 }

-/*
- * lock_page_async - Lock the page, unless this would block. If the page
- * is already locked, then queue a callback when the page becomes unlocked.
- * This callback can then retry the operation.
- *
- * Returns 0 if the page is locked successfully, or -EIOCBQUEUED if the page
- * was already locked and the callback defined in 'wait' was queued.
- */
-static inline int lock_page_async(struct page *page,
-				  struct wait_page_queue *wait)
-{
-	if (!trylock_page(page))
-		return __lock_page_async(page, wait);
-	return 0;
-}
-
 /*
 * lock_page_or_retry - Lock the page, unless this would block and the
 * caller indicated that it can handle a retry.
diff --git a/mm/filemap.c b/mm/filemap.c
index 7cac47db78a5..12dc672adc2e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1554,18 +1554,18 @@ int __lock_folio_killable(struct folio *folio)
 }
 EXPORT_SYMBOL_GPL(__lock_folio_killable);

-int __lock_page_async(struct page *page, struct wait_page_queue *wait)
+static int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait)
 {
-	struct wait_queue_head *q = page_waitqueue(page);
+	struct wait_queue_head *q = page_waitqueue(&folio->page);
 	int ret = 0;

-	wait->page = page;
+	wait->page = &folio->page;
 	wait->bit_nr = PG_locked;

 	spin_lock_irq(&q->lock);
 	__add_wait_queue_entry_tail(q, &wait->wait);
-	SetPageWaiters(page);
-	ret = !trylock_page(page);
+	SetFolioWaiters(folio);
+	ret = !trylock_folio(folio);
 	/*
 	 * If we were successful now, we know we're still on the
 	 * waitqueue as we're still under the lock. This means it's
@@ -2312,41 +2312,42 @@ static int filemap_update_page(struct kiocb *iocb,
		struct address_space *mapping, struct iov_iter *iter,
		struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	int error;

-	if (!trylock_page(page)) {
+	if (!trylock_folio(folio)) {
 		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO))
			return -EAGAIN;
 		if (!(iocb->ki_flags & IOCB_WAITQ)) {
-			put_and_wait_on_page_locked(page, TASK_KILLABLE);
+			put_and_wait_on_page_locked(&folio->page, TASK_KILLABLE);
			return AOP_TRUNCATED_PAGE;
		}
-		error = __lock_page_async(page, iocb->ki_waitq);
+		error = __lock_folio_async(folio, iocb->ki_waitq);
 		if (error)
			return error;
 	}

-	if (!page->mapping)
+	if (!folio->page.mapping)
 		goto truncated;

 	error = 0;
-	if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, page))
+	if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, &folio->page))
 		goto unlock;

 	error = -EAGAIN;
 	if (iocb->ki_flags & (IOCB_NOIO | IOCB_NOWAIT | IOCB_WAITQ))
 		goto unlock;

-	error = filemap_read_page(iocb->ki_filp, mapping, page);
+	error = filemap_read_page(iocb->ki_filp, mapping, &folio->page);
 	if (error == AOP_TRUNCATED_PAGE)
-		put_page(page);
+		put_folio(folio);
 	return error;
 truncated:
-	unlock_page(page);
-	put_page(page);
+	unlock_folio(folio);
+	put_folio(folio);
 	return AOP_TRUNCATED_PAGE;
 unlock:
-	unlock_page(page);
+	unlock_folio(folio);
 	return error;
 }
From patchwork Sat Mar 20 05:40:56 2021
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH v5 19/27] mm/filemap: Add __lock_folio_or_retry
Date: Sat, 20 Mar 2021 05:40:56 +0000
Message-Id: <20210320054104.1300774-20-willy@infradead.org>

Convert __lock_page_or_retry() to __lock_folio_or_retry().  This
actually saves 4 bytes in the only caller of lock_page_or_retry()
(due to better register allocation) and saves the 20 byte cost of
calling page_folio() in __lock_folio_or_retry() for a total saving
of 24 bytes.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h |  9 ++++++---
 mm/filemap.c            | 10 ++++------
 mm/memory.c             |  8 ++++----
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 3cd1b5e28593..38f4ee28a3a5 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -695,7 +695,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,

 void __lock_folio(struct folio *folio);
 int __lock_folio_killable(struct folio *folio);
-extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
+int __lock_folio_or_retry(struct folio *folio, struct mm_struct *mm,
 				unsigned int flags);
 void unlock_page(struct page *page);
 void unlock_folio(struct folio *folio);
@@ -757,13 +757,16 @@ static inline int lock_page_killable(struct page *page)
 * caller indicated that it can handle a retry.
 *
 * Return value and mmap_lock implications depend on flags; see
- * __lock_page_or_retry().
+ * __lock_folio_or_retry().
*/ static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags) { + struct folio *folio; might_sleep(); - return trylock_page(page) || __lock_page_or_retry(page, mm, flags); + + folio = page_folio(page); + return trylock_folio(folio) || __lock_folio_or_retry(folio, mm, flags); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 12dc672adc2e..35e16db2e2be 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1582,20 +1582,18 @@ static int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait) /* * Return values: - * 1 - page is locked; mmap_lock is still held. - * 0 - page is not locked. + * 1 - folio is locked; mmap_lock is still held. + * 0 - folio is not locked. * mmap_lock has been released (mmap_read_unlock(), unless flags had both * FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in * which case mmap_lock is still held. * * If neither ALLOW_RETRY nor KILLABLE are set, will always return 1 - * with the page locked and the mmap_lock unperturbed. + * with the folio locked and the mmap_lock unperturbed. */ -int __lock_page_or_retry(struct page *page, struct mm_struct *mm, +int __lock_folio_or_retry(struct folio *folio, struct mm_struct *mm, unsigned int flags) { - struct folio *folio = page_folio(page); - if (fault_flag_allow_retry_first(flags)) { /* * CAUTION! In this case, mmap_lock is not released diff --git a/mm/memory.c b/mm/memory.c index d3273bd69dbb..9c3554972e2d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4056,7 +4056,7 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf) * We enter with non-exclusive mmap_lock (to exclude vma changes, * but allow concurrent faults). * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __lock_folio_or_retry(). * If mmap_lock is released, vma may become invalid (for example * by other thread calling munmap()). */ @@ -4288,7 +4288,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) * concurrent faults). * * The mmap_lock may have been released depending on flags and our return value. - * See filemap_fault() and __lock_page_or_retry(). + * See filemap_fault() and __lock_folio_or_retry(). */ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) { @@ -4392,7 +4392,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) * By the time we get here, we already hold the mm semaphore * * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __lock_folio_or_retry(). */ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags) @@ -4548,7 +4548,7 @@ static inline void mm_account_fault(struct pt_regs *regs, * By the time we get here, we already hold the mm semaphore * * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __lock_folio_or_retry(). 
*/ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct pt_regs *regs) From patchwork Sat Mar 20 05:40:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152287 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02DE0C433E0 for ; Sat, 20 Mar 2021 05:45:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E07D76199E for ; Sat, 20 Mar 2021 05:45:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230196AbhCTFoJ (ORCPT ); Sat, 20 Mar 2021 01:44:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54274 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230145AbhCTFnn (ORCPT ); Sat, 20 Mar 2021 01:43:43 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4A36C061762; Fri, 19 Mar 2021 22:43:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MGeBPoR/tv5xpwOSIW2JVU9MNvSKrQK5MirrXuh+2jU=; b=EGzQIfwZFtmuVcJ8+b8T0Gzts2 vNfmRm/XvfK3p4jOSRzdNu9bZgr1x6aKV61QuaFLr9fzeW3z43T5kr12li9ie7RqE12mpmacsid8S JyYpfY20Ls0G/GlipJBBIgYBXegIbuodJTOZTDg/GsglHN3auXqIs3ad/6CBwkRcodtrVvZkaVxA8 p0qoNsWYz9lrOv9RVKFa3Rpn5lDiQ53NZUwNxZs7zb6Yu/JcCpIGtMu4oNMpbGjLvzlZl1JvQfBDO NeUYJueg7sfzDn/kb6ZSDvx0rfYr4Q3zx+oxE+9RUu0iV5OjmJQRea57ugbwDwb+dMWbl+YAwIztZ xwyYkZsA==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUOT-005Sen-SX; Sat, 20 Mar 2021 05:43:26 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 20/27] mm/filemap: Add wait_on_folio_locked Date: Sat, 20 Mar 2021 05:40:57 +0000 Message-Id: <20210320054104.1300774-21-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Also add wait_on_folio_locked_killable(). Turn wait_on_page_locked() and wait_on_page_locked_killable() into wrappers. This eliminates a call to compound_head() from each call-site, reducing text size by 200 bytes for me. 
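To illustrate why the wrapper wins, here is a minimal user-space sketch of the pattern (the types, the self-pointing head field and the busy-wait are stand-ins invented for illustration; none of this is the kernel's real implementation):

	struct page { int locked; struct page *head; };
	struct folio { struct page page; };

	/* Models page_folio(): the head lookup happens in exactly one place. */
	static struct folio *page_folio_sketch(struct page *page)
	{
		return (struct folio *)page->head;
	}

	/* All waiting is done on the folio; no head lookup is needed here. */
	static void wait_on_folio_locked_sketch(struct folio *folio)
	{
		while (folio->page.locked)
			;	/* stand-in for sleeping on PG_locked */
	}

	/* The page-based entry point becomes a thin wrapper: the cost of
	 * finding the head page is paid once here instead of being inlined
	 * into every call site, which is where the text saving comes from. */
	static void wait_on_page_locked_sketch(struct page *page)
	{
		wait_on_folio_locked_sketch(page_folio_sketch(page));
	}

	int main(void)
	{
		struct page head = { .locked = 0 };

		head.head = &head;	/* a head page points at itself */
		wait_on_page_locked_sketch(&head);
		return 0;
	}

Callers that already have a folio can call wait_on_folio_locked() directly and skip the lookup entirely.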
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 26 ++++++++++++++++++-------- mm/filemap.c | 4 ++-- 2 files changed, 20 insertions(+), 10 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 38f4ee28a3a5..a8e19e4e0b09 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -777,23 +777,33 @@ extern void wait_on_page_bit(struct page *page, int bit_nr); extern int wait_on_page_bit_killable(struct page *page, int bit_nr); /* - * Wait for a page to be unlocked. + * Wait for a folio to be unlocked. * - * This must be called with the caller "holding" the page, - * ie with increased "page->count" so that the page won't + * This must be called with the caller "holding" the folio, + * ie with increased "page->count" so that the folio won't * go away during the wait.. */ +static inline void wait_on_folio_locked(struct folio *folio) +{ + if (FolioLocked(folio)) + wait_on_page_bit(&folio->page, PG_locked); +} + +static inline int wait_on_folio_locked_killable(struct folio *folio) +{ + if (!FolioLocked(folio)) + return 0; + return wait_on_page_bit_killable(&folio->page, PG_locked); +} + static inline void wait_on_page_locked(struct page *page) { - if (PageLocked(page)) - wait_on_page_bit(compound_head(page), PG_locked); + wait_on_folio_locked(page_folio(page)); } static inline int wait_on_page_locked_killable(struct page *page) { - if (!PageLocked(page)) - return 0; - return wait_on_page_bit_killable(compound_head(page), PG_locked); + return wait_on_folio_locked_killable(page_folio(page)); } int put_and_wait_on_page_locked(struct page *page, int state); diff --git a/mm/filemap.c b/mm/filemap.c index 35e16db2e2be..99758045ec2d 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1604,9 +1604,9 @@ int __lock_folio_or_retry(struct folio *folio, struct mm_struct *mm, mmap_read_unlock(mm); if (flags & FAULT_FLAG_KILLABLE) - wait_on_page_locked_killable(page); + wait_on_folio_locked_killable(folio); else - wait_on_page_locked(page); + wait_on_folio_locked(folio); return 0; } if (flags & FAULT_FLAG_KILLABLE) { From patchwork Sat Mar 20 05:40:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152289 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23504C433EA for ; Sat, 20 Mar 2021 05:45:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0C62C61999 for ; Sat, 20 Mar 2021 05:45:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230205AbhCTFoK (ORCPT ); Sat, 20 Mar 2021 01:44:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230187AbhCTFnx (ORCPT ); Sat, 20 Mar 2021 01:43:53 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 830FAC061762; Fri, 19 Mar 2021 22:43:53 
-0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=mdk9ehoqihBuT1oolJ8pbojaRvEw95snsUBSZv8+qPM=; b=Kgnyt8iYymxgDiEKmqyHBxaZjt loC2gKeif3QUecjuepsKqk/h1wEuNXQNeIZXLuvl+RyfhK+1jk2snr2wtpq9EwZ8Iy9iz4yDHYwQl JeiicGRjn233zRg6LBgcqRhSwWxIUn9hB62ViHxWNrttcQUMN32J6Zwhl6qlQ4l3uO/tmYl9fz5BA RVy9QMS9mWhD/p8X3uut4QHKdEiSu575Sn0hQZBzZ0I441DNnnNcV7YbK4n8y14ApgrMfy9H5ixln 7zYWUlDiUqVl/9okCI8/hsi9CFZIsFQO86FpUa+d1h6LYEx90Xl2JuK5MQVJ/pbX+PZmbBAMrjICZ phlGINfg==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUOZ-005SfS-Ug; Sat, 20 Mar 2021 05:43:34 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 21/27] mm/filemap: Add end_folio_writeback Date: Sat, 20 Mar 2021 05:40:58 +0000 Message-Id: <20210320054104.1300774-22-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Add an end_page_writeback() wrapper function for users that are not yet converted to folios. end_folio_writeback() is less than half the size of end_page_writeback() at just 105 bytes compared to 213 bytes, due to removing all the compound_head() calls. The 30 byte wrapper function makes this a net saving of 70 bytes. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 3 ++- mm/filemap.c | 38 +++++++++++++++++++------------------- mm/folio-compat.c | 6 ++++++ 3 files changed, 27 insertions(+), 20 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a8e19e4e0b09..2ee6b1b9561c 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -809,7 +809,8 @@ static inline int wait_on_page_locked_killable(struct page *page) int put_and_wait_on_page_locked(struct page *page, int state); void wait_on_page_writeback(struct page *page); int wait_on_page_writeback_killable(struct page *page); -extern void end_page_writeback(struct page *page); +void end_page_writeback(struct page *page); +void end_folio_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); void page_endio(struct page *page, bool is_write, int err); diff --git a/mm/filemap.c b/mm/filemap.c index 99758045ec2d..dc7deb8c36ee 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1175,11 +1175,11 @@ static void wake_up_page_bit(struct page *page, int bit_nr) spin_unlock_irqrestore(&q->lock, flags); } -static void wake_up_page(struct page *page, int bit) +static void wake_up_folio(struct folio *folio, int bit) { - if (!PageWaiters(page)) + if (!FolioWaiters(folio)) return; - wake_up_page_bit(page, bit); + wake_up_page_bit(&folio->page, bit); } /* @@ -1473,38 +1473,38 @@ void unlock_page_private_2(struct page *page) EXPORT_SYMBOL(unlock_page_private_2); /** - * end_page_writeback - end writeback against a page - * @page: the page + * end_folio_writeback - End writeback against a folio. + * @folio: The folio. 
*/ -void end_page_writeback(struct page *page) +void end_folio_writeback(struct folio *folio) { /* * TestClearPageReclaim could be used here but it is an atomic * operation and overkill in this particular case. Failing to - * shuffle a page marked for immediate reclaim is too mild to + * shuffle a folio marked for immediate reclaim is too mild to * justify taking an atomic operation penalty at the end of - * ever page writeback. + * every folio writeback. */ - if (PageReclaim(page)) { - ClearPageReclaim(page); - rotate_reclaimable_page(page); + if (FolioReclaim(folio)) { + ClearFolioReclaim(folio); + rotate_reclaimable_page(&folio->page); } /* - * Writeback does not hold a page reference of its own, relying + * Writeback does not hold a folio reference of its own, relying * on truncation to wait for the clearing of PG_writeback. - * But here we must make sure that the page is not freed and - * reused before the wake_up_page(). + * But here we must make sure that the folio is not freed and + * reused before the wake_up_folio(). */ - get_page(page); - if (!test_clear_page_writeback(page)) + get_folio(folio); + if (!test_clear_page_writeback(&folio->page)) BUG(); smp_mb__after_atomic(); - wake_up_page(page, PG_writeback); - put_page(page); + wake_up_folio(folio, PG_writeback); + put_folio(folio); } -EXPORT_SYMBOL(end_page_writeback); +EXPORT_SYMBOL(end_folio_writeback); /* * After completing I/O on a page, call this routine to update the page diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 02798abf19a1..d1a1dfe52589 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -17,3 +17,9 @@ void unlock_page(struct page *page) return unlock_folio(page_folio(page)); } EXPORT_SYMBOL(unlock_page); + +void end_page_writeback(struct page *page) +{ + return end_folio_writeback(page_folio(page)); +} +EXPORT_SYMBOL(end_page_writeback); From patchwork Sat Mar 20 05:40:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152291 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3432BC433E9 for ; Sat, 20 Mar 2021 05:45:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1CCBA6199C for ; Sat, 20 Mar 2021 05:45:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230218AbhCTFoL (ORCPT ); Sat, 20 Mar 2021 01:44:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230192AbhCTFoG (ORCPT ); Sat, 20 Mar 2021 01:44:06 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 654F6C061762; Fri, 19 Mar 2021 22:44:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=fakHqo/hsd0J2JO7HEvfhjfuv+gkqjiek9RT8hdv1ww=; b=orTqE2KcP3Z/LhF6yW0EWgYFa7 F7MGLCGUI0rsuAn9L9EoYF3NqnPMicaENEVmwgeuEzBGAQB03ZVCUqStOLtco4VyhQ/vgK2aJM3UX 3agGU7jQFCoBIYbK5+M1FsfqjlppxW5qNDumBHxdxZaXaI/9+pqrOrQ1rQ/pDSQV1z8aDINQUL3mf 7XZDz0NgSi5Z7DQJDmrgODZwObNKqxdxJbVcMxnF5upF9BjLkx659F6Aq2nooNAHJ+PY2whsWoEEr TGWBiFP6SHHLgyyAv/sa+UF0pBFLIbWLHsimVE+Sdws5zrvUg5MkVId9Tz9aezTyq+59KLtwUdUBM w1G+bxNQ==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUOi-005Sgx-3Z; Sat, 20 Mar 2021 05:43:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 22/27] mm/writeback: Add wait_on_folio_writeback Date: Sat, 20 Mar 2021 05:40:59 +0000 Message-Id: <20210320054104.1300774-23-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org wait_on_page_writeback_killable() only has one caller, so convert it to call wait_on_folio_writeback_killable(). For the wait_on_page_writeback() callers, add a compatibility wrapper around wait_on_folio_writeback(). Turning PageWriteback() into FolioWriteback() eliminates a call to compound_head() which saves 8 bytes and 15 bytes in the two functions. That is more than offset by adding the wait_on_page_writeback compatibility wrapper for a net increase in text of 15 bytes. Signed-off-by: Matthew Wilcox (Oracle) --- fs/afs/write.c | 2 +- include/linux/pagemap.h | 3 ++- mm/folio-compat.c | 6 ++++++ mm/page-writeback.c | 43 +++++++++++++++++++++++++++-------------- 4 files changed, 38 insertions(+), 16 deletions(-) diff --git a/fs/afs/write.c b/fs/afs/write.c index 106a864b6a93..4b70b0e7fcfa 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -850,7 +850,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) return VM_FAULT_RETRY; #endif - if (wait_on_page_writeback_killable(page)) + if (wait_on_folio_writeback_killable(page_folio(page))) return VM_FAULT_RETRY; if (lock_page_killable(page) < 0) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 2ee6b1b9561c..a6adf69ea5c5 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -808,7 +808,8 @@ static inline int wait_on_page_locked_killable(struct page *page) int put_and_wait_on_page_locked(struct page *page, int state); void wait_on_page_writeback(struct page *page); -int wait_on_page_writeback_killable(struct page *page); +void wait_on_folio_writeback(struct folio *folio); +int wait_on_folio_writeback_killable(struct folio *folio); void end_page_writeback(struct page *page); void end_folio_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); diff --git a/mm/folio-compat.c b/mm/folio-compat.c index d1a1dfe52589..6aadecc39fba 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -23,3 +23,9 @@ void end_page_writeback(struct page *page) return end_folio_writeback(page_folio(page)); } EXPORT_SYMBOL(end_page_writeback); + +void wait_on_page_writeback(struct page *page) +{ + return wait_on_folio_writeback(page_folio(page)); +} +EXPORT_SYMBOL_GPL(wait_on_page_writeback); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 
5e761fb62800..a08e77abcf12 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2818,33 +2818,48 @@ int __test_set_page_writeback(struct page *page, bool keep_write) } EXPORT_SYMBOL(__test_set_page_writeback); -/* - * Wait for a page to complete writeback +/** + * wait_on_folio_writeback - Wait for a folio to complete writeback. + * @folio: The folio to wait for. + * + * If the folio is currently being written back to storage, wait for the + * I/O to complete. + * + * Context: Sleeps; must be called in process context and with no spinlocks + * held. */ -void wait_on_page_writeback(struct page *page) +void wait_on_folio_writeback(struct folio *folio) { - while (PageWriteback(page)) { - trace_wait_on_page_writeback(page, page_mapping(page)); - wait_on_page_bit(page, PG_writeback); + while (FolioWriteback(folio)) { + trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); + wait_on_page_bit(&folio->page, PG_writeback); } } -EXPORT_SYMBOL_GPL(wait_on_page_writeback); +EXPORT_SYMBOL_GPL(wait_on_folio_writeback); -/* - * Wait for a page to complete writeback. Returns -EINTR if we get a +/** + * wait_on_folio_writeback_killable - Wait for a folio to complete writeback. + * @folio: The folio to wait for. + * + * If the folio is currently being written back to storage, wait for the + * I/O to complete or a fatal signal to arrive. + * + * Context: Sleeps; must be called in process context and with no spinlocks + * held. + * Return: 0 if the folio has completed writeback. -EINTR if we get a * fatal signal while waiting. */ -int wait_on_page_writeback_killable(struct page *page) +int wait_on_folio_writeback_killable(struct folio *folio) { - while (PageWriteback(page)) { - trace_wait_on_page_writeback(page, page_mapping(page)); - if (wait_on_page_bit_killable(page, PG_writeback)) + while (FolioWriteback(folio)) { + trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); + if (wait_on_page_bit_killable(&folio->page, PG_writeback)) return -EINTR; } return 0; } -EXPORT_SYMBOL_GPL(wait_on_page_writeback_killable); +EXPORT_SYMBOL_GPL(wait_on_folio_writeback_killable); /** * wait_for_stable_page() - wait for writeback to finish, if necessary. 
From patchwork Sat Mar 20 05:41:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BED07C433E0 for ; Sat, 20 Mar 2021 05:45:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9CF2E6196E for ; Sat, 20 Mar 2021 05:45:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229484AbhCTFpN (ORCPT ); Sat, 20 Mar 2021 01:45:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54522 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230243AbhCTFov (ORCPT ); Sat, 20 Mar 2021 01:44:51 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B85D6C061762; Fri, 19 Mar 2021 22:44:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=30IifAMV+9SjIPC06TX4qxHTMbVci0D9vhNY5D2kHrs=; b=czb/YXKSnUYK8E7i9dfNKYPFCQ Bo0w/B31yKWGWKEfLgYRqGJcpslePdCRzM65+LobyOnxm878IqJ5aP4kqtz2Mz6swLcXknMWn1CXK V2TTPSLV689am4Ks9zoQ3Gc9KB4t97OKCFL7I935srf45kKomdM8nCDH4PR/fHMbUlTViy3Z+CM7F TMQ27Xf4eA2dLOHFz2PmxsK4tFhngMbUpAkMiZB8NpJpALtBQgqNZ3EXyGy2DpPiLdLKDMG47SRKt ZUjVBCC/7GXz+A3V4sPZnUJa/I4A8akTZHWVFumrUCHgzZ8EcT8rpD09jLjctmXiXWonmqZXvZwNf 6yyPZ17Q==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUOu-005Shq-7j; Sat, 20 Mar 2021 05:43:53 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 23/27] mm/writeback: Add wait_for_stable_folio Date: Sat, 20 Mar 2021 05:41:00 +0000 Message-Id: <20210320054104.1300774-24-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Move wait_for_stable_page() into the folio compatibility file. wait_for_stable_folio() avoids a call to compound_head() and is 14 bytes smaller than wait_for_stable_page() was. The net text size grows by 24 bytes as a result of this patch. 
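As a hedged model of what the function does (the flag and types below are simplified stand-ins invented for illustration; the real check reads SB_I_STABLE_WRITES through the folio's mapping):

	#include <stdbool.h>

	struct sb_model { bool stable_writes; };	/* models SB_I_STABLE_WRITES */
	struct folio_model { bool writeback; struct sb_model *sb; };

	static void wait_on_folio_writeback_model(struct folio_model *folio)
	{
		while (folio->writeback)
			folio->writeback = false;	/* pretend the I/O completes */
	}

	/* Only backing devices that checksum or compute parity over in-flight
	 * data (e.g. some RAID configurations) need the wait; everyone else
	 * returns immediately. */
	static void wait_for_stable_folio_model(struct folio_model *folio)
	{
		if (folio->sb->stable_writes)
			wait_on_folio_writeback_model(folio);
	}

	int main(void)
	{
		struct sb_model raid = { .stable_writes = true };
		struct folio_model f = { .writeback = true, .sb = &raid };

		wait_for_stable_folio_model(&f);	/* waits, then returns */
		return 0;
	}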
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 1 + mm/folio-compat.c | 6 ++++++ mm/page-writeback.c | 17 ++++++++--------- 3 files changed, 15 insertions(+), 9 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a6adf69ea5c5..c92782b77d98 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -813,6 +813,7 @@ int wait_on_folio_writeback_killable(struct folio *folio); void end_page_writeback(struct page *page); void end_folio_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); +void wait_for_stable_folio(struct folio *folio); void page_endio(struct page *page, bool is_write, int err); diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 6aadecc39fba..335594fe414e 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -29,3 +29,9 @@ void wait_on_page_writeback(struct page *page) return wait_on_folio_writeback(page_folio(page)); } EXPORT_SYMBOL_GPL(wait_on_page_writeback); + +void wait_for_stable_page(struct page *page) +{ + return wait_for_stable_folio(page_folio(page)); +} +EXPORT_SYMBOL_GPL(wait_for_stable_page); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index a08e77abcf12..c222f88cf06b 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2862,17 +2862,16 @@ int wait_on_folio_writeback_killable(struct folio *folio) EXPORT_SYMBOL_GPL(wait_on_folio_writeback_killable); /** - * wait_for_stable_page() - wait for writeback to finish, if necessary. - * @page: The page to wait on. + * wait_for_stable_folio() - wait for writeback to finish, if necessary. + * @folio: The folio to wait on. * - * This function determines if the given page is related to a backing device - * that requires page contents to be held stable during writeback. If so, then + * This function determines if the given folio is related to a backing device + * that requires folio contents to be held stable during writeback. If so, then * it will wait for any pending writeback to complete. 
*/ -void wait_for_stable_page(struct page *page) +void wait_for_stable_folio(struct folio *folio) { - page = thp_head(page); - if (page->mapping->host->i_sb->s_iflags & SB_I_STABLE_WRITES) - wait_on_page_writeback(page); + if (folio->page.mapping->host->i_sb->s_iflags & SB_I_STABLE_WRITES) + wait_on_folio_writeback(folio); } -EXPORT_SYMBOL_GPL(wait_for_stable_page); +EXPORT_SYMBOL_GPL(wait_for_stable_folio); From patchwork Sat Mar 20 05:41:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152293 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CED95C433DB for ; Sat, 20 Mar 2021 05:45:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A539E6196E for ; Sat, 20 Mar 2021 05:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230330AbhCTFom (ORCPT ); Sat, 20 Mar 2021 01:44:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230215AbhCTFoL (ORCPT ); Sat, 20 Mar 2021 01:44:11 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB8B6C061762; Fri, 19 Mar 2021 22:44:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=mv3u4+W5CsTHjTnRT19rpQBHONKAxhICOPPwUGPWgz0=; b=k8HKJ+bXo8FFC6MmbZAxCRlcCw mQDiUTWM2T2LYJAvdJPVsVFpL5Tfm7XZFeEbxEk4DvF9h2OrBJ3OstxdIdwdRpR8TcQn3IqRI1GxT 49+gfKc3WSISQXzn/ITe7RgnDwVmOepqLA9PCM/V9Hm38Chi8CaWqX6BhfTKIsnlMf66RQfLMfZlf SnZfsCBZH4xYoNucoX42EMViiM00BqKnG+sJwcbTf3ksq0sArFzJUe1+Vmy27/BnRgX2yCddb9cB+ 7DnQ9uXVrslBcXjxysITqERQnwEYmNUKwdtYX+y+4axaJ6XHkJYUmkeL4cirdWXFwErjrN6JGGWWn Y9DDQdEA==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUP1-005Sin-Nd; Sat, 20 Mar 2021 05:44:01 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 24/27] mm/filemap: Convert wait_on_page_bit to wait_on_folio_bit Date: Sat, 20 Mar 2021 05:41:01 +0000 Message-Id: <20210320054104.1300774-25-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org We must always wait on the folio, otherwise we won't be woken up. This commit shrinks the kernel by 691 bytes, mostly due to moving the page waitqueue lookup into wait_on_folio_bit_common(). 
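The shape of that saving, as a hedged user-space model (the hash and the int "queue" are invented stand-ins; only the structure, hoisting the lookup out of every entry point into the common helper, mirrors this patch):

	#include <stdint.h>

	struct folio_model { unsigned long flags; };

	#define TABLE_BITS 8
	#define TABLE_SIZE (1 << TABLE_BITS)
	static int wait_table_model[TABLE_SIZE];	/* stand-in for the waitqueues */

	/* Models page_waitqueue(): hash the folio pointer to a table slot. */
	static int *folio_waitqueue_model(struct folio_model *folio)
	{
		return &wait_table_model[((uintptr_t)folio >> 4) & (TABLE_SIZE - 1)];
	}

	/* Previously each entry point did this lookup itself and passed the
	 * queue down; doing it once here deletes that code from every caller. */
	static void wait_on_folio_bit_common_model(struct folio_model *folio, int bit)
	{
		int *q = folio_waitqueue_model(folio);	/* lookup hoisted here */

		(void)q;				/* a real version would sleep on q */
		folio->flags &= ~(1UL << bit);		/* pretend the bit cleared */
	}

	static void wait_on_folio_bit_model(struct folio_model *folio, int bit)
	{
		wait_on_folio_bit_common_model(folio, bit);	/* callers shrink to this */
	}

	int main(void)
	{
		struct folio_model f = { .flags = 1UL << 0 };	/* bit 0 ~ PG_locked */

		wait_on_folio_bit_model(&f, 0);
		return 0;
	}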
Signed-off-by: Matthew Wilcox (Oracle) Reported-by: kernel test robot --- include/linux/netfs.h | 2 +- include/linux/pagemap.h | 10 ++++---- mm/filemap.c | 56 ++++++++++++++++++----------------------- mm/page-writeback.c | 4 +-- 4 files changed, 33 insertions(+), 39 deletions(-) diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 9d3fbed4e30a..f44142dca767 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -54,7 +54,7 @@ static inline void unlock_page_fscache(struct page *page) static inline void wait_on_page_fscache(struct page *page) { if (PageFsCache(page)) - wait_on_page_bit(compound_head(page), PG_fscache); + wait_on_folio_bit(page_folio(page), PG_fscache); } enum netfs_read_source { diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index c92782b77d98..7ddaabbd1ddb 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -770,11 +770,11 @@ static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm, } /* - * This is exported only for wait_on_page_locked/wait_on_page_writeback, etc., + * This is exported only for wait_on_folio_locked/wait_on_folio_writeback, etc., * and should not be used directly. */ -extern void wait_on_page_bit(struct page *page, int bit_nr); -extern int wait_on_page_bit_killable(struct page *page, int bit_nr); +extern void wait_on_folio_bit(struct folio *folio, int bit_nr); +extern int wait_on_folio_bit_killable(struct folio *folio, int bit_nr); /* * Wait for a folio to be unlocked. @@ -786,14 +786,14 @@ extern int wait_on_page_bit_killable(struct page *page, int bit_nr); static inline void wait_on_folio_locked(struct folio *folio) { if (FolioLocked(folio)) - wait_on_page_bit(&folio->page, PG_locked); + wait_on_folio_bit(folio, PG_locked); } static inline int wait_on_folio_locked_killable(struct folio *folio) { if (!FolioLocked(folio)) return 0; - return wait_on_page_bit_killable(&folio->page, PG_locked); + return wait_on_folio_bit_killable(folio, PG_locked); } static inline void wait_on_page_locked(struct page *page) diff --git a/mm/filemap.c b/mm/filemap.c index dc7deb8c36ee..f8746c149562 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1102,7 +1102,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, * * So update the flags atomically, and wake up the waiter * afterwards to avoid any races. This store-release pairs - * with the load-acquire in wait_on_page_bit_common(). + * with the load-acquire in wait_on_folio_bit_common(). */ smp_store_release(&wait->flags, flags | WQ_FLAG_WOKEN); wake_up_state(wait->private, mode); @@ -1183,7 +1183,7 @@ static void wake_up_folio(struct folio *folio, int bit) } /* - * A choice of three behaviors for wait_on_page_bit_common(): + * A choice of three behaviors for wait_on_folio_bit_common(): */ enum behavior { EXCLUSIVE, /* Hold ref to page and take the bit when woken, like @@ -1217,9 +1217,10 @@ static inline bool trylock_page_bit_common(struct page *page, int bit_nr, /* How many times do we accept lock stealing from under a waiter? 
*/ int sysctl_page_lock_unfairness = 5; -static inline int wait_on_page_bit_common(wait_queue_head_t *q, - struct page *page, int bit_nr, int state, enum behavior behavior) +static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr, + int state, enum behavior behavior) { + wait_queue_head_t *q = page_waitqueue(&folio->page); int unfairness = sysctl_page_lock_unfairness; struct wait_page_queue wait_page; wait_queue_entry_t *wait = &wait_page.wait; @@ -1228,8 +1229,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, unsigned long pflags; if (bit_nr == PG_locked && - !PageUptodate(page) && PageWorkingset(page)) { - if (!PageSwapBacked(page)) { + !FolioUptodate(folio) && FolioWorkingset(folio)) { + if (!FolioSwapBacked(folio)) { delayacct_thrashing_start(); delayacct = true; } @@ -1239,7 +1240,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, init_wait(wait); wait->func = wake_page_function; - wait_page.page = page; + wait_page.page = &folio->page; wait_page.bit_nr = bit_nr; repeat: @@ -1254,7 +1255,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * Do one last check whether we can get the * page bit synchronously. * - * Do the SetPageWaiters() marking before that + * Do the SetFolioWaiters() marking before that * to let any waker we _just_ missed know they * need to wake us up (otherwise they'll never * even go to the slow case that looks at the @@ -1265,8 +1266,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * lock to avoid races. */ spin_lock_irq(&q->lock); - SetPageWaiters(page); - if (!trylock_page_bit_common(page, bit_nr, wait)) + SetFolioWaiters(folio); + if (!trylock_page_bit_common(&folio->page, bit_nr, wait)) __add_wait_queue_entry_tail(q, wait); spin_unlock_irq(&q->lock); @@ -1276,10 +1277,10 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * see whether the page bit testing has already * been done by the wake function. * - * We can drop our reference to the page. + * We can drop our reference to the folio. */ if (behavior == DROP) - put_page(page); + put_folio(folio); /* * Note that until the "finish_wait()", or until @@ -1316,7 +1317,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * * And if that fails, we'll have to retry this all. */ - if (unlikely(test_and_set_bit(bit_nr, &page->flags))) + if (unlikely(test_and_set_bit(bit_nr, folio_flags(folio, 0)))) goto repeat; wait->flags |= WQ_FLAG_DONE; @@ -1325,7 +1326,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, /* * If a signal happened, this 'finish_wait()' may remove the last - * waiter from the wait-queues, but the PageWaiters bit will remain + * waiter from the wait-queues, but the FolioWaiters bit will remain * set. That's ok. The next wakeup will take care of it, and trying * to do it here would be difficult and prone to races. */ @@ -1356,19 +1357,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, return wait->flags & WQ_FLAG_WOKEN ? 
0 : -EINTR; } -void wait_on_page_bit(struct page *page, int bit_nr) +void wait_on_folio_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); - wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); + wait_on_folio_bit_common(folio, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); } -EXPORT_SYMBOL(wait_on_page_bit); +EXPORT_SYMBOL(wait_on_folio_bit); -int wait_on_page_bit_killable(struct page *page, int bit_nr) +int wait_on_folio_bit_killable(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED); + return wait_on_folio_bit_common(folio, bit_nr, TASK_KILLABLE, SHARED); } -EXPORT_SYMBOL(wait_on_page_bit_killable); +EXPORT_SYMBOL(wait_on_folio_bit_killable); /** * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked @@ -1385,11 +1384,8 @@ EXPORT_SYMBOL(wait_on_page_bit_killable); */ int put_and_wait_on_page_locked(struct page *page, int state) { - wait_queue_head_t *q; - - page = compound_head(page); - q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, PG_locked, state, DROP); + return wait_on_folio_bit_common(page_folio(page), PG_locked, state, + DROP); } /** @@ -1540,16 +1536,14 @@ EXPORT_SYMBOL_GPL(page_endio); */ void __lock_folio(struct folio *folio) { - wait_queue_head_t *q = page_waitqueue(&folio->page); - wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE, + wait_on_folio_bit_common(folio, PG_locked, TASK_UNINTERRUPTIBLE, EXCLUSIVE); } EXPORT_SYMBOL(__lock_folio); int __lock_folio_killable(struct folio *folio) { - wait_queue_head_t *q = page_waitqueue(&folio->page); - return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE, + return wait_on_folio_bit_common(folio, PG_locked, TASK_KILLABLE, EXCLUSIVE); } EXPORT_SYMBOL_GPL(__lock_folio_killable); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index c222f88cf06b..b29737cd8049 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2832,7 +2832,7 @@ void wait_on_folio_writeback(struct folio *folio) { while (FolioWriteback(folio)) { trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); - wait_on_page_bit(&folio->page, PG_writeback); + wait_on_folio_bit(folio, PG_writeback); } } EXPORT_SYMBOL_GPL(wait_on_folio_writeback); @@ -2853,7 +2853,7 @@ int wait_on_folio_writeback_killable(struct folio *folio) { while (FolioWriteback(folio)) { trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); - if (wait_on_page_bit_killable(&folio->page, PG_writeback)) + if (wait_on_folio_bit_killable(folio, PG_writeback)) return -EINTR; } From patchwork Sat Mar 20 05:41:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152295 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8B64CC433DB for ; Sat, 20 Mar 2021 05:45:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 
5A7DB6196E for ; Sat, 20 Mar 2021 05:45:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230192AbhCTFpO (ORCPT ); Sat, 20 Mar 2021 01:45:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54532 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230252AbhCTFoy (ORCPT ); Sat, 20 Mar 2021 01:44:54 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0964C061762; Fri, 19 Mar 2021 22:44:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=wGOrmW1WO8PLFeoyKHq4CM70Gdmu6Y9uruxKpwDZheo=; b=d6BkK24WNV2S6p2a/K54YZ9thk g/e93oCy6EW9ZhCGbMwtNDxd11lqt2vkf6CgQ2cFSb4BJHMYTPmj/ktIFXDYbGiYp8kd3CyejC31b kN1ECEdCsldtPY46OPpNtNrcAW922lK2GQix/NMR6liSctD57vBy7BvKOVNh+M8gCKdC/AAhwKMO7 K0dmnvpD/IqyWPFAShl1VSuZgYkRalzqk8tfnTnvDAntYqEL1KPU8MnVmAWzyIaDzFSfoK+sktIok EJaqMOaUikEdqHFQRB3idWmr1gSrNdrSQ+wpGJoiV45O+qF1Yezfaf44iCLwKeryD7JELKDmQFyaR aRMpOafQ==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUP7-005Sj5-3W; Sat, 20 Mar 2021 05:44:07 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 25/27] mm/filemap: Convert wake_up_page_bit to wake_up_folio_bit Date: Sat, 20 Mar 2021 05:41:02 +0000 Message-Id: <20210320054104.1300774-26-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org All callers have a folio, so use it directly. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index f8746c149562..f5bacbe702ff 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1121,14 +1121,14 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, return (flags & WQ_FLAG_EXCLUSIVE) != 0; } -static void wake_up_page_bit(struct page *page, int bit_nr) +static void wake_up_folio_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); + wait_queue_head_t *q = page_waitqueue(&folio->page); struct wait_page_key key; unsigned long flags; wait_queue_entry_t bookmark; - key.page = page; + key.page = &folio->page; key.bit_nr = bit_nr; key.page_match = 0; @@ -1163,7 +1163,7 @@ static void wake_up_page_bit(struct page *page, int bit_nr) * page waiters. 
*/ if (!waitqueue_active(q) || !key.page_match) { - ClearPageWaiters(page); + ClearFolioWaiters(folio); /* * It's possible to miss clearing Waiters here, when we woke * our page waiters, but the hashed waitqueue has waiters for @@ -1179,7 +1179,7 @@ static void wake_up_folio(struct folio *folio, int bit) { if (!FolioWaiters(folio)) return; - wake_up_page_bit(&folio->page, bit); + wake_up_folio_bit(folio, bit); } /* @@ -1444,7 +1444,7 @@ void unlock_folio(struct folio *folio) BUILD_BUG_ON(PG_waiters != 7); VM_BUG_ON_FOLIO(!FolioLocked(folio), folio); if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0))) - wake_up_page_bit(&folio->page, PG_locked); + wake_up_folio_bit(folio, PG_locked); } EXPORT_SYMBOL(unlock_folio); @@ -1461,10 +1461,10 @@ EXPORT_SYMBOL(unlock_folio); */ void unlock_page_private_2(struct page *page) { - page = compound_head(page); - VM_BUG_ON_PAGE(!PagePrivate2(page), page); - clear_bit_unlock(PG_private_2, &page->flags); - wake_up_page_bit(page, PG_private_2); + struct folio *folio = page_folio(page); + VM_BUG_ON_FOLIO(!FolioPrivate2(folio), folio); + clear_bit_unlock(PG_private_2, folio_flags(folio, 0)); + wake_up_folio_bit(folio, PG_private_2); } EXPORT_SYMBOL(unlock_page_private_2); From patchwork Sat Mar 20 05:41:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD0C4C433C1 for ; Sat, 20 Mar 2021 05:45:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ADA766199C for ; Sat, 20 Mar 2021 05:45:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230226AbhCTFpP (ORCPT ); Sat, 20 Mar 2021 01:45:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54594 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229564AbhCTFpK (ORCPT ); Sat, 20 Mar 2021 01:45:10 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B8AEC061762; Fri, 19 Mar 2021 22:45:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=hKWhjtOavlZS+InhhyxyrlufMRhhG4vX5aFsT2ke98M=; b=u1y+t3ThBHWJ/yHjZRPeYs1dfl 2tvs86z3H5PDnekU1dEaO6jUxvXiahg1cSSEBTTpudbQkJrnuSlp/YoJxivw2dWPhhQHV7/0kaBhB oEEdSkzbjh2RsT1bzcS3I9XpH7rLQqU5VIMVZQdQo8GD4od6m9Rnho3GgXUcwGOuDjot3jyQNddc0 WZYAVGcFMgvAIBTGk4QljfuDPDvuxLuK8uNr66ZoXQ7F6FC5wEyI7QebpXZ7gYoiLA+J03vOZWJKx e+7O1RmqaTJCdaAUfv3nPIlMlZXbhrWvi7hugXfNWLSzoxvAwNu3KQJE1kkYS6OPBjJhyLpuJhVUg LSAlJyNg==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUPB-005Sjb-90; Sat, 20 Mar 2021 05:44:11 +0000 From: "Matthew Wilcox (Oracle)" 
To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 26/27] mm/filemap: Convert page wait queues to be folios Date: Sat, 20 Mar 2021 05:41:03 +0000 Message-Id: <20210320054104.1300774-27-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Increases the size of cachefiles by two bytes, but the kernel core is unchanged in size. Signed-off-by: Matthew Wilcox (Oracle) Reported-by: kernel test robot --- fs/cachefiles/rdwr.c | 16 ++++++++-------- include/linux/pagemap.h | 8 ++++---- mm/filemap.c | 33 +++++++++++++++++---------------- 3 files changed, 29 insertions(+), 28 deletions(-) diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c index 8ffc40e84a59..ef50bd80ae74 100644 --- a/fs/cachefiles/rdwr.c +++ b/fs/cachefiles/rdwr.c @@ -25,20 +25,20 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode, struct cachefiles_object *object; struct fscache_retrieval *op = monitor->op; struct wait_page_key *key = _key; - struct page *page = wait->private; + struct folio *folio = wait->private; ASSERT(key); _enter("{%lu},%u,%d,{%p,%u}", monitor->netfs_page->index, mode, sync, - key->page, key->bit_nr); + key->folio, key->bit_nr); - if (key->page != page || key->bit_nr != PG_locked) + if (key->folio != folio || key->bit_nr != PG_locked) return 0; - _debug("--- monitor %p %lx ---", page, page->flags); + _debug("--- monitor %p %lx ---", folio, folio->page.flags); - if (!PageUptodate(page) && !PageError(page)) { + if (!FolioUptodate(folio) && !FolioError(folio)) { /* unlocked, not uptodate and not erronous? 
*/ _debug("page probably truncated"); } @@ -107,7 +107,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object, put_page(backpage2); INIT_LIST_HEAD(&monitor->op_link); - add_page_wait_queue(backpage, &monitor->monitor); + add_folio_wait_queue(page_folio(backpage), &monitor->monitor); if (trylock_page(backpage)) { ret = -EIO; @@ -294,7 +294,7 @@ static int cachefiles_read_backing_file_one(struct cachefiles_object *object, get_page(backpage); monitor->back_page = backpage; monitor->monitor.private = backpage; - add_page_wait_queue(backpage, &monitor->monitor); + add_folio_wait_queue(page_folio(backpage), &monitor->monitor); monitor = NULL; /* but the page may have been read before the monitor was installed, so @@ -548,7 +548,7 @@ static int cachefiles_read_backing_file(struct cachefiles_object *object, get_page(backpage); monitor->back_page = backpage; monitor->monitor.private = backpage; - add_page_wait_queue(backpage, &monitor->monitor); + add_folio_wait_queue(page_folio(backpage), &monitor->monitor); monitor = NULL; /* but the page may have been read before the monitor was diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 7ddaabbd1ddb..78d865c2f2da 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -669,13 +669,13 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma, } struct wait_page_key { - struct page *page; + struct folio *folio; int bit_nr; int page_match; }; struct wait_page_queue { - struct page *page; + struct folio *folio; int bit_nr; wait_queue_entry_t wait; }; @@ -683,7 +683,7 @@ struct wait_page_queue { static inline bool wake_page_match(struct wait_page_queue *wait_page, struct wait_page_key *key) { - if (wait_page->page != key->page) + if (wait_page->folio != key->folio) return false; key->page_match = 1; @@ -820,7 +820,7 @@ void page_endio(struct page *page, bool is_write, int err); /* * Add an arbitrary waiter to a page's wait queue */ -extern void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter); +void add_folio_wait_queue(struct folio *folio, wait_queue_entry_t *waiter); /* * Fault everything in given userspace address range in. 
diff --git a/mm/filemap.c b/mm/filemap.c index f5bacbe702ff..d9238d921009 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1019,11 +1019,11 @@ EXPORT_SYMBOL(__page_cache_alloc); */ #define PAGE_WAIT_TABLE_BITS 8 #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS) -static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned; +static wait_queue_head_t folio_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned; -static wait_queue_head_t *page_waitqueue(struct page *page) +static wait_queue_head_t *folio_waitqueue(struct folio *folio) { - return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)]; + return &folio_wait_table[hash_ptr(folio, PAGE_WAIT_TABLE_BITS)]; } void __init pagecache_init(void) @@ -1031,7 +1031,7 @@ void __init pagecache_init(void) int i; for (i = 0; i < PAGE_WAIT_TABLE_SIZE; i++) - init_waitqueue_head(&page_wait_table[i]); + init_waitqueue_head(&folio_wait_table[i]); page_writeback_init(); } @@ -1086,10 +1086,11 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, */ flags = wait->flags; if (flags & WQ_FLAG_EXCLUSIVE) { - if (test_bit(key->bit_nr, &key->page->flags)) + if (test_bit(key->bit_nr, &key->folio->page.flags)) return -1; if (flags & WQ_FLAG_CUSTOM) { - if (test_and_set_bit(key->bit_nr, &key->page->flags)) + if (test_and_set_bit(key->bit_nr, + &key->folio->page.flags)) return -1; flags |= WQ_FLAG_DONE; } @@ -1123,12 +1124,12 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, static void wake_up_folio_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(&folio->page); + wait_queue_head_t *q = folio_waitqueue(folio); struct wait_page_key key; unsigned long flags; wait_queue_entry_t bookmark; - key.page = &folio->page; + key.folio = folio; key.bit_nr = bit_nr; key.page_match = 0; @@ -1220,7 +1221,7 @@ int sysctl_page_lock_unfairness = 5; static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr, int state, enum behavior behavior) { - wait_queue_head_t *q = page_waitqueue(&folio->page); + wait_queue_head_t *q = folio_waitqueue(folio); int unfairness = sysctl_page_lock_unfairness; struct wait_page_queue wait_page; wait_queue_entry_t *wait = &wait_page.wait; @@ -1240,7 +1241,7 @@ static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr, init_wait(wait); wait->func = wake_page_function; - wait_page.page = &folio->page; + wait_page.folio = folio; wait_page.bit_nr = bit_nr; repeat: @@ -1395,17 +1396,17 @@ int put_and_wait_on_page_locked(struct page *page, int state) * * Add an arbitrary @waiter to the wait queue for the nominated @page. 
*/ -void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter) +void add_folio_wait_queue(struct folio *folio, wait_queue_entry_t *waiter) { - wait_queue_head_t *q = page_waitqueue(page); + wait_queue_head_t *q = folio_waitqueue(folio); unsigned long flags; spin_lock_irqsave(&q->lock, flags); __add_wait_queue_entry_tail(q, waiter); - SetPageWaiters(page); + SetFolioWaiters(folio); spin_unlock_irqrestore(&q->lock, flags); } -EXPORT_SYMBOL_GPL(add_page_wait_queue); +EXPORT_SYMBOL_GPL(add_folio_wait_queue); #ifndef clear_bit_unlock_is_negative_byte @@ -1550,10 +1551,10 @@ EXPORT_SYMBOL_GPL(__lock_folio_killable); static int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait) { - struct wait_queue_head *q = page_waitqueue(&folio->page); + struct wait_queue_head *q = folio_waitqueue(folio); int ret = 0; - wait->page = &folio->page; + wait->folio = folio; wait->bit_nr = PG_locked; spin_lock_irq(&q->lock); From patchwork Sat Mar 20 05:41:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12152301 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9F56C433DB for ; Sat, 20 Mar 2021 05:46:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 75E856197D for ; Sat, 20 Mar 2021 05:46:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230280AbhCTFpq (ORCPT ); Sat, 20 Mar 2021 01:45:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229672AbhCTFpl (ORCPT ); Sat, 20 Mar 2021 01:45:41 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5BF48C061762; Fri, 19 Mar 2021 22:45:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=WOmo33gIE1uQvr79REoxKRtBf91u6djs8UiAd3KaghI=; b=KQ5QFeZeSiBnOgTSlzeiu4Z/z2 IB6dMTS8aIFxS04DqMHPQTj1LDqdrStNk3XokvkM30SL0zSRlFE96HYMzfZZCTdcHmIzHNfFfktFn /VsQJIyRlkR5HY4RQLQ9z3mKmFvC4ZxyjM/5TvKT/2yw5I9rGGIE1lv5PvkEjppw6dKkCnASc5LtR W/qDni76+VLBhfL8C9Vepu6KhHV+4rWDfwKOv7tcJJ2Za/hthVCtPapdX8xgbp3QWH8aKCsfDZeHG sqvtbLVRHZCnDq2dea1M/7zUjiEjKOQdxraw3NwmCs/fa+4W/pbFzf02lQHPQdmvxJSlWS5xWMenq jZeQ/d9w==; Received: from willy by casper.infradead.org with local (Exim 4.94 #2 (Red Hat Linux)) id 1lNUPp-005SnA-El; Sat, 20 Mar 2021 05:44:58 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com, linux-afs@lists.infradead.org Subject: [PATCH v5 27/27] mm/doc: Build kerneldoc for various mm files Date: Sat, 20 Mar 2021 05:41:04 +0000 Message-Id: 
<20210320054104.1300774-28-willy@infradead.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210320054104.1300774-1-willy@infradead.org> References: <20210320054104.1300774-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org These files weren't included in the html docs. Signed-off-by: Matthew Wilcox (Oracle) --- Documentation/core-api/mm-api.rst | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst index 874ae1250258..3af5875a1d9e 100644 --- a/Documentation/core-api/mm-api.rst +++ b/Documentation/core-api/mm-api.rst @@ -93,3 +93,10 @@ More Memory Management Functions .. kernel-doc:: mm/page_alloc.c .. kernel-doc:: mm/mempolicy.c + +.. kernel-doc:: include/linux/mm_types.h + :internal: +.. kernel-doc:: include/linux/mm.h + :internal: +.. kernel-doc:: mm/util.c + :functions: folio_mapping
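For reference, the directives above pull from kernel-doc comments in the named files: :internal: selects documented symbols that are not exported, while :functions: names specific ones. The comment format they extract is the standard kernel-doc one, sketched generically here (an illustrative example, not copied from the files above):

	/**
	 * example_function - Summarise what the function does in one line.
	 * @arg: Describe the parameter.
	 *
	 * A longer description can follow, and is what ends up in the HTML
	 * output once the file is listed in mm-api.rst.
	 *
	 * Return: Describe the return value.
	 */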