From patchwork Sun Jun 5 19:38:45 2022
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org,
    linux-nilfs@vger.kernel.org
Subject: [PATCH 01/10] filemap: Add filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:45 +0100
Message-Id: <20220605193854.2371230-2-willy@infradead.org>
In-Reply-To: <20220605193854.2371230-1-willy@infradead.org>

This is the equivalent of find_get_pages() but fills a folio_batch
instead of an array of pages.
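
For reference, the caller-side pattern that the rest of this series
converts to looks roughly like the sketch below (illustrative only;
"mapping", "start" and "end" stand for whatever range the caller walks):

	struct folio_batch fbatch;
	pgoff_t index = start;
	unsigned i;

	folio_batch_init(&fbatch);
	while (filemap_get_folios(mapping, &index, end, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			/* ... use folio; the batch holds a reference ... */
		}
		/* Drop the references taken by filemap_get_folios(). */
		folio_batch_release(&fbatch);
		cond_resched();
	}

Since filemap_get_folios() advances the index past the last folio it
returned, the loop terminates once the range is exhausted.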

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/pagemap.h |  2 ++
 mm/filemap.c            | 55 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 5555689ea809..50e57b2d845f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -718,6 +718,8 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	return head + (index & (thp_nr_pages(head) - 1));
 }
 
+unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
+		pgoff_t end, struct folio_batch *fbatch);
 unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 			pgoff_t end, unsigned int nr_pages,
 			struct page **pages);
diff --git a/mm/filemap.c b/mm/filemap.c
index 1e66eea98a7e..ea4145b7a84c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2127,6 +2127,61 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 	return folio_batch_count(fbatch);
 }
 
+/**
+ * filemap_get_folios - Get a batch of folios
+ * @mapping: The address_space to search
+ * @start: The starting page index
+ * @end: The final page index (inclusive)
+ * @fbatch: The batch to fill.
+ *
+ * Search for and return a batch of folios in the mapping starting at
+ * index @start and up to index @end (inclusive).  The folios are returned
+ * in @fbatch with an elevated reference count.
+ *
+ * The first folio may start before @start; if it does, it will contain
+ * @start.  The final folio may extend beyond @end; if it does, it will
+ * contain @end.  The folios have ascending indices.  There may be gaps
+ * between the folios if there are indices which have no folio in the
+ * page cache.  If folios are added to or removed from the page cache
+ * while this is running, they may or may not be found by this call.
+ *
+ * Return: The number of folios which were found.
+ * We also update @start to index the next folio for the traversal.
+ */
+unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
+		pgoff_t end, struct folio_batch *fbatch)
+{
+	XA_STATE(xas, &mapping->i_pages, *start);
+	struct folio *folio;
+
+	rcu_read_lock();
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
+		/* Skip over shadow, swap and DAX entries */
+		if (xa_is_value(folio))
+			continue;
+		if (!folio_batch_add(fbatch, folio)) {
+			*start = folio->index + folio_nr_pages(folio);
+			goto out;
+		}
+	}
+
+	/*
+	 * We come here when there is no page beyond @end. We take care to not
+	 * overflow the index @start as it confuses some of the callers. This
+	 * breaks the iteration when there is a page at index -1 but that is
+	 * already broken anyway.
+	 */
+	if (end == (pgoff_t)-1)
+		*start = (pgoff_t)-1;
+	else
+		*start = end + 1;
+out:
+	rcu_read_unlock();
+
+	return folio_batch_count(fbatch);
+}
+EXPORT_SYMBOL(filemap_get_folios);
+
 static inline bool folio_more_pages(struct folio *folio, pgoff_t index,
 		pgoff_t max)
 {

From patchwork Sun Jun 5 19:38:46 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 02/10] buffer: Convert clean_bdev_aliases() to use filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:46 +0100
Message-Id: <20220605193854.2371230-3-willy@infradead.org>

Use a folio throughout this function.
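
One detail of the conversion below: folio_buffers() returns the first
buffer_head or NULL, so the old page_has_buffers()/page_buffers() pair
collapses into a single call that both rechecks and fetches the head once
the folio lock pins the buffers.  A sketch of the resulting idiom
(illustration only; variables as in the function below):

	folio_lock(folio);
	head = folio_buffers(folio);	/* NULL if the buffers went away */
	if (!head)
		goto unlock;
	bh = head;
	do {
		/* ... examine each buffer ... */
		bh = bh->b_this_page;
	} while (bh != head);
unlock:
	folio_unlock(folio);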

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/buffer.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 898c7f301b1b..276769d3715a 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1604,7 +1604,7 @@ void clean_bdev_aliases(struct block_device *bdev, sector_t block, sector_t len)
 {
 	struct inode *bd_inode = bdev->bd_inode;
 	struct address_space *bd_mapping = bd_inode->i_mapping;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t index = block >> (PAGE_SHIFT - bd_inode->i_blkbits);
 	pgoff_t end;
 	int i, count;
@@ -1612,24 +1612,24 @@ void clean_bdev_aliases(struct block_device *bdev, sector_t block, sector_t len)
 	struct buffer_head *head;
 
 	end = (block + len - 1) >> (PAGE_SHIFT - bd_inode->i_blkbits);
-	pagevec_init(&pvec);
-	while (pagevec_lookup_range(&pvec, bd_mapping, &index, end)) {
-		count = pagevec_count(&pvec);
+	folio_batch_init(&fbatch);
+	while (filemap_get_folios(bd_mapping, &index, end, &fbatch)) {
+		count = folio_batch_count(&fbatch);
 		for (i = 0; i < count; i++) {
-			struct page *page = pvec.pages[i];
+			struct folio *folio = fbatch.folios[i];
 
-			if (!page_has_buffers(page))
+			if (!folio_buffers(folio))
 				continue;
 			/*
-			 * We use page lock instead of bd_mapping->private_lock
+			 * We use folio lock instead of bd_mapping->private_lock
 			 * to pin buffers here since we can afford to sleep and
 			 * it scales better than a global spinlock lock.
 			 */
-			lock_page(page);
-			/* Recheck when the page is locked which pins bhs */
-			if (!page_has_buffers(page))
+			folio_lock(folio);
+			/* Recheck when the folio is locked which pins bhs */
+			head = folio_buffers(folio);
+			if (!head)
 				goto unlock_page;
-			head = page_buffers(page);
 			bh = head;
 			do {
 				if (!buffer_mapped(bh) || (bh->b_blocknr < block))
@@ -1643,9 +1643,9 @@ void clean_bdev_aliases(struct block_device *bdev, sector_t block, sector_t len)
 				bh = bh->b_this_page;
 			} while (bh != head);
 unlock_page:
-			unlock_page(page);
+			folio_unlock(folio);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 		/* End of range already reached? */
 		if (index > end || !index)

From patchwork Sun Jun 5 19:38:47 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 03/10] ext4: Convert mpage_release_unused_pages() to use filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:47 +0100
Message-Id: <20220605193854.2371230-4-willy@infradead.org>

If the folio is large, it may overlap the beginning or end of the unused
range.  If it does, we need to avoid invalidating it.
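
A folio occupies the index range [folio->index, folio->index +
folio_nr_pages(folio) - 1], so the containment test added below can be
read as (annotated sketch):

	/* Folio begins before the range: it also contains first_page - 1. */
	if (folio->index < mpd->first_page)
		continue;
	/* Folio extends past the range: it also contains end + 1. */
	if (folio->index + folio_nr_pages(folio) - 1 > end)
		continue;

Either way the folio straddles an edge of the unused range and must not
be invalidated wholesale.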

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/ext4/inode.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 3dce7d058985..32a7f5e024d6 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1554,9 +1554,9 @@ struct mpage_da_data {
 static void mpage_release_unused_pages(struct mpage_da_data *mpd,
 				       bool invalidate)
 {
-	int nr_pages, i;
+	unsigned nr, i;
 	pgoff_t index, end;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	struct inode *inode = mpd->inode;
 	struct address_space *mapping = inode->i_mapping;
@@ -1574,15 +1574,18 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
 		ext4_es_remove_extent(inode, start, last - start + 1);
 	}
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	while (index <= end) {
-		nr_pages = pagevec_lookup_range(&pvec, mapping, &index, end);
-		if (nr_pages == 0)
+		nr = filemap_get_folios(mapping, &index, end, &fbatch);
+		if (nr == 0)
 			break;
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
-			struct folio *folio = page_folio(page);
+		for (i = 0; i < nr; i++) {
+			struct folio *folio = fbatch.folios[i];
 
+			if (folio->index < mpd->first_page)
+				continue;
+			if (folio->index + folio_nr_pages(folio) - 1 > end)
+				continue;
 			BUG_ON(!folio_test_locked(folio));
 			BUG_ON(folio_test_writeback(folio));
 			if (invalidate) {
@@ -1594,7 +1597,7 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
 			}
 			folio_unlock(folio);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 	}
 }

From patchwork Sun Jun 5 19:38:48 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 04/10] ext4: Convert mpage_map_and_submit_buffers() to use filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:48 +0100
Message-Id: <20220605193854.2371230-5-willy@infradead.org>

The called functions all use pages, so just convert back to a page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/ext4/inode.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 32a7f5e024d6..1aaea53e67b5 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2314,8 +2314,8 @@ static int mpage_process_page(struct mpage_da_data *mpd, struct page *page,
  */
 static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 {
-	struct pagevec pvec;
-	int nr_pages, i;
+	struct folio_batch fbatch;
+	unsigned nr, i;
 	struct inode *inode = mpd->inode;
 	int bpp_bits = PAGE_SHIFT - inode->i_blkbits;
 	pgoff_t start, end;
@@ -2329,14 +2329,13 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 	lblk = start << bpp_bits;
 	pblock = mpd->map.m_pblk;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	while (start <= end) {
-		nr_pages = pagevec_lookup_range(&pvec, inode->i_mapping,
-						&start, end);
-		if (nr_pages == 0)
+		nr = filemap_get_folios(inode->i_mapping, &start, end, &fbatch);
+		if (nr == 0)
 			break;
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr; i++) {
+			struct page *page = &fbatch.folios[i]->page;
 
 			err = mpage_process_page(mpd, page, &lblk, &pblock,
 						 &map_bh);
@@ -2352,14 +2351,14 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 			if (err < 0)
 				goto out;
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 	}
 	/* Extent fully mapped and matches with page boundary. We are done. */
 	mpd->map.m_len = 0;
 	mpd->map.m_flags = 0;
 	return 0;
 out:
-	pagevec_release(&pvec);
+	folio_batch_release(&fbatch);
 	return err;
 }

From patchwork Sun Jun 5 19:38:49 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 05/10] f2fs: Convert f2fs_invalidate_compress_pages() to use filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:49 +0100
Message-Id: <20220605193854.2371230-6-willy@infradead.org>

Convert this function to use folios throughout.
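
Note that both pagevec_lookup_range() and filemap_get_folios() treat the
end index as inclusive, while MAX_BLKADDR(sbi) is the first index past
the range f2fs cares about, so the call site converts the exclusive
bound (sketch of the relevant lines only):

	pgoff_t end = MAX_BLKADDR(sbi);	/* exclusive upper bound */

	nr = filemap_get_folios(mapping, &index, end - 1, &fbatch);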

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Chao Yu
---
 fs/f2fs/compress.c | 35 +++++++++++++++--------------------
 1 file changed, 15 insertions(+), 20 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 24824cd96f36..009e6c519e98 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1832,45 +1832,40 @@ bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
 void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	struct address_space *mapping = sbi->compress_inode->i_mapping;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t index = 0;
 	pgoff_t end = MAX_BLKADDR(sbi);
 
 	if (!mapping->nrpages)
 		return;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
 	do {
-		unsigned int nr_pages;
-		int i;
+		unsigned int nr, i;
 
-		nr_pages = pagevec_lookup_range(&pvec, mapping,
-						&index, end - 1);
-		if (!nr_pages)
+		nr = filemap_get_folios(mapping, &index, end - 1, &fbatch);
+		if (!nr)
 			break;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
-
-			if (page->index > end)
-				break;
+		for (i = 0; i < nr; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			lock_page(page);
-			if (page->mapping != mapping) {
-				unlock_page(page);
+			folio_lock(folio);
+			if (folio->mapping != mapping) {
+				folio_unlock(folio);
 				continue;
 			}
 
-			if (ino != get_page_private_data(page)) {
-				unlock_page(page);
+			if (ino != get_page_private_data(&folio->page)) {
+				folio_unlock(folio);
 				continue;
 			}
 
-			generic_error_remove_page(mapping, page);
-			unlock_page(page);
+			generic_error_remove_page(mapping, &folio->page);
+			folio_unlock(folio);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	} while (index < end);
 }
From: "Matthew Wilcox (Oracle)" To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org, linux-nilfs@vger.kernel.org Subject: [PATCH 06/10] hugetlbfs: Convert remove_inode_hugepages() to use filemap_get_folios() Date: Sun, 5 Jun 2022 20:38:50 +0100 Message-Id: <20220605193854.2371230-7-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220605193854.2371230-1-willy@infradead.org> References: <20220605193854.2371230-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Use folios throughout this function. That removes the last caller of huge_pagevec_release(), so delete that too. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/hugetlbfs/inode.c | 44 ++++++++++++++------------------------------ 1 file changed, 14 insertions(+), 30 deletions(-) diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c index ae2524480f23..14d33f725e05 100644 --- a/fs/hugetlbfs/inode.c +++ b/fs/hugetlbfs/inode.c @@ -108,16 +108,6 @@ static inline void hugetlb_drop_vma_policy(struct vm_area_struct *vma) } #endif -static void huge_pagevec_release(struct pagevec *pvec) -{ - int i; - - for (i = 0; i < pagevec_count(pvec); ++i) - put_page(pvec->pages[i]); - - pagevec_reinit(pvec); -} - /* * Mask used when checking the page offset value passed in via system * calls. This value will be converted to a loff_t which is signed. @@ -480,25 +470,19 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart, struct address_space *mapping = &inode->i_data; const pgoff_t start = lstart >> huge_page_shift(h); const pgoff_t end = lend >> huge_page_shift(h); - struct pagevec pvec; + struct folio_batch fbatch; pgoff_t next, index; int i, freed = 0; bool truncate_op = (lend == LLONG_MAX); - pagevec_init(&pvec); + folio_batch_init(&fbatch); next = start; - while (next < end) { - /* - * When no more pages are found, we are done. - */ - if (!pagevec_lookup_range(&pvec, mapping, &next, end - 1)) - break; - - for (i = 0; i < pagevec_count(&pvec); ++i) { - struct page *page = pvec.pages[i]; + while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) { + for (i = 0; i < folio_batch_count(&fbatch); ++i) { + struct folio *folio = fbatch.folios[i]; u32 hash = 0; - index = page->index; + index = folio->index; if (!truncate_op) { /* * Only need to hold the fault mutex in the @@ -511,15 +495,15 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart, } /* - * If page is mapped, it was faulted in after being + * If folio is mapped, it was faulted in after being * unmapped in caller. Unmap (again) now after taking * the fault mutex. The mutex will prevent faults - * until we finish removing the page. + * until we finish removing the folio. * * This race can only happen in the hole punch case. * Getting here in a truncate operation is a bug. 
 			 */
-			if (unlikely(page_mapped(page))) {
+			if (unlikely(folio_mapped(folio))) {
 				BUG_ON(truncate_op);
 
 				mutex_unlock(&hugetlb_fault_mutex_table[hash]);
@@ -532,7 +516,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 				i_mmap_unlock_write(mapping);
 			}
 
-			lock_page(page);
+			folio_lock(folio);
 			/*
 			 * We must free the huge page and remove from page
 			 * cache (remove_huge_page) BEFORE removing the
@@ -542,8 +526,8 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 			 * the subpool and global reserve usage count can need
 			 * to be adjusted.
 			 */
-			VM_BUG_ON(HPageRestoreReserve(page));
-			remove_huge_page(page);
+			VM_BUG_ON(HPageRestoreReserve(&folio->page));
+			remove_huge_page(&folio->page);
 			freed++;
 			if (!truncate_op) {
 				if (unlikely(hugetlb_unreserve_pages(inode,
@@ -551,11 +535,11 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 					hugetlb_fix_reserve_counts(inode);
 			}
 
-			unlock_page(page);
+			folio_unlock(folio);
 			if (!truncate_op)
 				mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 		}
-		huge_pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}

From patchwork Sun Jun 5 19:38:51 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 07/10] nilfs2: Convert nilfs_copy_back_pages() to use filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:51 +0100
Message-Id: <20220605193854.2371230-8-willy@infradead.org>

Use folios throughout.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Ryusuke Konishi
Reviewed-by: Christoph Hellwig
---
 fs/nilfs2/page.c | 60 ++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index a8e88cc38e16..3267e96c256c 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -294,57 +294,57 @@ int nilfs_copy_dirty_pages(struct address_space *dmap,
 void nilfs_copy_back_pages(struct address_space *dmap,
 			   struct address_space *smap)
 {
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	unsigned int i, n;
-	pgoff_t index = 0;
+	pgoff_t start = 0;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 repeat:
-	n = pagevec_lookup(&pvec, smap, &index);
+	n = filemap_get_folios(smap, &start, ~0UL, &fbatch);
 	if (!n)
 		return;
 
-	for (i = 0; i < pagevec_count(&pvec); i++) {
-		struct page *page = pvec.pages[i], *dpage;
-		pgoff_t offset = page->index;
-
-		lock_page(page);
-		dpage = find_lock_page(dmap, offset);
-		if (dpage) {
-			/* overwrite existing page in the destination cache */
-			WARN_ON(PageDirty(dpage));
-			nilfs_copy_page(dpage, page, 0);
-			unlock_page(dpage);
-			put_page(dpage);
-			/* Do we not need to remove page from smap here? */
+	for (i = 0; i < folio_batch_count(&fbatch); i++) {
+		struct folio *folio = fbatch.folios[i], *dfolio;
+		pgoff_t index = folio->index;
+
+		folio_lock(folio);
+		dfolio = filemap_lock_folio(dmap, index);
+		if (dfolio) {
+			/* overwrite existing folio in the destination cache */
+			WARN_ON(folio_test_dirty(dfolio));
+			nilfs_copy_page(&dfolio->page, &folio->page, 0);
+			folio_unlock(dfolio);
+			folio_put(dfolio);
+			/* Do we not need to remove folio from smap here? */
 		} else {
-			struct page *p;
+			struct folio *f;
 
-			/* move the page to the destination cache */
+			/* move the folio to the destination cache */
 			xa_lock_irq(&smap->i_pages);
-			p = __xa_erase(&smap->i_pages, offset);
-			WARN_ON(page != p);
+			f = __xa_erase(&smap->i_pages, index);
+			WARN_ON(folio != f);
 			smap->nrpages--;
 			xa_unlock_irq(&smap->i_pages);
 
 			xa_lock_irq(&dmap->i_pages);
-			p = __xa_store(&dmap->i_pages, offset, page, GFP_NOFS);
-			if (unlikely(p)) {
+			f = __xa_store(&dmap->i_pages, index, folio, GFP_NOFS);
+			if (unlikely(f)) {
 				/* Probably -ENOMEM */
-				page->mapping = NULL;
-				put_page(page);
+				folio->mapping = NULL;
+				folio_put(folio);
 			} else {
-				page->mapping = dmap;
+				folio->mapping = dmap;
 				dmap->nrpages++;
-				if (PageDirty(page))
-					__xa_set_mark(&dmap->i_pages, offset,
+				if (folio_test_dirty(folio))
+					__xa_set_mark(&dmap->i_pages, index,
 						      PAGECACHE_TAG_DIRTY);
 			}
 			xa_unlock_irq(&dmap->i_pages);
 		}
-		unlock_page(page);
+		folio_unlock(folio);
 	}
-	pagevec_release(&pvec);
+	folio_batch_release(&fbatch);
 	cond_resched();
 
 	goto repeat;

From patchwork Sun Jun 5 19:38:52 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 08/10] vmscan: Add check_move_unevictable_folios()
Date: Sun, 5 Jun 2022 20:38:52 +0100
Message-Id: <20220605193854.2371230-9-willy@infradead.org>

Change the guts of check_move_unevictable_pages() over to use folios
and turn check_move_unevictable_pages() into a wrapper around the new
function.

Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
---
 include/linux/swap.h |  3 ++-
 mm/vmscan.c          | 56 +++++++++++++++++++++++++++-----------------
 2 files changed, 36 insertions(+), 23 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0c0fed1b348f..8672a7123ccd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -438,7 +438,8 @@ static inline bool node_reclaim_enabled(void)
 	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
 }
 
-extern void check_move_unevictable_pages(struct pagevec *pvec);
+void check_move_unevictable_folios(struct folio_batch *fbatch);
+void check_move_unevictable_pages(struct pagevec *pvec);
 
 extern void kswapd_run(int nid);
 extern void kswapd_stop(int nid);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f7d9a683e3a7..5222c5ad600a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4790,45 +4790,57 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
 }
 #endif
 
+void check_move_unevictable_pages(struct pagevec *pvec)
+{
+	struct folio_batch fbatch;
+	unsigned i;
+
+	folio_batch_init(&fbatch);
+	for (i = 0; i < pvec->nr; i++) {
+		struct page *page = pvec->pages[i];
+
+		if (PageTransTail(page))
+			continue;
+		folio_batch_add(&fbatch, page_folio(page));
+	}
+	check_move_unevictable_folios(&fbatch);
+}
+EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
+
 /**
- * check_move_unevictable_pages - check pages for evictability and move to
- * appropriate zone lru list
- * @pvec: pagevec with lru pages to check
+ * check_move_unevictable_folios - Move evictable folios to appropriate zone
+ * lru list
+ * @fbatch: Batch of lru folios to check.
  *
- * Checks pages for evictability, if an evictable page is in the unevictable
+ * Checks folios for evictability, if an evictable folio is in the unevictable
  * lru list, moves it to the appropriate evictable lru list. This function
- * should be only used for lru pages.
+ * should be only used for lru folios.
  */
-void check_move_unevictable_pages(struct pagevec *pvec)
+void check_move_unevictable_folios(struct folio_batch *fbatch)
 {
 	struct lruvec *lruvec = NULL;
 	int pgscanned = 0;
 	int pgrescued = 0;
 	int i;
 
-	for (i = 0; i < pvec->nr; i++) {
-		struct page *page = pvec->pages[i];
-		struct folio *folio = page_folio(page);
-		int nr_pages;
-
-		if (PageTransTail(page))
-			continue;
+	for (i = 0; i < fbatch->nr; i++) {
+		struct folio *folio = fbatch->folios[i];
+		int nr_pages = folio_nr_pages(folio);
 
-		nr_pages = thp_nr_pages(page);
 		pgscanned += nr_pages;
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
+		/* block memcg migration while the folio moves between lrus */
+		if (!folio_test_clear_lru(folio))
 			continue;
 
 		lruvec = folio_lruvec_relock_irq(folio, lruvec);
-		if (page_evictable(page) && PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_evictable(folio) && folio_test_unevictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
 			pgrescued += nr_pages;
 		}
-		SetPageLRU(page);
+		folio_set_lru(folio);
 	}
 
 	if (lruvec) {
@@ -4839,4 +4851,4 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
 	}
 }
-EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
+EXPORT_SYMBOL_GPL(check_move_unevictable_folios);
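
The wrapper above keeps the pagevec entry point alive: a pagevec may
carry individual subpages of a compound page, so tail pages are filtered
out before batching, whereas a folio_batch only ever holds whole folios.
New callers can use the folio interface directly; a sketch (illustrative
only, mirroring the shmem conversion in the next patch):

	struct folio_batch fbatch;
	pgoff_t index = 0;

	folio_batch_init(&fbatch);
	while (filemap_get_folios(mapping, &index, ~0UL, &fbatch)) {
		check_move_unevictable_folios(&fbatch);
		folio_batch_release(&fbatch);
		cond_resched();
	}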

From patchwork Sun Jun 5 19:38:53 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 09/10] shmem: Convert shmem_unlock_mapping() to use filemap_get_folios()
Date: Sun, 5 Jun 2022 20:38:53 +0100
Message-Id: <20220605193854.2371230-10-willy@infradead.org>

This is a straightforward conversion.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/shmem.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 60fdfc0208fd..313ae7df59d8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -867,18 +867,17 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
  */
 void shmem_unlock_mapping(struct address_space *mapping)
 {
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t index = 0;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	/*
 	 * Minor point, but we might as well stop if someone else SHM_LOCKs it.
 	 */
-	while (!mapping_unevictable(mapping)) {
-		if (!pagevec_lookup(&pvec, mapping, &index))
-			break;
-		check_move_unevictable_pages(&pvec);
-		pagevec_release(&pvec);
+	while (!mapping_unevictable(mapping) &&
+	       filemap_get_folios(mapping, &index, ~0UL, &fbatch)) {
+		check_move_unevictable_folios(&fbatch);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 }

From patchwork Sun Jun 5 19:38:54 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH 10/10] filemap: Remove find_get_pages_range() and associated functions
Date: Sun, 5 Jun 2022 20:38:54 +0100
Message-Id: <20220605193854.2371230-11-willy@infradead.org>

All callers of find_get_pages_range(), pagevec_lookup_range() and
pagevec_lookup() have now been removed.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 include/linux/pagemap.h |  3 --
 include/linux/pagevec.h | 10 ------
 mm/filemap.c            | 67 -----------------------------------------
 mm/swap.c               | 29 ------------------
 4 files changed, 109 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 50e57b2d845f..1caccb9f99aa 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -720,9 +720,6 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 
 unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch);
-unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
-			pgoff_t end, unsigned int nr_pages,
-			struct page **pages);
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
 			       unsigned int nr_pages, struct page **pages);
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 67b1246f136b..6649154a2115 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -27,16 +27,6 @@ struct pagevec {
 
 void __pagevec_release(struct pagevec *pvec);
 void __pagevec_lru_add(struct pagevec *pvec);
-unsigned pagevec_lookup_range(struct pagevec *pvec,
-			      struct address_space *mapping,
-			      pgoff_t *start, pgoff_t end);
-static inline unsigned pagevec_lookup(struct pagevec *pvec,
-				      struct address_space *mapping,
-				      pgoff_t *start)
-{
-	return pagevec_lookup_range(pvec, mapping, start, (pgoff_t)-1);
-}
-
 unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
 		struct address_space *mapping, pgoff_t *index, pgoff_t end,
 		xa_mark_t tag);
diff --git a/mm/filemap.c b/mm/filemap.c
index ea4145b7a84c..340ccb37f6b6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2192,73 +2192,6 @@ bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max)
 	return index < folio->index + folio_nr_pages(folio) - 1;
 }
 
-/**
- * find_get_pages_range - gang pagecache lookup
- * @mapping: The address_space to search
- * @start: The starting page index
- * @end: The final page index (inclusive)
- * @nr_pages: The maximum number of pages
- * @pages: Where the resulting pages are placed
- *
- * find_get_pages_range() will search for and return a group of up to @nr_pages
- * pages in the mapping starting at index @start and up to index @end
- * (inclusive).  The pages are placed at @pages.  find_get_pages_range() takes
- * a reference against the returned pages.
- *
- * The search returns a group of mapping-contiguous pages with ascending
- * indexes.  There may be holes in the indices due to not-present pages.
- * We also update @start to index the next page for the traversal.
- *
- * Return: the number of pages which were found. If this number is
- * smaller than @nr_pages, the end of specified range has been
- * reached.
- */
-unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
-			pgoff_t end, unsigned int nr_pages,
-			struct page **pages)
-{
-	XA_STATE(xas, &mapping->i_pages, *start);
-	struct folio *folio;
-	unsigned ret = 0;
-
-	if (unlikely(!nr_pages))
-		return 0;
-
-	rcu_read_lock();
-	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
-		/* Skip over shadow, swap and DAX entries */
-		if (xa_is_value(folio))
-			continue;
-
-again:
-		pages[ret] = folio_file_page(folio, xas.xa_index);
-		if (++ret == nr_pages) {
-			*start = xas.xa_index + 1;
-			goto out;
-		}
-		if (folio_more_pages(folio, xas.xa_index, end)) {
-			xas.xa_index++;
-			folio_ref_inc(folio);
-			goto again;
-		}
-	}
-
-	/*
-	 * We come here when there is no page beyond @end. We take care to not
-	 * overflow the index @start as it confuses some of the callers. This
-	 * breaks the iteration when there is a page at index -1 but that is
-	 * already broken anyway.
-	 */
-	if (end == (pgoff_t)-1)
-		*start = (pgoff_t)-1;
-	else
-		*start = end + 1;
-out:
-	rcu_read_unlock();
-
-	return ret;
-}
-
 /**
  * find_get_pages_contig - gang contiguous pagecache lookup
  * @mapping: The address_space to search
diff --git a/mm/swap.c b/mm/swap.c
index f3922a96b2e9..f65e284247b2 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1086,35 +1086,6 @@ void folio_batch_remove_exceptionals(struct folio_batch *fbatch)
 	fbatch->nr = j;
 }
 
-/**
- * pagevec_lookup_range - gang pagecache lookup
- * @pvec: Where the resulting pages are placed
- * @mapping: The address_space to search
- * @start: The starting page index
- * @end: The final page index
- *
- * pagevec_lookup_range() will search for & return a group of up to PAGEVEC_SIZE
- * pages in the mapping starting from index @start and upto index @end
- * (inclusive).  The pages are placed in @pvec.  pagevec_lookup() takes a
- * reference against the pages in @pvec.
- *
- * The search returns a group of mapping-contiguous pages with ascending
- * indexes.  There may be holes in the indices due to not-present pages. We
- * also update @start to index the next page for the traversal.
- *
- * pagevec_lookup_range() returns the number of pages which were found. If this
- * number is smaller than PAGEVEC_SIZE, the end of specified range has been
- * reached.
- */
-unsigned pagevec_lookup_range(struct pagevec *pvec,
-		struct address_space *mapping, pgoff_t *start, pgoff_t end)
-{
-	pvec->nr = find_get_pages_range(mapping, start, end, PAGEVEC_SIZE,
-			pvec->pages);
-	return pagevec_count(pvec);
-}
-EXPORT_SYMBOL(pagevec_lookup_range);
-
 unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
 		struct address_space *mapping, pgoff_t *index, pgoff_t end,
 		xa_mark_t tag)
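
Taken together, the series replaces the pagevec range-lookup API with the
folio_batch one.  A before/after sketch of the caller pattern
(illustrative only; "mapping", "index" and "end" are placeholders):

	/* Before (removed by this series): */
	struct pagevec pvec;

	pagevec_init(&pvec);
	while (pagevec_lookup_range(&pvec, mapping, &index, end)) {
		/* ... pvec.pages[i] ... */
		pagevec_release(&pvec);
	}

	/* After: */
	struct folio_batch fbatch;

	folio_batch_init(&fbatch);
	while (filemap_get_folios(mapping, &index, end, &fbatch)) {
		/* ... fbatch.folios[i], each possibly a multi-page folio ... */
		folio_batch_release(&fbatch);
	}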