From patchwork Fri Mar 20 14:22:25 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11449297
From: Matthew Wilcox
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-btrfs@vger.kernel.org,
    linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
    ocfs2-devel@oss.oracle.com, linux-xfs@vger.kernel.org,
    Gao Xiang, Dave Chinner, William Kucharski
Subject: [PATCH v9 19/25] erofs: Convert compressed files from readpages to readahead
Date: Fri, 20 Mar 2020 07:22:25 -0700
Message-Id: <20200320142231.2402-20-willy@infradead.org>
In-Reply-To: <20200320142231.2402-1-willy@infradead.org>
References: <20200320142231.2402-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Use the new readahead operation in erofs.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Gao Xiang
Reviewed-by: Dave Chinner
Reviewed-by: William Kucharski
Reviewed-by: Chao Yu
---
 fs/erofs/zdata.c | 29 +++++++++--------------------
 1 file changed, 9 insertions(+), 20 deletions(-)

diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 17f45fcb8c5c..e64d8ab0900d 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -1303,28 +1303,23 @@ static bool should_decompress_synchronously(struct erofs_sb_info *sbi,
 	return nr <= sbi->max_sync_decompress_pages;
 }
 
-static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
-			     struct list_head *pages, unsigned int nr_pages)
+static void z_erofs_readahead(struct readahead_control *rac)
 {
-	struct inode *const inode = mapping->host;
+	struct inode *const inode = rac->mapping->host;
 	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
-	bool sync = should_decompress_synchronously(sbi, nr_pages);
+	bool sync = should_decompress_synchronously(sbi, readahead_count(rac));
 	struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode);
-	gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
-	struct page *head = NULL;
+	struct page *page, *head = NULL;
 	LIST_HEAD(pagepool);
 
-	trace_erofs_readpages(mapping->host, lru_to_page(pages)->index,
-			      nr_pages, false);
+	trace_erofs_readpages(inode, readahead_index(rac),
+			readahead_count(rac), false);
 
-	f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT;
-
-	for (; nr_pages; --nr_pages) {
-		struct page *page = lru_to_page(pages);
+	f.headoffset = readahead_pos(rac);
 
+	while ((page = readahead_page(rac))) {
 		prefetchw(&page->flags);
-		list_del(&page->lru);
 
 		/*
 		 * A pure asynchronous readahead is indicated if
@@ -1333,11 +1328,6 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
 		 */
 		sync &= !(PageReadahead(page) && !head);
 
-		if (add_to_page_cache_lru(page, mapping, page->index, gfp)) {
-			list_add(&page->lru, &pagepool);
-			continue;
-		}
-
 		set_page_private(page, (unsigned long)head);
 		head = page;
 	}
@@ -1366,11 +1356,10 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
 
 	/* clean up the remaining free pages */
 	put_pages_list(&pagepool);
-	return 0;
 }
 
 const struct address_space_operations z_erofs_aops = {
 	.readpage = z_erofs_readpage,
-	.readpages = z_erofs_readpages,
+	.readahead = z_erofs_readahead,
 };
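
For readers unfamiliar with the new interface, below is a minimal sketch (not part of this patch) of the general shape a ->readahead() implementation takes when consuming struct readahead_control with the helpers this series introduces (readahead_count(), readahead_pos(), readahead_page()). The names example_readahead() and example_read_one_page() are hypothetical.

/*
 * Minimal sketch, assuming the readahead_control helpers added by this
 * series.  example_readahead() and example_read_one_page() are
 * hypothetical names, not taken from the erofs code above.
 */
static void example_readahead(struct readahead_control *rac)
{
	struct inode *inode = rac->mapping->host;
	struct page *page;

	/*
	 * The pages have already been allocated, inserted into the page
	 * cache and locked by the core readahead code, which is why the
	 * add_to_page_cache_lru()/list_del() dance of the old
	 * ->readpages() path disappears in the conversion above.
	 */
	while ((page = readahead_page(rac))) {
		/* hypothetical helper: starts I/O, unlocks the page on completion */
		example_read_one_page(inode, page);
		/* drop the reference readahead_page() handed us */
		put_page(page);
	}
}

Compared with ->readpages(), the filesystem no longer needs an error path for failed page-cache insertion, and readahead_pos()/readahead_count() replace the manual arithmetic on the first page's index, as the erofs conversion above shows.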