From patchwork Sat Feb 1 15:12:29 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11361079
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 01/12] mm: Fix the return type of __do_page_cache_readahead
Date: Sat, 1 Feb 2020 07:12:29 -0800
Message-Id: <20200201151240.24082-2-willy@infradead.org>
In-Reply-To: <20200201151240.24082-1-willy@infradead.org>
References: <20200201151240.24082-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

ra_submit(), which is a wrapper around __do_page_cache_readahead(), already
returns an unsigned long, and the 'nr_to_read' parameter is an unsigned long,
so fix __do_page_cache_readahead() to return an unsigned long, even though
I'm pretty sure we're not going to readahead more than 2^32 pages ever.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/internal.h  | 2 +-
 mm/readahead.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..41b93c4b3ab7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-extern unsigned int __do_page_cache_readahead(struct address_space *mapping,
+extern unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size);
diff --git a/mm/readahead.c b/mm/readahead.c
index 2fe72cd29b47..6bf73ef33b7e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -152,7 +152,7 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 *
 * Returns the number of pages requested, or the maximum amount of I/O allowed.
 */
-unsigned int __do_page_cache_readahead(struct address_space *mapping,
+unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
@@ -161,7 +161,7 @@ unsigned int __do_page_cache_readahead(struct address_space *mapping,
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
-	unsigned int nr_pages = 0;
+	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);

From patchwork Sat Feb 1 15:12:30 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11361085
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/12] readahead: Ignore return value of ->readpages
Date: Sat, 1 Feb 2020 07:12:30 -0800
Message-Id: <20200201151240.24082-3-willy@infradead.org>
In-Reply-To: <20200201151240.24082-1-willy@infradead.org>
References: <20200201151240.24082-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We used to assign the return value to a variable, which we then ignored.
Remove the pretence of caring.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/readahead.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 6bf73ef33b7e..fc77d13af556 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -113,17 +113,16 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 
 EXPORT_SYMBOL(read_cache_pages);
 
-static int read_pages(struct address_space *mapping, struct file *filp,
+static void read_pages(struct address_space *mapping, struct file *filp,
 		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
 {
 	struct blk_plug plug;
 	unsigned page_idx;
-	int ret;
 
 	blk_start_plug(&plug);
 
 	if (mapping->a_ops->readpages) {
-		ret = mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
+		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
 		goto out;
@@ -136,12 +135,9 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 		mapping->a_ops->readpage(filp, page);
 		put_page(page);
 	}
-	ret = 0;
 out:
 	blk_finish_plug(&plug);
-
-	return ret;
 }
 
 /*

From patchwork Sat Feb 1 15:12:31 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11361073
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-btrfs@vger.kernel.org, linux-erofs@lists.ozlabs.org,
linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, cluster-devel@redhat.com, ocfs2-devel@oss.oracle.com Subject: [PATCH v4 03/12] readahead: Put pages in cache earlier Date: Sat, 1 Feb 2020 07:12:31 -0800 Message-Id: <20200201151240.24082-4-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" At allocation time, put the pages in the cache unless we're using ->readpages. Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-btrfs@vger.kernel.org Cc: linux-erofs@lists.ozlabs.org Cc: linux-ext4@vger.kernel.org Cc: linux-f2fs-devel@lists.sourceforge.net Cc: linux-xfs@vger.kernel.org Cc: cluster-devel@redhat.com Cc: ocfs2-devel@oss.oracle.com --- mm/readahead.c | 64 ++++++++++++++++++++++++++++++++++---------------- 1 file changed, 44 insertions(+), 20 deletions(-) diff --git a/mm/readahead.c b/mm/readahead.c index fc77d13af556..7daef0038b14 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -114,10 +114,10 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages, EXPORT_SYMBOL(read_cache_pages); static void read_pages(struct address_space *mapping, struct file *filp, - struct list_head *pages, unsigned int nr_pages, gfp_t gfp) + struct list_head *pages, pgoff_t start, + unsigned int nr_pages) { struct blk_plug plug; - unsigned page_idx; blk_start_plug(&plug); @@ -125,18 +125,17 @@ static void read_pages(struct address_space *mapping, struct file *filp, mapping->a_ops->readpages(filp, mapping, pages, nr_pages); /* Clean up the remaining pages */ put_pages_list(pages); - goto out; - } + } else { + struct page *page; + unsigned long index; - for (page_idx = 0; page_idx < nr_pages; page_idx++) { - struct page *page = lru_to_page(pages); - list_del(&page->lru); - if (!add_to_page_cache_lru(page, mapping, page->index, gfp)) + xa_for_each_range(&mapping->i_pages, index, page, start, + start + nr_pages - 1) { mapping->a_ops->readpage(filp, page); - put_page(page); + put_page(page); + } } -out: blk_finish_plug(&plug); } @@ -153,13 +152,14 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping, unsigned long lookahead_size) { struct inode *inode = mapping->host; - struct page *page; unsigned long end_index; /* The last page we want to read */ LIST_HEAD(page_pool); int page_idx; + pgoff_t page_offset; unsigned long nr_pages = 0; loff_t isize = i_size_read(inode); gfp_t gfp_mask = readahead_gfp_mask(mapping); + bool use_list = mapping->a_ops->readpages; if (isize == 0) goto out; @@ -170,21 +170,32 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping, * Preallocate as many pages as we will need. */ for (page_idx = 0; page_idx < nr_to_read; page_idx++) { - pgoff_t page_offset = offset + page_idx; + struct page *page; + page_offset = offset + page_idx; if (page_offset > end_index) break; page = xa_load(&mapping->i_pages, page_offset); if (page && !xa_is_value(page)) { /* - * Page already present? Kick off the current batch of - * contiguous pages before continuing with the next - * batch. + * Page already present? Kick off the current batch + * of contiguous pages before continuing with the + * next batch. 
*/ if (nr_pages) - read_pages(mapping, filp, &page_pool, nr_pages, - gfp_mask); + read_pages(mapping, filp, &page_pool, + page_offset - nr_pages, + nr_pages); + /* + * It's possible this page is the page we should + * be marking with PageReadahead. However, we + * don't have a stable ref to this page so it might + * be reallocated to another user before we can set + * the bit. There's probably another page in the + * cache marked with PageReadahead from the other + * process which accessed this file. + */ nr_pages = 0; continue; } @@ -192,8 +203,20 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping, page = __page_cache_alloc(gfp_mask); if (!page) break; - page->index = page_offset; - list_add(&page->lru, &page_pool); + if (use_list) { + page->index = page_offset; + list_add(&page->lru, &page_pool); + } else if (add_to_page_cache_lru(page, mapping, page_offset, + gfp_mask) < 0) { + if (nr_pages) + read_pages(mapping, filp, &page_pool, + page_offset - nr_pages, + nr_pages); + put_page(page); + nr_pages = 0; + continue; + } + if (page_idx == nr_to_read - lookahead_size) SetPageReadahead(page); nr_pages++; @@ -205,7 +228,8 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping, * will then handle the error. */ if (nr_pages) - read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask); + read_pages(mapping, filp, &page_pool, page_offset - nr_pages, + nr_pages); BUG_ON(!list_empty(&page_pool)); out: return nr_pages; From patchwork Sat Feb 1 15:12:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361103 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E392413A4 for ; Sat, 1 Feb 2020 15:13:14 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A2D2F20842 for ; Sat, 1 Feb 2020 15:13:14 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="bY3NBrXn" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A2D2F20842 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5BA6E6B0606; Sat, 1 Feb 2020 10:12:50 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 545326B0607; Sat, 1 Feb 2020 10:12:50 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 282C26B0609; Sat, 1 Feb 2020 10:12:50 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0184.hostedemail.com [216.40.44.184]) by kanga.kvack.org (Postfix) with ESMTP id 0641A6B0607 for ; Sat, 1 Feb 2020 10:12:50 -0500 (EST) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id B07BD180AD80F for ; Sat, 1 Feb 2020 15:12:49 +0000 (UTC) X-FDA: 76441900458.30.night20_8d0da90c74342 X-Spam-Summary: 
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-btrfs@vger.kernel.org, linux-erofs@lists.ozlabs.org,
 linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
 linux-xfs@vger.kernel.org, cluster-devel@redhat.com, ocfs2-devel@oss.oracle.com
Subject: [PATCH v4 04/12] mm: Add readahead address space operation
Date: Sat, 1 Feb 2020 07:12:32 -0800
Message-Id: <20200201151240.24082-5-willy@infradead.org>
In-Reply-To: <20200201151240.24082-1-willy@infradead.org>
References: <20200201151240.24082-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

This replaces ->readpages with a saner interface:
 - Return the number of pages not read instead of an ignored error code.
 - Pages are already in the page cache when ->readahead is called.
 - Implementation looks up the pages in the page cache instead of having
   them passed in a linked list.
(A minimal sketch of an implementation follows below.)
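As an illustration only (not part of this series), here is a minimal sketch of
what a filesystem implementing the new operation might look like. "foofs" and
foofs_start_read() are placeholder names; the sketch just follows the rules
stated above: every page is already locked and in the page cache, the
implementation drops its reference after starting I/O, and it returns the
number of pages it did not attempt so the caller can unlock them.

/* Hypothetical example, not taken from this patch set. */
static unsigned foofs_readahead(struct file *file,
		struct address_space *mapping, pgoff_t start,
		unsigned nr_pages)
{
	unsigned i;

	for (i = 0; i < nr_pages; i++) {
		/* The page is locked and already in the page cache. */
		struct page *page = readahead_page(mapping, start + i);

		if (foofs_start_read(file, page) < 0)
			return nr_pages - i;	/* caller unlocks the rest */
		/* I/O completion will unlock the page; drop our reference. */
		put_page(page);
	}
	return 0;
}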
Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-btrfs@vger.kernel.org Cc: linux-erofs@lists.ozlabs.org Cc: linux-ext4@vger.kernel.org Cc: linux-f2fs-devel@lists.sourceforge.net Cc: linux-xfs@vger.kernel.org Cc: cluster-devel@redhat.com Cc: ocfs2-devel@oss.oracle.com --- Documentation/filesystems/locking.rst | 7 ++++++- Documentation/filesystems/vfs.rst | 14 ++++++++++++++ include/linux/fs.h | 2 ++ include/linux/pagemap.h | 12 ++++++++++++ mm/readahead.c | 13 ++++++++++++- 5 files changed, 46 insertions(+), 2 deletions(-) diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst index 5057e4d9dcd1..3d10729caf44 100644 --- a/Documentation/filesystems/locking.rst +++ b/Documentation/filesystems/locking.rst @@ -239,6 +239,8 @@ prototypes:: int (*readpage)(struct file *, struct page *); int (*writepages)(struct address_space *, struct writeback_control *); int (*set_page_dirty)(struct page *page); + unsigned (*readahead)(struct file *, struct address_space *, + pgoff_t start, unsigned nr_pages); int (*readpages)(struct file *filp, struct address_space *mapping, struct list_head *pages, unsigned nr_pages); int (*write_begin)(struct file *, struct address_space *mapping, @@ -271,7 +273,8 @@ writepage: yes, unlocks (see below) readpage: yes, unlocks writepages: set_page_dirty no -readpages: +readahead: yes, unlocks +readpages: no write_begin: locks the page exclusive write_end: yes, unlocks exclusive bmap: @@ -295,6 +298,8 @@ the request handler (/dev/loop). ->readpage() unlocks the page, either synchronously or via I/O completion. +->readahead() unlocks the pages like ->readpage(). + ->readpages() populates the pagecache with the passed pages and starts I/O against them. They come unlocked upon I/O completion. diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst index 7d4d09dd5e6d..c2bc345f2169 100644 --- a/Documentation/filesystems/vfs.rst +++ b/Documentation/filesystems/vfs.rst @@ -706,6 +706,8 @@ cache in your filesystem. The following members are defined: int (*readpage)(struct file *, struct page *); int (*writepages)(struct address_space *, struct writeback_control *); int (*set_page_dirty)(struct page *page); + unsigned (*readahead)(struct file *filp, struct address_space *mapping, + pgoff_t start, unsigned nr_pages); int (*readpages)(struct file *filp, struct address_space *mapping, struct list_head *pages, unsigned nr_pages); int (*write_begin)(struct file *, struct address_space *mapping, @@ -781,6 +783,18 @@ cache in your filesystem. The following members are defined: If defined, it should set the PageDirty flag, and the PAGECACHE_TAG_DIRTY tag in the radix tree. +``readahead`` + Called by the VM to read pages associated with the address_space + object. The pages are consecutive in the page cache and + are locked. The implementation should decrement the page + refcount after attempting I/O on each page. Usually the + page will be unlocked by the I/O completion handler. If the + function does not attempt I/O on some pages, return the number + of pages which were not read so the caller can unlock the pages + for you. Set PageUptodate if the I/O completes successfully. + Setting PageError on any page will be ignored; simply unlock + the page if an I/O error occurs. + ``readpages`` called by the VM to read pages associated with the address_space object. This is essentially just a vector version of readpage. 
diff --git a/include/linux/fs.h b/include/linux/fs.h index 41584f50af0d..3bfc142e7d10 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -375,6 +375,8 @@ struct address_space_operations { */ int (*readpages)(struct file *filp, struct address_space *mapping, struct list_head *pages, unsigned nr_pages); + unsigned (*readahead)(struct file *, struct address_space *, + pgoff_t start, unsigned nr_pages); int (*write_begin)(struct file *, struct address_space *mapping, loff_t pos, unsigned len, unsigned flags, diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index ccb14b6a16b5..a2cf007826f2 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -630,6 +630,18 @@ static inline int add_to_page_cache(struct page *page, return error; } +/* + * Only call this from a ->readahead implementation. + */ +static inline +struct page *readahead_page(struct address_space *mapping, pgoff_t index) +{ + struct page *page = xa_load(&mapping->i_pages, index); + VM_BUG_ON_PAGE(!PageLocked(page), page); + + return page; +} + static inline unsigned long dir_pages(struct inode *inode) { return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >> diff --git a/mm/readahead.c b/mm/readahead.c index 7daef0038b14..b2ed0baf3a5d 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -121,7 +121,18 @@ static void read_pages(struct address_space *mapping, struct file *filp, blk_start_plug(&plug); - if (mapping->a_ops->readpages) { + if (mapping->a_ops->readahead) { + unsigned left = mapping->a_ops->readahead(filp, mapping, + start, nr_pages); + + while (left) { + struct page *page = readahead_page(mapping, + start + nr_pages - left); + unlock_page(page); + put_page(page); + left--; + } + } else if (mapping->a_ops->readpages) { mapping->a_ops->readpages(filp, mapping, pages, nr_pages); /* Clean up the remaining pages */ put_pages_list(pages); From patchwork Sat Feb 1 15:12:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361093 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7DEAE139A for ; Sat, 1 Feb 2020 15:13:06 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 15D862073B for ; Sat, 1 Feb 2020 15:13:06 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="kHcweFg3" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 15D862073B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BAF296B0603; Sat, 1 Feb 2020 10:12:47 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B38A16B0604; Sat, 1 Feb 2020 10:12:47 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9FE0D6B0605; Sat, 1 Feb 2020 10:12:47 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0132.hostedemail.com [216.40.44.132]) by kanga.kvack.org (Postfix) with ESMTP id 79CDC6B0604 for ; Sat, 1 Feb 2020 10:12:47 -0500 
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cluster-devel@redhat.com, ocfs2-devel@oss.oracle.com
Subject: [PATCH v4 05/12] fs: Convert mpage_readpages to mpage_readahead
Date: Sat, 1 Feb 2020 07:12:33 -0800
Message-Id: <20200201151240.24082-6-willy@infradead.org>
In-Reply-To: <20200201151240.24082-1-willy@infradead.org>
References: <20200201151240.24082-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Implement the new readahead aop and convert all callers (block_dev,
exfat, ext2, fat, gfs2, hpfs, isofs, jfs, nilfs2, ocfs2, omfs, qnx6,
reiserfs & udf).  The callers are all trivial except for GFS2 & OCFS2.
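To make the shape of the "trivial" conversions easy to see before the diff,
here is a sketch using placeholder names (foofs, foofs_get_block); it is not
one of the filesystems touched by this patch. The old ->readpages wrapper
around mpage_readpages() becomes a ->readahead wrapper around
mpage_readahead(), taking (start, nr_pages) instead of a page list:

/* Hypothetical before/after, for illustration only. */
static unsigned foofs_readahead(struct file *file,
		struct address_space *mapping, pgoff_t start,
		unsigned nr_pages)
{
	return mpage_readahead(mapping, start, nr_pages, foofs_get_block);
}

/* ...and in the address_space_operations table: */
static const struct address_space_operations foofs_aops = {
	.readpage	= foofs_readpage,
	.readahead	= foofs_readahead,	/* was .readpages */
	.writepage	= foofs_writepage,
};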
Signed-off-by: Matthew Wilcox (Oracle) Cc: cluster-devel@redhat.com Cc: ocfs2-devel@oss.oracle.com --- drivers/staging/exfat/exfat_super.c | 9 ++++--- fs/block_dev.c | 9 ++++--- fs/ext2/inode.c | 12 ++++----- fs/fat/inode.c | 8 +++--- fs/gfs2/aops.c | 20 +++++++-------- fs/hpfs/file.c | 8 +++--- fs/iomap/buffered-io.c | 2 +- fs/isofs/inode.c | 9 ++++--- fs/jfs/inode.c | 8 +++--- fs/mpage.c | 38 ++++++++++------------------- fs/nilfs2/inode.c | 13 +++++----- fs/ocfs2/aops.c | 32 +++++++++++------------- fs/omfs/file.c | 8 +++--- fs/qnx6/inode.c | 8 +++--- fs/reiserfs/inode.c | 10 ++++---- fs/udf/inode.c | 8 +++--- include/linux/mpage.h | 2 +- mm/migrate.c | 2 +- 18 files changed, 96 insertions(+), 110 deletions(-) diff --git a/drivers/staging/exfat/exfat_super.c b/drivers/staging/exfat/exfat_super.c index b81d2a87b82e..4dbfd8c84a1b 100644 --- a/drivers/staging/exfat/exfat_super.c +++ b/drivers/staging/exfat/exfat_super.c @@ -3002,10 +3002,11 @@ static int exfat_readpage(struct file *file, struct page *page) return mpage_readpage(page, exfat_get_block); } -static int exfat_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned int nr_pages) +static +unsigned exfat_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned int nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, exfat_get_block); + return mpage_readahead(mapping, start, nr_pages, exfat_get_block); } static int exfat_writepage(struct page *page, struct writeback_control *wbc) @@ -3104,7 +3105,7 @@ static sector_t _exfat_bmap(struct address_space *mapping, sector_t block) static const struct address_space_operations exfat_aops = { .readpage = exfat_readpage, - .readpages = exfat_readpages, + .readahead = exfat_readahead, .writepage = exfat_writepage, .writepages = exfat_writepages, .write_begin = exfat_write_begin, diff --git a/fs/block_dev.c b/fs/block_dev.c index 69bf2fb6f7cd..826a5104ff56 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -614,10 +614,11 @@ static int blkdev_readpage(struct file * file, struct page * page) return block_read_full_page(page, blkdev_get_block); } -static int blkdev_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static +unsigned blkdev_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, blkdev_get_block); + return mpage_readahead(mapping, start, nr_pages, blkdev_get_block); } static int blkdev_write_begin(struct file *file, struct address_space *mapping, @@ -2062,7 +2063,7 @@ static int blkdev_writepages(struct address_space *mapping, static const struct address_space_operations def_blk_aops = { .readpage = blkdev_readpage, - .readpages = blkdev_readpages, + .readahead = blkdev_readahead, .writepage = blkdev_writepage, .write_begin = blkdev_write_begin, .write_end = blkdev_write_end, diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c index 119667e65890..0440eb9f24de 100644 --- a/fs/ext2/inode.c +++ b/fs/ext2/inode.c @@ -877,11 +877,11 @@ static int ext2_readpage(struct file *file, struct page *page) return mpage_readpage(page, ext2_get_block); } -static int -ext2_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned +ext2_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, ext2_get_block); + return 
mpage_readahead(mapping, start, nr_pages, ext2_get_block); } static int @@ -966,7 +966,7 @@ ext2_dax_writepages(struct address_space *mapping, struct writeback_control *wbc const struct address_space_operations ext2_aops = { .readpage = ext2_readpage, - .readpages = ext2_readpages, + .readahead = ext2_readahead, .writepage = ext2_writepage, .write_begin = ext2_write_begin, .write_end = ext2_write_end, @@ -980,7 +980,7 @@ const struct address_space_operations ext2_aops = { const struct address_space_operations ext2_nobh_aops = { .readpage = ext2_readpage, - .readpages = ext2_readpages, + .readahead = ext2_readahead, .writepage = ext2_nobh_writepage, .write_begin = ext2_nobh_write_begin, .write_end = nobh_write_end, diff --git a/fs/fat/inode.c b/fs/fat/inode.c index 594b05ae16c9..a671dc6a122a 100644 --- a/fs/fat/inode.c +++ b/fs/fat/inode.c @@ -210,10 +210,10 @@ static int fat_readpage(struct file *file, struct page *page) return mpage_readpage(page, fat_get_block); } -static int fat_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned fat_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, fat_get_block); + return mpage_readahead(mapping, start, nr_pages, fat_get_block); } static void fat_write_failed(struct address_space *mapping, loff_t to) @@ -344,7 +344,7 @@ int fat_block_truncate_page(struct inode *inode, loff_t from) static const struct address_space_operations fat_aops = { .readpage = fat_readpage, - .readpages = fat_readpages, + .readahead = fat_readahead, .writepage = fat_writepage, .writepages = fat_writepages, .write_begin = fat_write_begin, diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c index ba83b49ce18c..5c6d89603f91 100644 --- a/fs/gfs2/aops.c +++ b/fs/gfs2/aops.c @@ -577,7 +577,7 @@ int gfs2_internal_read(struct gfs2_inode *ip, char *buf, loff_t *pos, } /** - * gfs2_readpages - Read a bunch of pages at once + * gfs2_readahead - Read a bunch of pages at once * @file: The file to read from * @mapping: Address space info * @pages: List of pages to read @@ -590,16 +590,15 @@ int gfs2_internal_read(struct gfs2_inode *ip, char *buf, loff_t *pos, * obviously not something we'd want to do on too regular a basis. * Any I/O we ignore at this time will be done via readpage later. * 2. We don't handle stuffed files here we let readpage do the honours. - * 3. mpage_readpages() does most of the heavy lifting in the common case. + * 3. mpage_readahead() does most of the heavy lifting in the common case. * 4. gfs2_block_map() is relied upon to set BH_Boundary in the right places. 
*/ -static int gfs2_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned gfs2_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { struct inode *inode = mapping->host; struct gfs2_inode *ip = GFS2_I(inode); - struct gfs2_sbd *sdp = GFS2_SB(inode); struct gfs2_holder gh; int ret; @@ -608,13 +607,12 @@ static int gfs2_readpages(struct file *file, struct address_space *mapping, if (unlikely(ret)) goto out_uninit; if (!gfs2_is_stuffed(ip)) - ret = mpage_readpages(mapping, pages, nr_pages, gfs2_block_map); + nr_pages = mpage_readahead(mapping, start, nr_pages, + gfs2_block_map); gfs2_glock_dq(&gh); out_uninit: gfs2_holder_uninit(&gh); - if (unlikely(gfs2_withdrawn(sdp))) - ret = -EIO; - return ret; + return nr_pages; } /** @@ -828,7 +826,7 @@ static const struct address_space_operations gfs2_aops = { .writepage = gfs2_writepage, .writepages = gfs2_writepages, .readpage = gfs2_readpage, - .readpages = gfs2_readpages, + .readahead = gfs2_readahead, .bmap = gfs2_bmap, .invalidatepage = gfs2_invalidatepage, .releasepage = gfs2_releasepage, @@ -842,7 +840,7 @@ static const struct address_space_operations gfs2_jdata_aops = { .writepage = gfs2_jdata_writepage, .writepages = gfs2_jdata_writepages, .readpage = gfs2_readpage, - .readpages = gfs2_readpages, + .readahead = gfs2_readahead, .set_page_dirty = jdata_set_page_dirty, .bmap = gfs2_bmap, .invalidatepage = gfs2_invalidatepage, diff --git a/fs/hpfs/file.c b/fs/hpfs/file.c index b36abf9cb345..a0f7cc0262ae 100644 --- a/fs/hpfs/file.c +++ b/fs/hpfs/file.c @@ -125,10 +125,10 @@ static int hpfs_writepage(struct page *page, struct writeback_control *wbc) return block_write_full_page(page, hpfs_get_block, wbc); } -static int hpfs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned hpfs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, hpfs_get_block); + return mpage_readahead(mapping, start, nr_pages, hpfs_get_block); } static int hpfs_writepages(struct address_space *mapping, @@ -198,7 +198,7 @@ static int hpfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, const struct address_space_operations hpfs_aops = { .readpage = hpfs_readpage, .writepage = hpfs_writepage, - .readpages = hpfs_readpages, + .readahead = hpfs_readahead, .writepages = hpfs_writepages, .write_begin = hpfs_write_begin, .write_end = hpfs_write_end, diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 7c84c4c027c4..cb3511eb152a 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -359,7 +359,7 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops) } /* - * Just like mpage_readpages and block_read_full_page we always + * Just like mpage_readahead and block_read_full_page we always * return 0 and just mark the page as PageError on errors. This * should be cleaned up all through the stack eventually. 
*/ diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c index 62c0462dc89f..11154cc35b16 100644 --- a/fs/isofs/inode.c +++ b/fs/isofs/inode.c @@ -1185,10 +1185,11 @@ static int isofs_readpage(struct file *file, struct page *page) return mpage_readpage(page, isofs_get_block); } -static int isofs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static +unsigned isofs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, isofs_get_block); + return mpage_readahead(mapping, start, nr_pages, isofs_get_block); } static sector_t _isofs_bmap(struct address_space *mapping, sector_t block) @@ -1198,7 +1199,7 @@ static sector_t _isofs_bmap(struct address_space *mapping, sector_t block) static const struct address_space_operations isofs_aops = { .readpage = isofs_readpage, - .readpages = isofs_readpages, + .readahead = isofs_readahead, .bmap = _isofs_bmap }; diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c index 9486afcdac76..1ed926ac2bb9 100644 --- a/fs/jfs/inode.c +++ b/fs/jfs/inode.c @@ -296,10 +296,10 @@ static int jfs_readpage(struct file *file, struct page *page) return mpage_readpage(page, jfs_get_block); } -static int jfs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned jfs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, jfs_get_block); + return mpage_readahead(mapping, start, nr_pages, jfs_get_block); } static void jfs_write_failed(struct address_space *mapping, loff_t to) @@ -358,7 +358,7 @@ static ssize_t jfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) const struct address_space_operations jfs_aops = { .readpage = jfs_readpage, - .readpages = jfs_readpages, + .readahead = jfs_readahead, .writepage = jfs_writepage, .writepages = jfs_writepages, .write_begin = jfs_write_begin, diff --git a/fs/mpage.c b/fs/mpage.c index ccba3c4c4479..91a148bcd582 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -91,7 +91,7 @@ mpage_alloc(struct block_device *bdev, } /* - * support function for mpage_readpages. The fs supplied get_block might + * support function for mpage_readahead. The fs supplied get_block might * return an up to date buffer. This is used to map that buffer into * the page, which allows readpage to avoid triggering a duplicate call * to get_block. @@ -338,13 +338,10 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) } /** - * mpage_readpages - populate an address space with some pages & start reads against them + * mpage_readahead - start reads against pages * @mapping: the address_space - * @pages: The address of a list_head which contains the target pages. These - * pages have their ->index populated and are otherwise uninitialised. - * The page at @pages->prev has the lowest file offset, and reads should be - * issued in @pages->prev to @pages->next order. - * @nr_pages: The number of pages at *@pages + * @start: The number of the first page to read. + * @nr_pages: The number of consecutive pages to read. * @get_block: The filesystem's block mapper function. * * This function walks the pages and the blocks within each page, building and @@ -381,36 +378,27 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) * * This all causes the disk requests to be issued in the correct order. 
*/ -int -mpage_readpages(struct address_space *mapping, struct list_head *pages, - unsigned nr_pages, get_block_t get_block) +unsigned mpage_readahead(struct address_space *mapping, pgoff_t start, + unsigned nr_pages, get_block_t get_block) { struct mpage_readpage_args args = { .get_block = get_block, .is_readahead = true, }; - unsigned page_idx; - - for (page_idx = 0; page_idx < nr_pages; page_idx++) { - struct page *page = lru_to_page(pages); + while (nr_pages--) { + struct page *page = readahead_page(mapping, start++); prefetchw(&page->flags); - list_del(&page->lru); - if (!add_to_page_cache_lru(page, mapping, - page->index, - readahead_gfp_mask(mapping))) { - args.page = page; - args.nr_pages = nr_pages - page_idx; - args.bio = do_mpage_readpage(&args); - } + args.page = page; + args.nr_pages = nr_pages; + args.bio = do_mpage_readpage(&args); put_page(page); } - BUG_ON(!list_empty(pages)); if (args.bio) mpage_bio_submit(REQ_OP_READ, REQ_RAHEAD, args.bio); return 0; } -EXPORT_SYMBOL(mpage_readpages); +EXPORT_SYMBOL(mpage_readahead); /* * This isn't called much at all @@ -563,7 +551,7 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc, * Page has buffers, but they are all unmapped. The page was * created by pagein or read over a hole which was handled by * block_read_full_page(). If this address_space is also - * using mpage_readpages then this can rarely happen. + * using mpage_readahead then this can rarely happen. */ goto confused; } diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index 671085512e0f..ecf543f35256 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -146,17 +146,18 @@ static int nilfs_readpage(struct file *file, struct page *page) } /** - * nilfs_readpages() - implement readpages() method of nilfs_aops {} + * nilfs_readahead() - implement readahead() method of nilfs_aops {} * address_space_operations. * @file - file struct of the file to be read * @mapping - address_space struct used for reading multiple pages - * @pages - the pages to be read + * @start - the first page to read * @nr_pages - number of pages to be read */ -static int nilfs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned int nr_pages) +static +unsigned nilfs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned int nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, nilfs_get_block); + return mpage_readahead(mapping, start, nr_pages, nilfs_get_block); } static int nilfs_writepages(struct address_space *mapping, @@ -308,7 +309,7 @@ const struct address_space_operations nilfs_aops = { .readpage = nilfs_readpage, .writepages = nilfs_writepages, .set_page_dirty = nilfs_set_page_dirty, - .readpages = nilfs_readpages, + .readahead = nilfs_readahead, .write_begin = nilfs_write_begin, .write_end = nilfs_write_end, /* .releasepage = nilfs_releasepage, */ diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index 3a67a6518ddf..a9784a6442b7 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -350,14 +350,13 @@ static int ocfs2_readpage(struct file *file, struct page *page) * grow out to a tree. If need be, detecting boundary extents could * trivially be added in a future version of ocfs2_get_block(). 
*/ -static int ocfs2_readpages(struct file *filp, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static +unsigned ocfs2_readahead(struct file *filp, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - int ret, err = -EIO; + int ret; struct inode *inode = mapping->host; struct ocfs2_inode_info *oi = OCFS2_I(inode); - loff_t start; - struct page *last; /* * Use the nonblocking flag for the dlm code to avoid page @@ -365,36 +364,33 @@ static int ocfs2_readpages(struct file *filp, struct address_space *mapping, */ ret = ocfs2_inode_lock_full(inode, NULL, 0, OCFS2_LOCK_NONBLOCK); if (ret) - return err; + return nr_pages; - if (down_read_trylock(&oi->ip_alloc_sem) == 0) { - ocfs2_inode_unlock(inode, 0); - return err; - } + if (down_read_trylock(&oi->ip_alloc_sem) == 0) + goto out_unlock; /* * Don't bother with inline-data. There isn't anything * to read-ahead in that case anyway... */ if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) - goto out_unlock; + goto out_up; /* * Check whether a remote node truncated this file - we just * drop out in that case as it's not worth handling here. */ - last = lru_to_page(pages); - start = (loff_t)last->index << PAGE_SHIFT; if (start >= i_size_read(inode)) - goto out_unlock; + goto out_up; - err = mpage_readpages(mapping, pages, nr_pages, ocfs2_get_block); + nr_pages = mpage_readahead(mapping, start, nr_pages, ocfs2_get_block); -out_unlock: +out_up: up_read(&oi->ip_alloc_sem); +out_unlock: ocfs2_inode_unlock(inode, 0); - return err; + return nr_pages; } /* Note: Because we don't support holes, our allocation has @@ -2474,7 +2470,7 @@ static ssize_t ocfs2_direct_IO(struct kiocb *iocb, struct iov_iter *iter) const struct address_space_operations ocfs2_aops = { .readpage = ocfs2_readpage, - .readpages = ocfs2_readpages, + .readahead = ocfs2_readahead, .writepage = ocfs2_writepage, .write_begin = ocfs2_write_begin, .write_end = ocfs2_write_end, diff --git a/fs/omfs/file.c b/fs/omfs/file.c index d640b9388238..e7392f49f619 100644 --- a/fs/omfs/file.c +++ b/fs/omfs/file.c @@ -289,10 +289,10 @@ static int omfs_readpage(struct file *file, struct page *page) return block_read_full_page(page, omfs_get_block); } -static int omfs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned omfs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, omfs_get_block); + return mpage_readahead(mapping, start, nr_pages, omfs_get_block); } static int omfs_writepage(struct page *page, struct writeback_control *wbc) @@ -373,7 +373,7 @@ const struct inode_operations omfs_file_inops = { const struct address_space_operations omfs_aops = { .readpage = omfs_readpage, - .readpages = omfs_readpages, + .readahead = omfs_readahead, .writepage = omfs_writepage, .writepages = omfs_writepages, .write_begin = omfs_write_begin, diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c index 345db56c98fd..949e823a1d30 100644 --- a/fs/qnx6/inode.c +++ b/fs/qnx6/inode.c @@ -99,10 +99,10 @@ static int qnx6_readpage(struct file *file, struct page *page) return mpage_readpage(page, qnx6_get_block); } -static int qnx6_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned qnx6_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, qnx6_get_block); + 
return mpage_readahead(mapping, start, nr_pages, qnx6_get_block); } /* @@ -499,7 +499,7 @@ static sector_t qnx6_bmap(struct address_space *mapping, sector_t block) } static const struct address_space_operations qnx6_aops = { .readpage = qnx6_readpage, - .readpages = qnx6_readpages, + .readahead = qnx6_readahead, .bmap = qnx6_bmap }; diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index 6419e6dacc39..0f2666ef23dd 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -1160,11 +1160,11 @@ int reiserfs_get_block(struct inode *inode, sector_t block, return retval; } -static int -reiserfs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned +reiserfs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, reiserfs_get_block); + return mpage_readahead(mapping, start, nr_pages, reiserfs_get_block); } /* @@ -3434,7 +3434,7 @@ int reiserfs_setattr(struct dentry *dentry, struct iattr *attr) const struct address_space_operations reiserfs_address_space_operations = { .writepage = reiserfs_writepage, .readpage = reiserfs_readpage, - .readpages = reiserfs_readpages, + .readahead = reiserfs_readahead, .releasepage = reiserfs_releasepage, .invalidatepage = reiserfs_invalidatepage, .write_begin = reiserfs_write_begin, diff --git a/fs/udf/inode.c b/fs/udf/inode.c index e875bc5668ee..97c9bccf1be4 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -195,10 +195,10 @@ static int udf_readpage(struct file *file, struct page *page) return mpage_readpage(page, udf_get_block); } -static int udf_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned udf_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return mpage_readpages(mapping, pages, nr_pages, udf_get_block); + return mpage_readahead(mapping, start, nr_pages, udf_get_block); } static int udf_write_begin(struct file *file, struct address_space *mapping, @@ -234,7 +234,7 @@ static sector_t udf_bmap(struct address_space *mapping, sector_t block) const struct address_space_operations udf_aops = { .readpage = udf_readpage, - .readpages = udf_readpages, + .readahead = udf_readahead, .writepage = udf_writepage, .writepages = udf_writepages, .write_begin = udf_write_begin, diff --git a/include/linux/mpage.h b/include/linux/mpage.h index 001f1fcf9836..dabf7b5a6a28 100644 --- a/include/linux/mpage.h +++ b/include/linux/mpage.h @@ -14,7 +14,7 @@ struct writeback_control; -int mpage_readpages(struct address_space *mapping, struct list_head *pages, +unsigned mpage_readahead(struct address_space *mapping, pgoff_t start, unsigned nr_pages, get_block_t get_block); int mpage_readpage(struct page *page, get_block_t get_block); int mpage_writepages(struct address_space *mapping, diff --git a/mm/migrate.c b/mm/migrate.c index edf42ed90030..860925dd2725 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1020,7 +1020,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, * to the LRU. Later, when the IO completes the pages are * marked uptodate and unlocked. However, the queueing * could be merging multiple pages for one bio (e.g. - * mpage_readpages). If an allocation happens for the + * mpage_readahead). If an allocation happens for the * second or third page, the process can end up locking * the same page twice and deadlocking. 
Rather than * trying to be clever about what pages can be locked, From patchwork Sat Feb 1 15:12:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361101 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 236E7139A for ; Sat, 1 Feb 2020 15:13:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DB1DC2073D for ; Sat, 1 Feb 2020 15:13:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="MBrxoQXt" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DB1DC2073D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 25EDE6B0605; Sat, 1 Feb 2020 10:12:50 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 1EB796B0606; Sat, 1 Feb 2020 10:12:50 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0B0AB6B0608; Sat, 1 Feb 2020 10:12:50 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0050.hostedemail.com [216.40.44.50]) by kanga.kvack.org (Postfix) with ESMTP id E84206B0605 for ; Sat, 1 Feb 2020 10:12:49 -0500 (EST) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 996728248047 for ; Sat, 1 Feb 2020 15:12:49 +0000 (UTC) X-FDA: 76441900458.02.cook91_8d0a21e8b6e0a X-Spam-Summary: 2,0,0,26ad06cf60920524,d41d8cd98f00b204,willy@infradead.org,:linux-fsdevel@vger.kernel.org:willy@infradead.org::linux-kernel@vger.kernel.org:linux-btrfs@vger.kernel.org,RULES_HIT:41:69:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1544:1605:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:3868:3870:3872:3874:4117:4321:4423:4605:5007:6261:6653:7576:7903:8531:8660:9592:10004:11026:11658:11914:12043:12296:12297:12438:12555:12895:12986:13148:13230:13894:14096:14181:14394:14721:21080:21324:21433:21451:21627:21987:21990:30054,0,RBL:198.137.202.133:@infradead.org:.lbl8.mailshell.net-62.8.0.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: cook91_8d0a21e8b6e0a X-Filterd-Recvd-Size: 6646 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf03.hostedemail.com (Postfix) with ESMTP for ; Sat, 1 Feb 2020 15:12:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=AsVOh91gi/2Ld/EfqBNlMyY/ZrCMfPz2+/lz/gfCdpw=; 
b=MBrxoQXtUvy8vWHQ3XAweprjhV tiYtG3J6A6nnhKyc9Z+LC9yr/9LTS1M7W6UuRWcP250YINSxrdnppzWLvnS64GmblAIYmr3GHoyKa cS18kXvn2dq8Dn6ZmNxSay0CVSfYAo/NuwsUzkOjNpGhYPQT6+6C2OXa01PFnp8WeEce/ndx837Sq w2JmNkZLCnnhvzVhe3bU9W0seTvgZ+OenrncJ1GqCYhvFSYxnmo4HviXXgV86zMKgL83odgIKwARF mCSyXZrK7AHtA57CwUWvEpo7xmwlfy43RwcqC0Z9HOWSQBDprLMpEWDd6qYm2ceJmXSgpHGeSELp+ lFE1U3Zg==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1ixuRu-0006Hb-8R; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-btrfs@vger.kernel.org Subject: [PATCH v4 06/12] btrfs: Convert from readpages to readahead Date: Sat, 1 Feb 2020 07:12:34 -0800 Message-Id: <20200201151240.24082-7-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use the new readahead operation in btrfs Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-btrfs@vger.kernel.org --- fs/btrfs/extent_io.c | 19 +++++++------------ fs/btrfs/extent_io.h | 2 +- fs/btrfs/inode.c | 18 +++++++++--------- 3 files changed, 17 insertions(+), 22 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index e2d30287e2d5..18b1fbfdcab2 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -4269,7 +4269,7 @@ int extent_writepages(struct address_space *mapping, return ret; } -int extent_readpages(struct address_space *mapping, struct list_head *pages, +unsigned extent_readahead(struct address_space *mapping, pgoff_t start, unsigned nr_pages) { struct bio *bio = NULL; @@ -4280,22 +4280,17 @@ int extent_readpages(struct address_space *mapping, struct list_head *pages, int nr = 0; u64 prev_em_start = (u64)-1; - while (!list_empty(pages)) { + while (nr_pages) { u64 contig_end = 0; - for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) { - struct page *page = lru_to_page(pages); + for (nr = 0; nr < ARRAY_SIZE(pagepool); nr++) { + struct page *page = readahead_page(mapping, start++); prefetchw(&page->flags); - list_del(&page->lru); - if (add_to_page_cache_lru(page, mapping, page->index, - readahead_gfp_mask(mapping))) { - put_page(page); - break; - } - - pagepool[nr++] = page; + pagepool[nr] = page; contig_end = page_offset(page) + PAGE_SIZE - 1; + if (--nr_pages == 0) + break; } if (nr) { diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 5d205bbaafdc..4fd9dc05592b 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -198,7 +198,7 @@ int extent_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, struct writeback_control *wbc); -int extent_readpages(struct address_space *mapping, struct list_head *pages, +unsigned extent_readahead(struct address_space *mapping, pgoff_t start, unsigned nr_pages); int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, __u64 start, __u64 len); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 6d2bb58d277a..7622918d7624 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -4723,8 +4723,8 @@ static void evict_inode_truncate_pages(struct inode *inode) /* * Keep looping until we have no more ranges in the io tree. 
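The extent_readahead() hunk above replaces btrfs's list walking with direct page-cache lookups. The batching idea, reduced to its core, looks roughly like the sketch below; readahead_page() is the helper added earlier in this series, and submit_batch() is an invented placeholder for the contiguous-range bio submission that extent_readahead() actually performs:

#include <linux/pagemap.h>
#include <linux/prefetch.h>

#define RA_BATCH 16	/* btrfs uses ARRAY_SIZE(pagepool) */

/* Placeholder for the contiguous-range submission done in extent_readahead(). */
static void submit_batch(struct page **pages, unsigned nr)
{
}

static void batch_pages(struct address_space *mapping, pgoff_t start,
			unsigned nr_pages)
{
	struct page *pagepool[RA_BATCH];

	while (nr_pages) {
		unsigned nr = 0;

		/* Fill a batch straight from the page cache. */
		while (nr < RA_BATCH && nr_pages) {
			struct page *page = readahead_page(mapping, start++);

			prefetchw(&page->flags);
			pagepool[nr++] = page;
			nr_pages--;
		}
		submit_batch(pagepool, nr);
	}
}

Compared with the old loop there is no add_to_page_cache_lru() failure path to handle, which is what lets the error handling shrink.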
- * We can have ongoing bios started by readpages (called from readahead) - * that have their endio callback (extent_io.c:end_bio_extent_readpage) + * We can have ongoing bios started by readahead that have + * their endio callback (extent_io.c:end_bio_extent_readpage) * still in progress (unlocked the pages in the bio but did not yet * unlocked the ranges in the io tree). Therefore this means some * ranges can still be locked and eviction started because before @@ -6925,11 +6925,11 @@ static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend, * for it to complete) and then invalidate the pages for * this range (through invalidate_inode_pages2_range()), * but that can lead us to a deadlock with a concurrent - * call to readpages() (a buffered read or a defrag call + * call to readahead (a buffered read or a defrag call * triggered a readahead) on a page lock due to an * ordered dio extent we created before but did not have * yet a corresponding bio submitted (whence it can not - * complete), which makes readpages() wait for that + * complete), which makes readahead wait for that * ordered extent to complete while holding a lock on * that page. */ @@ -8168,11 +8168,11 @@ static int btrfs_writepages(struct address_space *mapping, return extent_writepages(mapping, wbc); } -static int -btrfs_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned +btrfs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { - return extent_readpages(mapping, pages, nr_pages); + return extent_readahead(mapping, start, nr_pages); } static int __btrfs_releasepage(struct page *page, gfp_t gfp_flags) @@ -10377,7 +10377,7 @@ static const struct address_space_operations btrfs_aops = { .readpage = btrfs_readpage, .writepage = btrfs_writepage, .writepages = btrfs_writepages, - .readpages = btrfs_readpages, + .readahead = btrfs_readahead, .direct_IO = btrfs_direct_IO, .invalidatepage = btrfs_invalidatepage, .releasepage = btrfs_releasepage, From patchwork Sat Feb 1 15:12:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361071 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 462D913A4 for ; Sat, 1 Feb 2020 15:12:49 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1383720661 for ; Sat, 1 Feb 2020 15:12:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="DcG4wxUB" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1383720661 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BD4846B05FD; Sat, 1 Feb 2020 10:12:45 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id BACC26B05FE; Sat, 1 Feb 2020 10:12:45 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AC1E96B05FF; Sat, 1 Feb 2020 10:12:45 -0500 (EST) X-Original-To: linux-mm@kvack.org 
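One detail worth spelling out: in this version of the series ->readahead returns an unsigned count, and the conversions above and below appear to treat it as the number of pages the filesystem did not consume; ocfs2 returns nr_pages when its cluster lock is contended, ext4 and f2fs return nr_pages for inline-data inodes, and 0 signals that every page was handled. A sketch of that convention; bar_cannot_readahead() and bar_do_readahead() are invented placeholders, not functions from any of these patches:

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Placeholder: a real filesystem tests inline data, trylocks, and so on. */
static bool bar_cannot_readahead(struct inode *inode)
{
	return false;
}

/* Placeholder for the filesystem's actual multi-page read path. */
static unsigned bar_do_readahead(struct address_space *mapping,
				 pgoff_t start, unsigned nr_pages)
{
	return 0;
}

static unsigned bar_readahead(struct file *file, struct address_space *mapping,
			      pgoff_t start, unsigned nr_pages)
{
	/*
	 * Cannot make progress without blocking: report every page back
	 * to the caller so the core can clean them up.
	 */
	if (bar_cannot_readahead(mapping->host))
		return nr_pages;

	/* 0 means all nr_pages pages were (or will be) read. */
	return bar_do_readahead(mapping, start, nr_pages);
}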
X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0222.hostedemail.com [216.40.44.222]) by kanga.kvack.org (Postfix) with ESMTP id 97E686B05FD for ; Sat, 1 Feb 2020 10:12:45 -0500 (EST) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 4E89337F1 for ; Sat, 1 Feb 2020 15:12:45 +0000 (UTC) X-FDA: 76441900290.30.star31_8c6c8bd0fc73b X-Spam-Summary: 2,0,0,29e5107c84ff7122,d41d8cd98f00b204,willy@infradead.org,:linux-fsdevel@vger.kernel.org:willy@infradead.org::linux-kernel@vger.kernel.org:linux-erofs@lists.ozlabs.org,RULES_HIT:41:69:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1544:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:2901:3138:3139:3140:3141:3142:3354:3865:3866:3868:3870:4321:4605:5007:6119:6261:6653:7576:7903:10004:11026:11232:11233:11658:11914:12043:12296:12297:12438:12555:12683:12895:12986:13894:14096:14110:14181:14394:14721:21080:21451:21627:21990:30054:30056,0,RBL:198.137.202.133:@infradead.org:.lbl8.mailshell.net-62.8.0.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:26,LUA_SUMMARY:none X-HE-Tag: star31_8c6c8bd0fc73b X-Filterd-Recvd-Size: 5897 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf27.hostedemail.com (Postfix) with ESMTP for ; Sat, 1 Feb 2020 15:12:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=vx0x9ixSyk5FO3ZgNxdf5gKeGpRW9UmfbVaEbgzrEAE=; b=DcG4wxUBBQHlTnlL3fvdg8cFjw 5Dv1AcTr/YNgWG3aykyBm8YlcFdQcXQYNTnXgoM2csLfzDKG1gefpRznvkmRZ7Xb/rh9LekVoGw0X QL/RZqyLUcjGkwrMmk/QTUwHQ6kZHBRfOzWn6bcmkQoQI/eA5BLvp7jPAtPwYVfsd9IuNWOypqgLO SSuvx0+4N5CJHDAiVxksTVPEXzTmApNEcZ3rTEGEGX573BxVUkhgTTqHY+lK47OW1M5ia+QvKnNCF E44xSuq4cbUqaIkZ6qN+5jjVzdrdmAcrVdjm3uiHOCc8lYwLsOjYLCPqX5TTtbK89dtTZIwK4QMbc DRVBZBJg==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1ixuRu-0006Hf-9S; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-erofs@lists.ozlabs.org Subject: [PATCH v4 07/12] erofs: Convert uncompressed files from readpages to readahead Date: Sat, 1 Feb 2020 07:12:35 -0800 Message-Id: <20200201151240.24082-8-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use the new readahead operation in erofs Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-erofs@lists.ozlabs.org --- fs/erofs/data.c | 33 +++++++++++++-------------------- fs/erofs/zdata.c | 2 +- include/trace/events/erofs.h | 6 +++--- 3 files changed, 17 insertions(+), 24 deletions(-) diff --git a/fs/erofs/data.c b/fs/erofs/data.c index fc3a8d8064f8..514b43463a21 100644 --- 
a/fs/erofs/data.c +++ b/fs/erofs/data.c @@ -280,42 +280,35 @@ static int erofs_raw_access_readpage(struct file *file, struct page *page) return 0; } -static int erofs_raw_access_readpages(struct file *filp, +static unsigned erofs_raw_access_readahead(struct file *file, struct address_space *mapping, - struct list_head *pages, + pgoff_t start, unsigned int nr_pages) { erofs_off_t last_block; struct bio *bio = NULL; - gfp_t gfp = readahead_gfp_mask(mapping); - struct page *page = list_last_entry(pages, struct page, lru); - trace_erofs_readpages(mapping->host, page, nr_pages, true); + trace_erofs_readpages(mapping->host, start, nr_pages, true); for (; nr_pages; --nr_pages) { - page = list_entry(pages->prev, struct page, lru); + struct page *page = readahead_page(mapping, start++); prefetchw(&page->flags); - list_del(&page->lru); - if (!add_to_page_cache_lru(page, mapping, page->index, gfp)) { - bio = erofs_read_raw_page(bio, mapping, page, - &last_block, nr_pages, true); + bio = erofs_read_raw_page(bio, mapping, page, &last_block, + nr_pages, true); - /* all the page errors are ignored when readahead */ - if (IS_ERR(bio)) { - pr_err("%s, readahead error at page %lu of nid %llu\n", - __func__, page->index, - EROFS_I(mapping->host)->nid); + /* all the page errors are ignored when readahead */ + if (IS_ERR(bio)) { + pr_err("%s, readahead error at page %lu of nid %llu\n", + __func__, page->index, + EROFS_I(mapping->host)->nid); - bio = NULL; - } + bio = NULL; } - /* pages could still be locked */ put_page(page); } - DBG_BUGON(!list_empty(pages)); /* the rare case (end in gaps) */ if (bio) @@ -358,7 +351,7 @@ static sector_t erofs_bmap(struct address_space *mapping, sector_t block) /* for uncompressed (aligned) files and raw access for other files */ const struct address_space_operations erofs_raw_access_aops = { .readpage = erofs_raw_access_readpage, - .readpages = erofs_raw_access_readpages, + .readahead = erofs_raw_access_readahead, .bmap = erofs_bmap, }; diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index 80e47f07d946..17f45fcb8c5c 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -1315,7 +1315,7 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping, struct page *head = NULL; LIST_HEAD(pagepool); - trace_erofs_readpages(mapping->host, lru_to_page(pages), + trace_erofs_readpages(mapping->host, lru_to_page(pages)->index, nr_pages, false); f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT; diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h index 27f5caa6299a..bf9806fd1306 100644 --- a/include/trace/events/erofs.h +++ b/include/trace/events/erofs.h @@ -113,10 +113,10 @@ TRACE_EVENT(erofs_readpage, TRACE_EVENT(erofs_readpages, - TP_PROTO(struct inode *inode, struct page *page, unsigned int nrpage, + TP_PROTO(struct inode *inode, pgoff_t start, unsigned int nrpage, bool raw), - TP_ARGS(inode, page, nrpage, raw), + TP_ARGS(inode, start, nrpage, raw), TP_STRUCT__entry( __field(dev_t, dev ) @@ -129,7 +129,7 @@ TRACE_EVENT(erofs_readpages, TP_fast_assign( __entry->dev = inode->i_sb->s_dev; __entry->nid = EROFS_I(inode)->nid; - __entry->start = page->index; + __entry->start = start; __entry->nrpage = nrpage; __entry->raw = raw; ), From patchwork Sat Feb 1 15:12:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361091 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by 
pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3EF3214B4 for ; Sat, 1 Feb 2020 15:13:03 +0000 (UTC) Received: from willy by bombadil.infradead.org with local (Exim 4.92.3
#3 (Red Hat Linux)) id 1ixuRu-0006Hj-AR; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-erofs@lists.ozlabs.org Subject: [PATCH v4 08/12] erofs: Convert compressed files from readpages to readahead Date: Sat, 1 Feb 2020 07:12:36 -0800 Message-Id: <20200201151240.24082-9-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use the new readahead operation in erofs. Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-erofs@lists.ozlabs.org --- fs/erofs/zdata.c | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index 17f45fcb8c5c..97c05200a784 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -1303,28 +1303,26 @@ static bool should_decompress_synchronously(struct erofs_sb_info *sbi, return nr <= sbi->max_sync_decompress_pages; } -static int z_erofs_readpages(struct file *filp, struct address_space *mapping, - struct list_head *pages, unsigned int nr_pages) +static +unsigned z_erofs_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned int nr_pages) { struct inode *const inode = mapping->host; struct erofs_sb_info *const sbi = EROFS_I_SB(inode); bool sync = should_decompress_synchronously(sbi, nr_pages); struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode); - gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL); struct page *head = NULL; LIST_HEAD(pagepool); - trace_erofs_readpages(mapping->host, lru_to_page(pages)->index, - nr_pages, false); + trace_erofs_readpages(inode, start, nr_pages, false); - f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT; + f.headoffset = (erofs_off_t)start << PAGE_SHIFT; for (; nr_pages; --nr_pages) { - struct page *page = lru_to_page(pages); + struct page *page = readahead_page(mapping, start); prefetchw(&page->flags); - list_del(&page->lru); /* * A pure asynchronous readahead is indicated if @@ -1333,11 +1331,6 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping, */ sync &= !(PageReadahead(page) && !head); - if (add_to_page_cache_lru(page, mapping, page->index, gfp)) { - list_add(&page->lru, &pagepool); - continue; - } - set_page_private(page, (unsigned long)head); head = page; } @@ -1371,6 +1364,6 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping, const struct address_space_operations z_erofs_aops = { .readpage = z_erofs_readpage, - .readpages = z_erofs_readpages, + .readahead = z_erofs_readahead, }; From patchwork Sat Feb 1 15:12:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361105 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BEB7213A4 for ; Sat, 1 Feb 2020 15:13:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7E93D2073D for ; Sat, 1 Feb 2020 15:13:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail 
reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="T7D+lG5G" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7E93D2073D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8C80E6B0609; Sat, 1 Feb 2020 10:12:51 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 878056B060A; Sat, 1 Feb 2020 10:12:51 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7B7866B060B; Sat, 1 Feb 2020 10:12:51 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0070.hostedemail.com [216.40.44.70]) by kanga.kvack.org (Postfix) with ESMTP id 65B076B0609 for ; Sat, 1 Feb 2020 10:12:51 -0500 (EST) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 32C9C2827 for ; Sat, 1 Feb 2020 15:12:51 +0000 (UTC) X-FDA: 76441900542.02.can66_8d42ff05a3d0e X-Spam-Summary: 2,0,0,e8fe627c5e32e32c,d41d8cd98f00b204,willy@infradead.org,:linux-fsdevel@vger.kernel.org:willy@infradead.org::linux-kernel@vger.kernel.org:linux-ext4@vger.kernel.org,RULES_HIT:2:41:69:355:379:541:800:960:973:982:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1605:1730:1747:1777:1792:2198:2199:2393:2559:2562:2693:3138:3139:3140:3141:3142:3865:3867:3870:3871:3872:4049:4119:4321:4605:5007:6119:6261:6653:7576:7903:10004:11026:11232:11658:11914:12043:12295:12296:12297:12438:12555:12895:13894:14096:14394:21080:21451:21627:21990:30054,0,RBL:198.137.202.133:@infradead.org:.lbl8.mailshell.net-62.8.0.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:35,LUA_SUMMARY:none X-HE-Tag: can66_8d42ff05a3d0e X-Filterd-Recvd-Size: 8651 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf23.hostedemail.com (Postfix) with ESMTP for ; Sat, 1 Feb 2020 15:12:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=xYwAfdpS6hrmEl94Puvsi7IdImGde6rBNX47LcdKaVk=; b=T7D+lG5GLXYKWTZpicm/6rP+Dy BGGfBYqgDqdVsPkeCgloGKm21LWX+OmYBNOuyoiHCkn1URRQf8+LBBlAbw3wQKVNzO+KnaGLoCTd5 C4TVclnA99vj0PbiDQury7MRKSUlP+bQbw5Nneu8mrKSR5n+YJC41TL+wu+9o0E9+PkjND8fwWKG2 GuTsHnkhEVKV8iN461UDqtyqhf9SSmoo2tv43fI5/lNjdRFyz1aHpAFHcGL6AXQRR/OtB6laaZgCV tqEuPiQY+sQ/vikq277LnDGO+lU0GyIUDT3qh8h064exlNntBmhPVa4CMd0NJHIMIkKe6gDrilX7u evbguPQg==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1ixuRu-0006Ht-Bn; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org Subject: [PATCH v4 09/12] ext4: Convert from readpages to readahead Date: Sat, 1 Feb 2020 07:12:37 -0800 
Message-Id: <20200201151240.24082-10-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use the new readahead operation in ext4 Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-ext4@vger.kernel.org --- fs/ext4/ext4.h | 5 ++--- fs/ext4/inode.c | 24 ++++++++++++------------ fs/ext4/readpage.c | 20 +++++++------------- fs/ext4/verity.c | 16 +++++++++++----- 4 files changed, 32 insertions(+), 33 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 9a2ee2428ecc..346ad9e1403b 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -3275,9 +3275,8 @@ static inline void ext4_set_de_type(struct super_block *sb, } /* readpages.c */ -extern int ext4_mpage_readpages(struct address_space *mapping, - struct list_head *pages, struct page *page, - unsigned nr_pages, bool is_readahead); +extern int ext4_mpage_readpages(struct address_space *mapping, pgoff_t start, + struct page *page, unsigned nr_pages, bool is_readahead); extern int __init ext4_init_post_read_processing(void); extern void ext4_exit_post_read_processing(void); diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 3313168b680f..e1f5864e5ecf 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3218,7 +3218,7 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block) static int ext4_readpage(struct file *file, struct page *page) { int ret = -EAGAIN; - struct inode *inode = page->mapping->host; + struct inode *inode = file_inode(file); trace_ext4_readpage(page); @@ -3226,23 +3226,23 @@ static int ext4_readpage(struct file *file, struct page *page) ret = ext4_readpage_inline(inode, page); if (ret == -EAGAIN) - return ext4_mpage_readpages(page->mapping, NULL, page, 1, - false); + return ext4_mpage_readpages(page->mapping, 0, page, 1, false); return ret; } -static int -ext4_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned +ext4_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { struct inode *inode = mapping->host; - /* If the file has inline data, no need to do readpages. */ + /* If the file has inline data, no need to do readahead. 
*/ if (ext4_has_inline_data(inode)) - return 0; + return nr_pages; - return ext4_mpage_readpages(mapping, pages, NULL, nr_pages, true); + ext4_mpage_readpages(mapping, start, NULL, nr_pages, true); + return 0; } static void ext4_invalidatepage(struct page *page, unsigned int offset, @@ -3587,7 +3587,7 @@ static int ext4_set_page_dirty(struct page *page) static const struct address_space_operations ext4_aops = { .readpage = ext4_readpage, - .readpages = ext4_readpages, + .readahead = ext4_readahead, .writepage = ext4_writepage, .writepages = ext4_writepages, .write_begin = ext4_write_begin, @@ -3604,7 +3604,7 @@ static const struct address_space_operations ext4_aops = { static const struct address_space_operations ext4_journalled_aops = { .readpage = ext4_readpage, - .readpages = ext4_readpages, + .readahead = ext4_readahead, .writepage = ext4_writepage, .writepages = ext4_writepages, .write_begin = ext4_write_begin, @@ -3620,7 +3620,7 @@ static const struct address_space_operations ext4_journalled_aops = { static const struct address_space_operations ext4_da_aops = { .readpage = ext4_readpage, - .readpages = ext4_readpages, + .readahead = ext4_readahead, .writepage = ext4_writepage, .writepages = ext4_writepages, .write_begin = ext4_da_write_begin, diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c index c1769afbf799..aaeb298c8fdb 100644 --- a/fs/ext4/readpage.c +++ b/fs/ext4/readpage.c @@ -7,8 +7,8 @@ * * This was originally taken from fs/mpage.c * - * The intent is the ext4_mpage_readpages() function here is intended - * to replace mpage_readpages() in the general case, not just for + * The ext4_mpage_readahead() function here is intended to + * replace mpage_readahead() in the general case, not just for * encrypted files. It has some limitations (see below), where it * will fall back to read_block_full_page(), but these limitations * should only be hit when page_size != block_size. 
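ext4 (and f2fs, later in the series) keeps a single multi-page helper that serves both ->readpage and ->readahead; only the way pages are obtained differs. The shape of that helper, stripped down to the page handling with the block mapping reduced to a comment; readahead_page() is assumed from earlier in this series, and this is not the actual ext4 code:

#include <linux/pagemap.h>
#include <linux/prefetch.h>
#include <linux/mm.h>

static int fs_mpage_readpages(struct address_space *mapping, pgoff_t start,
			      struct page *page, unsigned nr_pages,
			      bool is_readahead)
{
	/* The ->readpage path passes nr_pages == 1 and the page directly. */
	for (; nr_pages; nr_pages--) {
		if (is_readahead) {
			/* Readahead: the page is already in the page cache. */
			page = readahead_page(mapping, start++);
			prefetchw(&page->flags);
		}

		/*
		 * ... map blocks and queue @page in a bio here, as the
		 * existing ext4_mpage_readpages() body does ...
		 */

		if (is_readahead)
			put_page(page);	/* drop the ref readahead_page() took */
	}
	return 0;
}

The wrapper then becomes trivial: for inline-data inodes ext4_readahead() returns nr_pages without touching anything, otherwise it calls the helper with page == NULL and returns 0. The fs-verity code loses its private pages list as well and instead inserts the Merkle-tree pages into the page cache itself before calling the helper.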
@@ -221,9 +221,8 @@ static inline loff_t ext4_readpage_limit(struct inode *inode) return i_size_read(inode); } -int ext4_mpage_readpages(struct address_space *mapping, - struct list_head *pages, struct page *page, - unsigned nr_pages, bool is_readahead) +int ext4_mpage_readpages(struct address_space *mapping, pgoff_t start, + struct page *page, unsigned nr_pages, bool is_readahead) { struct bio *bio = NULL; sector_t last_block_in_bio = 0; @@ -251,14 +250,10 @@ int ext4_mpage_readpages(struct address_space *mapping, int fully_mapped = 1; unsigned first_hole = blocks_per_page; - if (pages) { - page = lru_to_page(pages); + if (is_readahead) { + page = readahead_page(mapping, start++); prefetchw(&page->flags); - list_del(&page->lru); - if (add_to_page_cache_lru(page, mapping, page->index, - readahead_gfp_mask(mapping))) - goto next_page; } if (page_has_buffers(page)) @@ -406,10 +401,9 @@ int ext4_mpage_readpages(struct address_space *mapping, else unlock_page(page); next_page: - if (pages) + if (is_readahead) put_page(page); } - BUG_ON(pages && !list_empty(pages)); if (bio) submit_bio(bio); return 0; diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c index dc5ec724d889..40a550c0da2b 100644 --- a/fs/ext4/verity.c +++ b/fs/ext4/verity.c @@ -351,7 +351,6 @@ static int ext4_get_verity_descriptor(struct inode *inode, void *buf, static void ext4_merkle_tree_readahead(struct address_space *mapping, pgoff_t start_index, unsigned long count) { - LIST_HEAD(pages); unsigned int nr_pages = 0; struct page *page; pgoff_t index; @@ -360,16 +359,23 @@ static void ext4_merkle_tree_readahead(struct address_space *mapping, for (index = start_index; index < start_index + count; index++) { page = xa_load(&mapping->i_pages, index); if (!page || xa_is_value(page)) { - page = __page_cache_alloc(readahead_gfp_mask(mapping)); + gfp_t gfp = readahead_gfp_mask(mapping); + page = __page_cache_alloc(gfp); if (!page) break; - page->index = index; - list_add(&page->lru, &pages); + if (add_to_page_cache_lru(page, mapping, index, gfp)) { + put_page(page); + break; + } nr_pages++; } } + + if (!nr_pages) + return; + blk_start_plug(&plug); - ext4_mpage_readpages(mapping, &pages, NULL, nr_pages, true); + ext4_mpage_readpages(mapping, start_index, NULL, nr_pages, true); blk_finish_plug(&plug); } From patchwork Sat Feb 1 15:12:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361161 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A055A138D for ; Sat, 1 Feb 2020 15:31:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5F48F2070C for ; Sat, 1 Feb 2020 15:31:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="FlVyy6qa" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5F48F2070C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8917F6B0614; Sat, 1 Feb 2020 10:31:44 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 842C56B0615; Sat, 1 Feb 2020 10:31:44 -0500 (EST) X-Original-To: 
int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 758CC6B0616; Sat, 1 Feb 2020 10:31:44 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0245.hostedemail.com [216.40.44.245]) by kanga.kvack.org (Postfix) with ESMTP id 611186B0614 for ; Sat, 1 Feb 2020 10:31:44 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 24F0B181AEF00 for ; Sat, 1 Feb 2020 15:31:44 +0000 (UTC) X-FDA: 76441948128.23.goat90_f15576b14a5a X-Spam-Summary: 2,0,0,04a2074cd262ca13,d41d8cd98f00b204,willy@infradead.org,:linux-fsdevel@vger.kernel.org:willy@infradead.org::linux-kernel@vger.kernel.org:linux-f2fs-devel@lists.sourceforge.net,RULES_HIT:2:41:69:355:379:541:800:960:973:982:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1605:1730:1747:1777:1792:2198:2199:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:3870:4049:4119:4321:4605:5007:6119:6261:6653:7576:7903:10004:11026:11232:11473:11658:11914:12043:12296:12297:12438:12555:12683:12895:12986:13894:14096:14110:14394:21080:21451:21627:21990:30054,0,RBL:198.137.202.133:@infradead.org:.lbl8.mailshell.net-62.8.0.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:30,LUA_SUMMARY:none X-HE-Tag: goat90_f15576b14a5a X-Filterd-Recvd-Size: 8476 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf49.hostedemail.com (Postfix) with ESMTP for ; Sat, 1 Feb 2020 15:31:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=5KdHAUiS686jTpU0UKHCIX4BueXn8cIx7wjJa85X5I0=; b=FlVyy6qaG4IujDNRCtF80f9ySl u5mmlE5tsMA75R2R/eYDZpmR8IeeooX5NSL6xAXJCk/H+/5OJn0NPzsNOZ5WkFGxat2yok9CazbY8 1pOlT/2Ai0gOBLMnUF2xrNhDj9Pe55GWzzCiOqRfpc0eJXqS4+AnHxwDAZRaVO0ztF1uHWigPOp+l HXRKlCW9qU5wh3naAfOyhIlT9u7dCFkFf25j9ym/UMn5RYq51PhUzSsXbXfJaruowz2NAeBqJPszW 6VD6NliwDsjdULcKj4Y7T1NoMA+wy+jqvmmpBc5Ntaj6sZ2/RrVnZhTyZlszd0Okkg9ijvLICcDq7 t5xACMnQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1ixuRu-0006Hw-Cn; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Subject: [PATCH v4 10/12] f2fs: Convert from readpages to readahead Date: Sat, 1 Feb 2020 07:12:38 -0800 Message-Id: <20200201151240.24082-11-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use the new readahead operation in f2fs Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-f2fs-devel@lists.sourceforge.net --- fs/f2fs/data.c | 35 ++++++++++++++--------------------- 
fs/f2fs/f2fs.h | 5 ++--- fs/f2fs/verity.c | 16 +++++++++++----- include/trace/events/f2fs.h | 6 +++--- 4 files changed, 30 insertions(+), 32 deletions(-) diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 8bd9afa81c54..80803f8b1b40 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -2159,9 +2159,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, * use ->readpage() or do the necessary surgery to decouple ->readpages() * from read-ahead. */ -int f2fs_mpage_readpages(struct address_space *mapping, - struct list_head *pages, struct page *page, - unsigned nr_pages, bool is_readahead) +int f2fs_mpage_readpages(struct address_space *mapping, pgoff_t start, + struct page *page, unsigned nr_pages, bool is_readahead) { struct bio *bio = NULL; sector_t last_block_in_bio = 0; @@ -2192,15 +2191,10 @@ int f2fs_mpage_readpages(struct address_space *mapping, map.m_may_create = false; for (; nr_pages; nr_pages--) { - if (pages) { - page = list_last_entry(pages, struct page, lru); + if (is_readahead) { + page = readahead_page(mapping, start++); prefetchw(&page->flags); - list_del(&page->lru); - if (add_to_page_cache_lru(page, mapping, - page_index(page), - readahead_gfp_mask(mapping))) - goto next_page; } #ifdef CONFIG_F2FS_FS_COMPRESSION @@ -2243,7 +2237,7 @@ int f2fs_mpage_readpages(struct address_space *mapping, unlock_page(page); } next_page: - if (pages) + if (is_readahead) put_page(page); #ifdef CONFIG_F2FS_FS_COMPRESSION @@ -2259,10 +2253,9 @@ int f2fs_mpage_readpages(struct address_space *mapping, } #endif } - BUG_ON(pages && !list_empty(pages)); if (bio) __submit_bio(F2FS_I_SB(inode), bio, DATA); - return pages ? 0 : ret; + return ret; } static int f2fs_read_data_page(struct file *file, struct page *page) @@ -2282,27 +2275,27 @@ static int f2fs_read_data_page(struct file *file, struct page *page) ret = f2fs_read_inline_data(inode, page); if (ret == -EAGAIN) ret = f2fs_mpage_readpages(page_file_mapping(page), - NULL, page, 1, false); + 0, page, 1, false); return ret; } -static int f2fs_read_data_pages(struct file *file, +static unsigned f2fs_readahead(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) + pgoff_t start, unsigned nr_pages) { struct inode *inode = mapping->host; - struct page *page = list_last_entry(pages, struct page, lru); - trace_f2fs_readpages(inode, page, nr_pages); + trace_f2fs_readpages(inode, start, nr_pages); if (!f2fs_is_compress_backend_ready(inode)) return 0; /* If the file has inline data, skip readpages */ if (f2fs_has_inline_data(inode)) - return 0; + return nr_pages; - return f2fs_mpage_readpages(mapping, pages, NULL, nr_pages, true); + f2fs_mpage_readpages(mapping, start, NULL, nr_pages, true); + return 0; } int f2fs_encrypt_one_page(struct f2fs_io_info *fio) @@ -3778,7 +3771,7 @@ static void f2fs_swap_deactivate(struct file *file) const struct address_space_operations f2fs_dblock_aops = { .readpage = f2fs_read_data_page, - .readpages = f2fs_read_data_pages, + .readahead = f2fs_readahead, .writepage = f2fs_write_data_page, .writepages = f2fs_write_data_pages, .write_begin = f2fs_write_begin, diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 5355be6b6755..db00907f90f1 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -3344,9 +3344,8 @@ int f2fs_reserve_new_block(struct dnode_of_data *dn); int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index); int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from); int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index); -int 
f2fs_mpage_readpages(struct address_space *mapping, - struct list_head *pages, struct page *page, - unsigned nr_pages, bool is_readahead); +int f2fs_mpage_readpages(struct address_space *mapping, pgoff_t start, + struct page *page, unsigned nr_pages, bool is_readahead); struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index, int op_flags, bool for_write); struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index); diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c index d7d430a6f130..71e92b9b3aa6 100644 --- a/fs/f2fs/verity.c +++ b/fs/f2fs/verity.c @@ -231,7 +231,6 @@ static int f2fs_get_verity_descriptor(struct inode *inode, void *buf, static void f2fs_merkle_tree_readahead(struct address_space *mapping, pgoff_t start_index, unsigned long count) { - LIST_HEAD(pages); unsigned int nr_pages = 0; struct page *page; pgoff_t index; @@ -240,16 +239,23 @@ static void f2fs_merkle_tree_readahead(struct address_space *mapping, for (index = start_index; index < start_index + count; index++) { page = xa_load(&mapping->i_pages, index); if (!page || xa_is_value(page)) { - page = __page_cache_alloc(readahead_gfp_mask(mapping)); + gfp_t gfp = readahead_gfp_mask(mapping); + page = __page_cache_alloc(gfp); if (!page) break; - page->index = index; - list_add(&page->lru, &pages); + if (add_to_page_cache_lru(page, mapping, index, gfp)) { + put_page(page); + break; + } nr_pages++; } } + + if (!nr_pages) + return; + blk_start_plug(&plug); - f2fs_mpage_readpages(mapping, &pages, NULL, nr_pages, true); + f2fs_mpage_readpages(mapping, start_index, NULL, nr_pages, true); blk_finish_plug(&plug); } diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h index 67a97838c2a0..d72da4a33883 100644 --- a/include/trace/events/f2fs.h +++ b/include/trace/events/f2fs.h @@ -1375,9 +1375,9 @@ TRACE_EVENT(f2fs_writepages, TRACE_EVENT(f2fs_readpages, - TP_PROTO(struct inode *inode, struct page *page, unsigned int nrpage), + TP_PROTO(struct inode *inode, pgoff_t start, unsigned int nrpage), - TP_ARGS(inode, page, nrpage), + TP_ARGS(inode, start, nrpage), TP_STRUCT__entry( __field(dev_t, dev) @@ -1389,7 +1389,7 @@ TRACE_EVENT(f2fs_readpages, TP_fast_assign( __entry->dev = inode->i_sb->s_dev; __entry->ino = inode->i_ino; - __entry->start = page->index; + __entry->start = start; __entry->nrpage = nrpage; ), From patchwork Sat Feb 1 15:12:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361089 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 927D313A4 for ; Sat, 1 Feb 2020 15:13:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5564B20740 for ; Sat, 1 Feb 2020 15:13:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="U/60ZvEX" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5564B20740 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 12F326B0601; Sat, 1 Feb 2020 10:12:47 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id E1E036B0600; Sat, 1 Feb 
2020 10:12:46 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C212C6B0604; Sat, 1 Feb 2020 10:12:46 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0039.hostedemail.com [216.40.44.39]) by kanga.kvack.org (Postfix) with ESMTP id ABDD16B0600 for ; Sat, 1 Feb 2020 10:12:46 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 6A49640EE for ; Sat, 1 Feb 2020 15:12:46 +0000 (UTC) X-FDA: 76441900332.18.war94_8c9069f8f893c X-Spam-Summary: 2,0,0,ed10d9ba876b6cda,d41d8cd98f00b204,willy@infradead.org,:linux-fsdevel@vger.kernel.org:willy@infradead.org::linux-kernel@vger.kernel.org,RULES_HIT:41:69:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3354:3865:3866:3867:3870:4321:4385:5007:6261:6653:7576:8660:10004:11026:11658:11914:12043:12296:12297:12438:12555:12683:12895:12986:13148:13230:13894:14096:14110:14181:14394:14721:21080:21451:21627:21990:30054,0,RBL:198.137.202.133:@infradead.org:.lbl8.mailshell.net-62.8.0.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:27,LUA_SUMMARY:none X-HE-Tag: war94_8c9069f8f893c X-Filterd-Recvd-Size: 5211 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf16.hostedemail.com (Postfix) with ESMTP for ; Sat, 1 Feb 2020 15:12:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=KDUOzl0MVOTcqaG0+tJFJVgrha2fmSoI/N94AdkJq5c=; b=U/60ZvEXY+pLVht5o2+SfdHqov 4paaOeykETdvtFfhbUtphGL37w3zZ2zybA8uAZnCJRKuo4HkDGZt/7CFUVbBp8uXHRP9HuWOKXz6I SfWuNMo31G1PyVvL3u7myJxpeuCklzZSYLlBXEtOnC1irHK+VgGZtsp2Enkf12+mpTi8X5qwD0Osn VEEpzQL57iBtmRTkCuMHFQYRrJLP0i6eTGjtM+8KawYfAKYFGf2J+p8ZWaVNE1yNmJq0xP944EzNC RZtyOJdk4edDTid0C8yvvJkywiCv98bJLwuUS8D881F9gQfRuA+qRbs4eIwTodWapOR8Elds15tld qG/KqF7w==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1ixuRu-0006I3-Dv; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 11/12] fuse: Convert from readpages to readahead Date: Sat, 1 Feb 2020 07:12:39 -0800 Message-Id: <20200201151240.24082-12-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200201151240.24082-1-willy@infradead.org> References: <20200201151240.24082-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Use the new readahead operation in fuse. Switching away from the read_cache_pages() helper gets rid of an implicit call to put_page(), so we can get rid of the get_page() call in fuse_readpages_fill(). 
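The driving loop that replaces read_cache_pages() then looks roughly like the sketch below; fill_one() and submit() stand in for fuse_readpages_fill() and fuse_send_readpages(), and the sketch drops the unused page's reference explicitly rather than reproducing fuse's exact error path:

#include <linux/pagemap.h>
#include <linux/mm.h>

/* Placeholder for fuse_readpages_fill(): 0 on success, -errno on failure. */
static int fill_one(void *data, struct page *page)
{
	return 0;
}

/* Placeholder for fuse_send_readpages(). */
static void submit(void *data)
{
}

static unsigned drive_readahead(struct address_space *mapping, pgoff_t start,
				unsigned nr_pages, void *data)
{
	while (nr_pages) {
		struct page *page = readahead_page(mapping, start++);

		if (fill_one(data, page) != 0) {
			put_page(page);	/* we will not use this page */
			break;
		}
		nr_pages--;
	}
	if (!nr_pages)
		submit(data);

	/* Anything left over is reported back to the caller. */
	return nr_pages;
}

Because readahead_page() appears to hand over a page that is already in the page cache with a reference held, the get_page() that the read_cache_pages() callback needed can go away, which is what the changelog above describes.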
Signed-off-by: Matthew Wilcox (Oracle) --- fs/fuse/file.c | 37 +++++++++++++++++-------------------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/fs/fuse/file.c b/fs/fuse/file.c index ce715380143c..5460ff1bf155 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -911,9 +911,8 @@ struct fuse_fill_data { unsigned int max_pages; }; -static int fuse_readpages_fill(void *_data, struct page *page) +static int fuse_readpages_fill(struct fuse_fill_data *data, struct page *page) { - struct fuse_fill_data *data = _data; struct fuse_io_args *ia = data->ia; struct fuse_args_pages *ap = &ia->ap; struct inode *inode = data->inode; @@ -929,10 +928,8 @@ static int fuse_readpages_fill(void *_data, struct page *page) fc->max_pages); fuse_send_readpages(ia, data->file); data->ia = ia = fuse_io_alloc(NULL, data->max_pages); - if (!ia) { - unlock_page(page); + if (!ia) return -ENOMEM; - } ap = &ia->ap; } @@ -942,7 +939,6 @@ static int fuse_readpages_fill(void *_data, struct page *page) return -EIO; } - get_page(page); ap->pages[ap->num_pages] = page; ap->descs[ap->num_pages].length = PAGE_SIZE; ap->num_pages++; @@ -950,15 +946,13 @@ static int fuse_readpages_fill(void *_data, struct page *page) return 0; } -static int fuse_readpages(struct file *file, struct address_space *mapping, - struct list_head *pages, unsigned nr_pages) +static unsigned fuse_readahead(struct file *file, struct address_space *mapping, + pgoff_t start, unsigned nr_pages) { struct inode *inode = mapping->host; struct fuse_conn *fc = get_fuse_conn(inode); struct fuse_fill_data data; - int err; - err = -EIO; if (is_bad_inode(inode)) goto out; @@ -966,21 +960,24 @@ static int fuse_readpages(struct file *file, struct address_space *mapping, data.inode = inode; data.nr_pages = nr_pages; data.max_pages = min_t(unsigned int, nr_pages, fc->max_pages); -; data.ia = fuse_io_alloc(NULL, data.max_pages); - err = -ENOMEM; if (!data.ia) goto out; - err = read_cache_pages(mapping, pages, fuse_readpages_fill, &data); - if (!err) { - if (data.ia->ap.num_pages) - fuse_send_readpages(data.ia, file); - else - fuse_io_free(data.ia); + while (nr_pages) { + struct page *page = readahead_page(mapping, start++); + + if (fuse_readpages_fill(&data, page) != 0) + goto out; + nr_pages--; } + + if (data.ia->ap.num_pages) + fuse_send_readpages(data.ia, file); + else + fuse_io_free(data.ia); out: - return err; + return nr_pages; } static ssize_t fuse_cache_read_iter(struct kiocb *iocb, struct iov_iter *to) @@ -3358,10 +3355,10 @@ static const struct file_operations fuse_file_operations = { static const struct address_space_operations fuse_file_aops = { .readpage = fuse_readpage, + .readahead = fuse_readahead, .writepage = fuse_writepage, .writepages = fuse_writepages, .launder_page = fuse_launder_page, - .readpages = fuse_readpages, .set_page_dirty = __set_page_dirty_nobuffers, .bmap = fuse_bmap, .direct_IO = fuse_direct_IO, From patchwork Sat Feb 1 15:12:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11361099 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4BA7B14B4 for ; Sat, 1 Feb 2020 15:13:09 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0C3892075B for ; Sat, 1 Feb 2020 15:13:09 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail 
reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="rHk274sC" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0C3892075B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4D3F76B0604; Sat, 1 Feb 2020 10:12:48 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 370546B0605; Sat, 1 Feb 2020 10:12:48 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2AD276B0606; Sat, 1 Feb 2020 10:12:48 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0245.hostedemail.com [216.40.44.245]) by kanga.kvack.org (Postfix) with ESMTP id 1564A6B0604 for ; Sat, 1 Feb 2020 10:12:48 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id CD8A42DFC for ; Sat, 1 Feb 2020 15:12:47 +0000 (UTC) X-FDA: 76441900374.18.music78_8cc616c301932 X-Spam-Summary: 2,0,0,af248690e83d5d14,d41d8cd98f00b204,willy@infradead.org,:linux-fsdevel@vger.kernel.org:willy@infradead.org::linux-kernel@vger.kernel.org:linux-xfs@vger.kernel.org,RULES_HIT:2:41:69:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1605:1730:1747:1777:1792:2198:2199:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:3868:3870:3871:3872:3874:4049:4119:4250:4321:4605:5007:6119:6261:6653:7576:7875:7903:8660:9592:10004:11026:11232:11473:11658:11914:12043:12296:12297:12438:12555:12683:12895:12986:13148:13230:13894:14096:14110:14394:21080:21451:21627:21809:21990:30029:30054:30070,0,RBL:error,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: music78_8cc616c301932 X-Filterd-Recvd-Size: 8096 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf18.hostedemail.com (Postfix) with ESMTP for ; Sat, 1 Feb 2020 15:12:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=iaqVzVhuC/rAMG5GyV/PP5zLsmDT1l1Fxt2CgSJICuU=; b=rHk274sCjxSmooM+bC5/z2rYuW d28bBu1Lx44GM1KoLhHc2MU2NB33qekjANJdolFCFoZGfb5AN1/W5Q8bVHTymECIx8eUjzs33KSo7 F0xpSrM62jzsWB/d4zevDjR1eOotwNp9D80VFTfa+wad6raqxU+IWBgKJ4/btdVf0A9j/RGDTnMBs IqDnV1PnISiUbHaoi3oiniO7jaDroyGKsj8+WsFfo/ry2fBHjnJfY+4Agg3VS/XcxwOV1eZemXa/o wZxpdJpsbKJLcJieYQNUeuSvWM1Y2fZj/4XQEdbCBZfVou/clUOvC+FPvH/SfrrcVg9wTXXc+a77Q FsHDyrNA==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1ixuRu-0006IA-Ft; Sat, 01 Feb 2020 15:12:42 +0000 From: Matthew Wilcox To: linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org Subject: [PATCH v4 12/12] iomap: Convert from readpages to readahead Date: Sat, 1 Feb 2020 07:12:40 -0800 
Message-Id: <20200201151240.24082-13-willy@infradead.org>
In-Reply-To: <20200201151240.24082-1-willy@infradead.org>
References: <20200201151240.24082-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Use the new readahead operation in XFS and iomap.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: linux-xfs@vger.kernel.org
---
 fs/iomap/buffered-io.c | 72 +++++++++---------------------------
 fs/iomap/trace.h | 2 +-
 fs/xfs/xfs_aops.c | 10 +++---
 include/linux/iomap.h | 2 +-
 4 files changed, 22 insertions(+), 64 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index cb3511eb152a..490b66ea3298 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -216,7 +216,6 @@ struct iomap_readpage_ctx {
 	bool			cur_page_in_bio;
 	bool			is_readahead;
 	struct bio		*bio;
-	struct list_head	*pages;
 };
 
 static void
@@ -367,36 +366,8 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 }
 EXPORT_SYMBOL_GPL(iomap_readpage);
 
-static struct page *
-iomap_next_page(struct inode *inode, struct list_head *pages, loff_t pos,
-		loff_t length, loff_t *done)
-{
-	while (!list_empty(pages)) {
-		struct page *page = lru_to_page(pages);
-
-		if (page_offset(page) >= (u64)pos + length)
-			break;
-
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, inode->i_mapping, page->index,
-				GFP_NOFS))
-			return page;
-
-		/*
-		 * If we already have a page in the page cache at index we are
-		 * done. Upper layers don't care if it is uptodate after the
-		 * readpages call itself as every page gets checked again once
-		 * actually needed.
-		 */
-		*done += PAGE_SIZE;
-		put_page(page);
-	}
-
-	return NULL;
-}
-
 static loff_t
-iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
+iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 		void *data, struct iomap *iomap, struct iomap *srcmap)
 {
 	struct iomap_readpage_ctx *ctx = data;
@@ -410,10 +381,8 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
 			ctx->cur_page = NULL;
 		}
 		if (!ctx->cur_page) {
-			ctx->cur_page = iomap_next_page(inode, ctx->pages,
-					pos, length, &done);
-			if (!ctx->cur_page)
-				break;
+			ctx->cur_page = readahead_page(inode->i_mapping,
+					pos / PAGE_SIZE);
 			ctx->cur_page_in_bio = false;
 		}
 		ret = iomap_readpage_actor(inode, pos + done, length - done,
@@ -423,48 +392,37 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
 	return done;
 }
 
-int
-iomap_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned
+iomap_readahead(struct address_space *mapping, pgoff_t start,
 		unsigned nr_pages, const struct iomap_ops *ops)
 {
 	struct iomap_readpage_ctx ctx = {
-		.pages		= pages,
 		.is_readahead	= true,
 	};
-	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
-	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
-	loff_t length = last - pos + PAGE_SIZE, ret = 0;
+	loff_t pos = start * PAGE_SIZE;
+	loff_t length = nr_pages * PAGE_SIZE;
 
-	trace_iomap_readpages(mapping->host, nr_pages);
+	trace_iomap_readahead(mapping->host, nr_pages);
 
 	while (length > 0) {
-		ret = iomap_apply(mapping->host, pos, length, 0, ops,
-				&ctx, iomap_readpages_actor);
+		loff_t ret = iomap_apply(mapping->host, pos, length, 0, ops,
+				&ctx, iomap_readahead_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
-			goto done;
+			break;
 		}
 		pos += ret;
 		length -= ret;
 	}
-	ret = 0;
-done:
+
 	if (ctx.bio)
 		submit_bio(ctx.bio);
-	if (ctx.cur_page) {
-		if (!ctx.cur_page_in_bio)
-			unlock_page(ctx.cur_page);
+	if (ctx.cur_page && ctx.cur_page_in_bio)
 		put_page(ctx.cur_page);
-	}
-
-	/*
-	 * Check that we didn't lose a page due to the arcance calling
-	 * conventions..
-	 */
-	WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
-	return ret;
+	return length / PAGE_SIZE;
 }
-EXPORT_SYMBOL_GPL(iomap_readpages);
+EXPORT_SYMBOL_GPL(iomap_readahead);
 
 /*
  * iomap_is_partially_uptodate checks whether blocks within a page are
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index 6dc227b8c47e..d6ba705f938a 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -39,7 +39,7 @@ DEFINE_EVENT(iomap_readpage_class, name,	\
 	TP_PROTO(struct inode *inode, int nr_pages), \
 	TP_ARGS(inode, nr_pages))
 DEFINE_READPAGE_EVENT(iomap_readpage);
-DEFINE_READPAGE_EVENT(iomap_readpages);
+DEFINE_READPAGE_EVENT(iomap_readahead);
 
 DECLARE_EVENT_CLASS(iomap_page_class,
 	TP_PROTO(struct inode *inode, struct page *page, unsigned long off,
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 3a688eb5c5ae..4d9da34e759b 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -621,14 +621,14 @@ xfs_vm_readpage(
 	return iomap_readpage(page, &xfs_read_iomap_ops);
 }
 
-STATIC int
-xfs_vm_readpages(
+STATIC unsigned
+xfs_vm_readahead(
 	struct file		*unused,
 	struct address_space	*mapping,
-	struct list_head	*pages,
+	pgoff_t			start,
 	unsigned		nr_pages)
 {
-	return iomap_readpages(mapping, pages, nr_pages, &xfs_read_iomap_ops);
+	return iomap_readahead(mapping, start, nr_pages, &xfs_read_iomap_ops);
 }
 
 static int
@@ -644,7 +644,7 @@ xfs_iomap_swapfile_activate(
 const struct address_space_operations xfs_address_space_operations = {
 	.readpage		= xfs_vm_readpage,
-	.readpages		= xfs_vm_readpages,
+	.readahead		= xfs_vm_readahead,
 	.writepage		= xfs_vm_writepage,
 	.writepages		= xfs_vm_writepages,
 	.set_page_dirty		= iomap_set_page_dirty,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 8b09463dae0d..81c6067e9b61 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -155,7 +155,7 @@ loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 		const struct iomap_ops *ops);
 int iomap_readpage(struct page *page, const struct iomap_ops *ops);
-int iomap_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned iomap_readahead(struct address_space *, pgoff_t start,
 		unsigned nr_pages, const struct iomap_ops *ops);
 int iomap_set_page_dirty(struct page *page);
 int iomap_is_partially_uptodate(struct page *page, unsigned long from,
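
For a filesystem that has not yet been converted, a ->readahead implementation
under the interface used in this series has roughly the shape below. This is a
minimal sketch only: "myfs" and the myfs_read_one_page() helper are
hypothetical placeholders, not part of this series, and a real implementation
would batch pages into larger I/Os the way the fuse and iomap conversions
above do.

static unsigned myfs_readahead(struct file *file,
		struct address_space *mapping, pgoff_t start,
		unsigned nr_pages)
{
	while (nr_pages) {
		/* The core has already added this page to the page cache. */
		struct page *page = readahead_page(mapping, start++);

		if (myfs_read_one_page(page))	/* hypothetical helper */
			break;
		nr_pages--;
	}

	/* Report back the number of pages we did not read. */
	return nr_pages;
}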