From patchwork Mon Jan 13 15:37:39 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330473
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 1/8] pagevec: Add an iterator
Date: Mon, 13 Jan 2020 07:37:39 -0800
Message-Id: <20200113153746.26654-2-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

There's plenty of space in the pagevec for a loop counter, and that
will come in handy later.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagevec.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 081d934eda64..9b8c43661ab3 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -19,6 +19,7 @@ struct address_space;
 
 struct pagevec {
 	unsigned char nr;
+	unsigned char first;
 	bool percpu_pvec_drained;
 	struct page *pages[PAGEVEC_SIZE];
 };
@@ -55,12 +56,14 @@ static inline unsigned pagevec_lookup_tag(struct pagevec *pvec,
 static inline void pagevec_init(struct pagevec *pvec)
 {
 	pvec->nr = 0;
+	pvec->first = 0;
 	pvec->percpu_pvec_drained = false;
 }
 
 static inline void pagevec_reinit(struct pagevec *pvec)
 {
 	pvec->nr = 0;
+	pvec->first = 0;
 }
 
 static inline unsigned pagevec_count(struct pagevec *pvec)
@@ -88,4 +91,21 @@ static inline void pagevec_release(struct pagevec *pvec)
 		__pagevec_release(pvec);
 }
 
+static inline struct page *pagevec_last(struct pagevec *pvec)
+{
+	if (pvec->nr == 0)
+		return NULL;
+	return pvec->pages[pvec->nr - 1];
+}
+
+static inline struct page *pagevec_next(struct pagevec *pvec)
+{
+	if (pvec->first >= pvec->nr)
+		return NULL;
+	return pvec->pages[pvec->first++];
+}
+
+#define pagevec_for_each(pvec, page) \
+	while ((page = pagevec_next(pvec)))
+
 #endif /* _LINUX_PAGEVEC_H */
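For illustration, a caller might drain a pagevec with the new iterator
as in the sketch below (a hypothetical example, not part of this patch).
Note that pagevec_next() consumes the pagevec: it advances pvec->first,
so after a full walk first == nr and the pagevec must go through
pagevec_reinit() before it can be reused.

	struct pagevec pvec;
	struct page *page;

	pagevec_init(&pvec);
	/* ... fill with pagevec_add(&pvec, page) ... */

	/* Visits each page exactly once, in insertion order. */
	pagevec_for_each(&pvec, page)
		put_page(page);	/* drop our reference to each page */
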
From patchwork Mon Jan 13 15:37:40 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330477
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 2/8] mm: Fix the return type of __do_page_cache_readahead
Date: Mon, 13 Jan 2020 07:37:40 -0800
Message-Id: <20200113153746.26654-3-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

ra_submit(), which is a wrapper around __do_page_cache_readahead(),
already returns an unsigned long, and the 'nr_to_read' parameter is an
unsigned long, so fix __do_page_cache_readahead() to return an unsigned
long, even though I'm pretty sure we're not going to readahead more
than 2^32 pages ever.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/internal.h  | 2 +-
 mm/readahead.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..41b93c4b3ab7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-extern unsigned int __do_page_cache_readahead(struct address_space *mapping,
+extern unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size);
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 2fe72cd29b47..6bf73ef33b7e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -152,7 +152,7 @@ static int read_pages(struct address_space *mapping, struct file *filp,
  *
  * Returns the number of pages requested, or the maximum amount of I/O allowed.
  */
-unsigned int __do_page_cache_readahead(struct address_space *mapping,
+unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
@@ -161,7 +161,7 @@ unsigned int __do_page_cache_readahead(struct address_space *mapping,
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
-	unsigned int nr_pages = 0;
+	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
From patchwork Mon Jan 13 15:37:41 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330475
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 3/8] mm: Use a pagevec for readahead
Date: Mon, 13 Jan 2020 07:37:41 -0800
Message-Id: <20200113153746.26654-4-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Instead of using a linked list, use a small array.  This does mean we
will allocate and then submit for I/O no more than 15 pages at a time
(60kB), but we have the block queue plugged so the bios can be combined
afterwards.  We generally don't readahead more than 256kB anyway, so
this is not a huge reduction in efficiency, and we'll make up for it
with later patches.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/readahead.c | 97 +++++++++++++++++++++++++++-----------------------
 1 file changed, 52 insertions(+), 45 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 6bf73ef33b7e..76a70a4406b5 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -113,35 +113,37 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 
 EXPORT_SYMBOL(read_cache_pages);
 
-static int read_pages(struct address_space *mapping, struct file *filp,
-		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
+/*
+ * We ignore I/O errors - they will be handled by the actual consumer of
+ * the data that we attempted to prefetch.
+ */
+static unsigned read_pages(struct address_space *mapping, struct file *filp,
+		struct pagevec *pvec, pgoff_t offset, gfp_t gfp)
 {
-	struct blk_plug plug;
-	unsigned page_idx;
-	int ret;
-
-	blk_start_plug(&plug);
+	struct page *page;
+	unsigned int nr_pages = pagevec_count(pvec);
 
 	if (mapping->a_ops->readpages) {
-		ret = mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
-		/* Clean up the remaining pages */
-		put_pages_list(pages);
-		goto out;
-	}
+		LIST_HEAD(pages);
 
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = lru_to_page(pages);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
-			mapping->a_ops->readpage(filp, page);
-		put_page(page);
+		pagevec_for_each(pvec, page) {
+			page->index = offset++;
+			list_add(&page->lru, &pages);
+		}
+		mapping->a_ops->readpages(filp, mapping, &pages, nr_pages);
+		/* Clean up the remaining pages */
+		put_pages_list(&pages);
+	} else {
+		pagevec_for_each(pvec, page) {
+			if (!add_to_page_cache_lru(page, mapping, offset++,
+						gfp))
+				mapping->a_ops->readpage(filp, page);
+			put_page(page);
+		}
 	}
-	ret = 0;
-out:
-	blk_finish_plug(&plug);
-
-	return ret;
+	pagevec_reinit(pvec);
+	return nr_pages;
 }
 
 /*
@@ -159,59 +161,64 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	struct page *page;
 	unsigned long end_index;	/* The last page we want to read */
-	LIST_HEAD(page_pool);
+	struct pagevec pages;
 	int page_idx;
+	pgoff_t page_offset = offset;
 	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	struct blk_plug plug;
+
+	blk_start_plug(&plug);
 
 	if (isize == 0)
 		goto out;
 
 	end_index = ((isize - 1) >> PAGE_SHIFT);
+	pagevec_init(&pages);
 
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
 	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
-		pgoff_t page_offset = offset + page_idx;
+		page_offset++;
 
 		if (page_offset > end_index)
 			break;
 
 		page = xa_load(&mapping->i_pages, page_offset);
+
+		/*
+		 * Page already present?  Kick off the current batch of
+		 * contiguous pages before continuing with the next batch.
+		 */
 		if (page && !xa_is_value(page)) {
-			/*
-			 * Page already present?  Kick off the current batch of
-			 * contiguous pages before continuing with the next
-			 * batch.
-			 */
-			if (nr_pages)
-				read_pages(mapping, filp, &page_pool, nr_pages,
-						gfp_mask);
-			nr_pages = 0;
+			unsigned int count = pagevec_count(&pages);
+
+			if (count)
+				nr_pages += read_pages(mapping, filp, &pages,
+						offset, gfp_mask);
+			offset = page_offset + 1;
 			continue;
 		}
 
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
			break;
-		page->index = page_offset;
-		list_add(&page->lru, &page_pool);
+		if (pagevec_add(&pages, page) == 0) {
+			nr_pages += read_pages(mapping, filp, &pages,
+					offset, gfp_mask);
+			offset = page_offset + 1;
+		}
 		if (page_idx == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
-		nr_pages++;
 	}
 
-	/*
-	 * Now start the IO.  We ignore I/O errors - if the page is not
-	 * uptodate then the caller will launch readpage again, and
-	 * will then handle the error.
-	 */
-	if (nr_pages)
-		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);
-	BUG_ON(!list_empty(&page_pool));
+	if (pagevec_count(&pages))
+		nr_pages += read_pages(mapping, filp, &pages, offset, gfp_mask);
 out:
+	blk_finish_plug(&plug);
+
 	return nr_pages;
 }
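To make the batch size concrete: PAGEVEC_SIZE is 15, so with 4kB pages
each read_pages() call now covers at most 15 * 4kB = 60kB, and a typical
256kB (64-page) readahead window is submitted as five batches of
15 + 15 + 15 + 15 + 4 pages.  Because blk_start_plug() is now held
across the whole loop in __do_page_cache_readahead(), the bios from
those batches can still be merged before the plug is released.
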
From patchwork Mon Jan 13 15:37:42 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330471
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 4/8] mm/fs: Add a_ops->readahead
Date: Mon, 13 Jan 2020 07:37:42 -0800
Message-Id: <20200113153746.26654-5-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

This will replace ->readpages with a saner interface:
 - No return type (errors are ignored for read ahead anyway)
 - Pages are already in the page cache when ->readahead is called
 - Pages are passed in a pagevec instead of a linked list

Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  8 +++++-
 Documentation/filesystems/vfs.rst     |  9 ++++++
 include/linux/fs.h                    |  3 ++
 mm/readahead.c                        | 40 ++++++++++++++++++++++++++-
 4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 5057e4d9dcd1..1e2f1186fd1a 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -239,6 +239,8 @@ prototypes::
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	void (*readahead)(struct file *, struct address_space *,
+			struct pagevec *, pgoff_t index);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -271,7 +273,8 @@ writepage:		yes, unlocks (see below)
 readpage:		yes, unlocks
 writepages:
 set_page_dirty		no
-readpages:
+readpages:		no
+readahead:		yes, unlocks
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
@@ -298,6 +301,9 @@ completion.
 ->readpages() populates the pagecache with the passed pages and starts
 I/O against them.  They come unlocked upon I/O completion.
 
+->readahead() starts I/O against the pages.  They come unlocked upon
+I/O completion.
+
 ->writepage() is used for two purposes: for "memory cleansing" and for
 "sync".  These are quite different operations and the behaviour may
 differ depending upon the mode.
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index 7d4d09dd5e6d..63d0f0dbbf9c 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -706,6 +706,8 @@ cache in your filesystem.  The following members are defined:
 		int (*readpage)(struct file *, struct page *);
 		int (*writepages)(struct address_space *, struct writeback_control *);
 		int (*set_page_dirty)(struct page *page);
+		void (*readahead)(struct file *, struct address_space *,
+				struct pagevec *, pgoff_t index);
 		int (*readpages)(struct file *filp, struct address_space *mapping,
 				struct list_head *pages, unsigned nr_pages);
 		int (*write_begin)(struct file *, struct address_space *mapping,
@@ -781,6 +783,13 @@ cache in your filesystem.  The following members are defined:
 	If defined, it should set the PageDirty flag, and the
 	PAGECACHE_TAG_DIRTY tag in the radix tree.
 
+``readahead``
+	called by the VM to read pages associated with the address_space
+	object.  This is essentially a vector version of readpage.
+	Instead of just one page, several pages are requested.
+	Since this is readahead, attempt to start I/O on each page and
+	let the I/O completion path set errors on the page.
+
 ``readpages``
 	called by the VM to read pages associated with the address_space
 	object.  This is essentially just a vector version of readpage.
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 98e0349adb52..2769f89666fb 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -52,6 +52,7 @@ struct hd_geometry;
 struct iovec;
 struct kiocb;
 struct kobject;
+struct pagevec;
 struct pipe_inode_info;
 struct poll_table_struct;
 struct kstatfs;
@@ -375,6 +376,8 @@ struct address_space_operations {
 	 */
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
+	void (*readahead)(struct file *, struct address_space *,
+			struct pagevec *, pgoff_t offset);
 
 	int (*write_begin)(struct file *, struct address_space *mapping,
 			loff_t pos, unsigned len, unsigned flags,
diff --git a/mm/readahead.c b/mm/readahead.c
index 76a70a4406b5..2fe0974173ea 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -123,7 +123,45 @@ static unsigned read_pages(struct address_space *mapping, struct file *filp,
 	struct page *page;
 	unsigned int nr_pages = pagevec_count(pvec);
 
-	if (mapping->a_ops->readpages) {
+	if (mapping->a_ops->readahead) {
+		/*
+		 * When we remove support for ->readpages, we'll call
+		 * add_to_page_cache_lru() in the parent and all this
+		 * grot goes away.
+		 */
+		unsigned char first = pvec->first;
+		unsigned char saved_nr = pvec->nr;
+		pgoff_t base = offset;
+
+		pagevec_for_each(pvec, page) {
+			if (add_to_page_cache_lru(page, mapping, offset++,
+						gfp)) {
+				unsigned char saved_first = pvec->first;
+
+				pvec->nr = pvec->first - 1;
+				pvec->first = first;
+				mapping->a_ops->readahead(filp, mapping, pvec,
+						base + first);
+				first = pvec->nr + 1;
+				pvec->nr = saved_nr;
+				pvec->first = saved_first;
+
+				put_page(page);
+			}
+		}
+		pvec->first = first;
+		offset = base + first;
+		mapping->a_ops->readahead(filp, mapping, pvec, offset);
+		/*
+		 * Ideally the implementation would at least attempt to
+		 * start I/O against all the pages, but there are times
+		 * when it makes more sense to just give up.  Take care
+		 * of any un-attempted pages here.
+		 */
+		pagevec_for_each(pvec, page) {
+			unlock_page(page);
+			put_page(page);
+		}
+	} else if (mapping->a_ops->readpages) {
 		LIST_HEAD(pages);
 
 		pagevec_for_each(pvec, page) {
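To illustrate the contract, a minimal implementation of the new hook
might look like the sketch below (myfs_readahead and myfs_start_read
are hypothetical placeholders, not part of this series).  The pages
arrive locked and already in the page cache, and the implementation
inherits the pagevec's reference on each page it consumes:

	static void myfs_readahead(struct file *file,
			struct address_space *mapping,
			struct pagevec *pages, pgoff_t index)
	{
		struct page *page;

		pagevec_for_each(pages, page) {
			/* Start async I/O; the completion path unlocks
			 * the page and sets PageUptodate or PageError. */
			myfs_start_read(file, page, index++);
			/* The page cache holds its own reference, so we
			 * can drop the one taken from the pagevec. */
			put_page(page);
		}
	}

Any pages still left in the pagevec when ->readahead returns are
treated as un-attempted and are unlocked and released by read_pages().
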
From patchwork Mon Jan 13 15:37:43 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330467
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 5/8] iomap,xfs: Convert from readpages to readahead
Date: Mon, 13 Jan 2020 07:37:43 -0800
Message-Id: <20200113153746.26654-6-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Use the new readahead operation in XFS and iomap.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 60 +++++++++---------------------------
 fs/iomap/trace.h       | 18 ++++++------
 fs/xfs/xfs_aops.c      | 12 ++++-----
 include/linux/iomap.h  |  4 +--
 4 files changed, 29 insertions(+), 65 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 828444e14d09..818fa5bbd643 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -8,6 +8,7 @@
 #include <linux/fs.h>
 #include <linux/iomap.h>
 #include <linux/pagemap.h>
+#include <linux/pagevec.h>
 #include <linux/uio.h>
 #include <linux/buffer_head.h>
 #include <linux/dax.h>
@@ -216,7 +217,7 @@ struct iomap_readpage_ctx {
 	bool			cur_page_in_bio;
 	bool			is_readahead;
 	struct bio		*bio;
-	struct list_head	*pages;
+	struct pagevec		*pages;
 };
 
 static void
@@ -337,7 +338,7 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 	unsigned poff;
 	loff_t ret;
 
-	trace_iomap_readpage(page->mapping->host, 1);
+	trace_iomap_readpage(page->mapping->host, (loff_t)PAGE_SIZE);
 
 	for (poff = 0; poff < PAGE_SIZE; poff += ret) {
 		ret = iomap_apply(inode, page_offset(page) + poff,
@@ -367,36 +368,8 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 }
 EXPORT_SYMBOL_GPL(iomap_readpage);
 
-static struct page *
-iomap_next_page(struct inode *inode, struct list_head *pages, loff_t pos,
-		loff_t length, loff_t *done)
-{
-	while (!list_empty(pages)) {
-		struct page *page = lru_to_page(pages);
-
-		if (page_offset(page) >= (u64)pos + length)
-			break;
-
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, inode->i_mapping, page->index,
-				GFP_NOFS))
-			return page;
-
-		/*
-		 * If we already have a page in the page cache at index we are
-		 * done.  Upper layers don't care if it is uptodate after the
-		 * readpages call itself as every page gets checked again once
-		 * actually needed.
-		 */
-		*done += PAGE_SIZE;
-		put_page(page);
-	}
-
-	return NULL;
-}
-
 static loff_t
-iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
+iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 		void *data, struct iomap *iomap, struct iomap *srcmap)
 {
 	struct iomap_readpage_ctx *ctx = data;
@@ -410,8 +383,7 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 			ctx->cur_page = NULL;
 		}
 		if (!ctx->cur_page) {
-			ctx->cur_page = iomap_next_page(inode, ctx->pages,
-					pos, length, &done);
+			ctx->cur_page = pagevec_next(ctx->pages);
 			if (!ctx->cur_page)
 				break;
 			ctx->cur_page_in_bio = false;
@@ -423,23 +395,22 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 	return done;
 }
 
-int
-iomap_readpages(struct address_space *mapping, struct list_head *pages,
-		unsigned nr_pages, const struct iomap_ops *ops)
+void iomap_readahead(struct address_space *mapping, struct pagevec *pages,
+		pgoff_t index, const struct iomap_ops *ops)
 {
 	struct iomap_readpage_ctx ctx = {
 		.pages		= pages,
 		.is_readahead	= true,
 	};
-	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
-	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
+	loff_t pos = (loff_t)index << PAGE_SHIFT;
+	loff_t last = page_offset(pagevec_last(pages));
 	loff_t length = last - pos + PAGE_SIZE, ret = 0;
 
-	trace_iomap_readpages(mapping->host, nr_pages);
+	trace_iomap_readahead(mapping->host, length);
 
 	while (length > 0) {
 		ret = iomap_apply(mapping->host, pos, length, 0, ops,
-				&ctx, iomap_readpages_actor);
+				&ctx, iomap_readahead_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
 			goto done;
@@ -456,15 +427,8 @@ iomap_readahead(struct address_space *mapping, struct pagevec *pages,
 		unlock_page(ctx.cur_page);
 		put_page(ctx.cur_page);
 	}
-
-	/*
-	 * Check that we didn't lose a page due to the arcane calling
-	 * conventions..
-	 */
-	WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
-	return ret;
 }
-EXPORT_SYMBOL_GPL(iomap_readpages);
+EXPORT_SYMBOL_GPL(iomap_readahead);
 
 /*
  * iomap_is_partially_uptodate checks whether blocks within a page are
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index 6dc227b8c47e..adbfd9fd4275 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -16,30 +16,30 @@
 struct inode;
 
 DECLARE_EVENT_CLASS(iomap_readpage_class,
-	TP_PROTO(struct inode *inode, int nr_pages),
-	TP_ARGS(inode, nr_pages),
+	TP_PROTO(struct inode *inode, loff_t length),
+	TP_ARGS(inode, length),
 	TP_STRUCT__entry(
 		__field(dev_t, dev)
 		__field(u64, ino)
-		__field(int, nr_pages)
+		__field(loff_t, length)
 	),
 	TP_fast_assign(
 		__entry->dev = inode->i_sb->s_dev;
 		__entry->ino = inode->i_ino;
-		__entry->nr_pages = nr_pages;
+		__entry->length = length;
 	),
-	TP_printk("dev %d:%d ino 0x%llx nr_pages %d",
+	TP_printk("dev %d:%d ino 0x%llx length %lld",
 		  MAJOR(__entry->dev), MINOR(__entry->dev),
 		  __entry->ino,
-		  __entry->nr_pages)
+		  __entry->length)
 )
 
 #define DEFINE_READPAGE_EVENT(name)		\
 DEFINE_EVENT(iomap_readpage_class, name,	\
-	TP_PROTO(struct inode *inode, int nr_pages), \
-	TP_ARGS(inode, nr_pages))
+	TP_PROTO(struct inode *inode, loff_t length), \
+	TP_ARGS(inode, length))
 DEFINE_READPAGE_EVENT(iomap_readpage);
-DEFINE_READPAGE_EVENT(iomap_readpages);
+DEFINE_READPAGE_EVENT(iomap_readahead);
 
 DECLARE_EVENT_CLASS(iomap_page_class,
 	TP_PROTO(struct inode *inode, struct page *page, unsigned long off,
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 3a688eb5c5ae..e3db35bcfa34 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -621,14 +621,14 @@ xfs_vm_readpage(
 	return iomap_readpage(page, &xfs_read_iomap_ops);
 }
 
-STATIC int
-xfs_vm_readpages(
+STATIC void
+xfs_vm_readahead(
 	struct file		*unused,
 	struct address_space	*mapping,
-	struct list_head	*pages,
-	unsigned		nr_pages)
+	struct pagevec		*pages,
+	pgoff_t			index)
 {
-	return iomap_readpages(mapping, pages, nr_pages, &xfs_read_iomap_ops);
+	iomap_readahead(mapping, pages, index, &xfs_read_iomap_ops);
 }
 
 static int
@@ -644,7 +644,7 @@ xfs_iomap_swapfile_activate(
 
 const struct address_space_operations xfs_address_space_operations = {
 	.readpage		= xfs_vm_readpage,
-	.readpages		= xfs_vm_readpages,
+	.readahead		= xfs_vm_readahead,
 	.writepage		= xfs_vm_writepage,
 	.writepages		= xfs_vm_writepages,
 	.set_page_dirty		= iomap_set_page_dirty,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 8b09463dae0d..1af1ec0920d8 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -155,8 +155,8 @@ loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 		const struct iomap_ops *ops);
 int iomap_readpage(struct page *page, const struct iomap_ops *ops);
-int iomap_readpages(struct address_space *mapping, struct list_head *pages,
-		unsigned nr_pages, const struct iomap_ops *ops);
+void iomap_readahead(struct address_space *mapping, struct pagevec *pages,
+		pgoff_t index, const struct iomap_ops *ops);
 int iomap_set_page_dirty(struct page *page);
 int iomap_is_partially_uptodate(struct page *page, unsigned long from,
 		unsigned long count);
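The trace events now log a byte length instead of a page count, which
is a mechanical conversion.  For example, with 4kB pages, a readahead
of eight pages starting at index 16 enters iomap_readahead() with
pos = 16 << PAGE_SHIFT = 65536 and, via page_offset(pagevec_last(pages)),
length = 8 * 4096 = 32768; the old iomap_readpages event would simply
have logged nr_pages = 8.
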
From patchwork Mon Jan 13 15:37:44 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330465
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 6/8] cifs: Convert from readpages to readahead
Date: Mon, 13 Jan 2020 07:37:44 -0800
Message-Id: <20200113153746.26654-7-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Use the new readahead operation in CIFS.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/cifs/file.c | 125 ++++++++-----------------------------------------
 1 file changed, 19 insertions(+), 106 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 043288b5c728..816670b501d8 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4280,70 +4280,10 @@ cifs_readpages_copy_into_pages(struct TCP_Server_Info *server,
 	return readpages_fill_pages(server, rdata, iter, iter->count);
 }
 
-static int
-readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
-		    unsigned int rsize, struct list_head *tmplist,
-		    unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)
-{
-	struct page *page, *tpage;
-	unsigned int expected_index;
-	int rc;
-	gfp_t gfp = readahead_gfp_mask(mapping);
-
-	INIT_LIST_HEAD(tmplist);
-
-	page = lru_to_page(page_list);
-
-	/*
-	 * Lock the page and put it in the cache. Since no one else
-	 * should have access to this page, we're safe to simply set
-	 * PG_locked without checking it first.
-	 */
-	__SetPageLocked(page);
-	rc = add_to_page_cache_locked(page, mapping,
-				      page->index, gfp);
-
-	/* give up if we can't stick it in the cache */
-	if (rc) {
-		__ClearPageLocked(page);
-		return rc;
-	}
-
-	/* move first page to the tmplist */
-	*offset = (loff_t)page->index << PAGE_SHIFT;
-	*bytes = PAGE_SIZE;
-	*nr_pages = 1;
-	list_move_tail(&page->lru, tmplist);
-
-	/* now try and add more pages onto the request */
-	expected_index = page->index + 1;
-	list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
-		/* discontinuity ? */
-		if (page->index != expected_index)
-			break;
-
-		/* would this page push the read over the rsize? */
-		if (*bytes + PAGE_SIZE > rsize)
-			break;
-
-		__SetPageLocked(page);
-		if (add_to_page_cache_locked(page, mapping, page->index, gfp)) {
-			__ClearPageLocked(page);
-			break;
-		}
-		list_move_tail(&page->lru, tmplist);
-		(*bytes) += PAGE_SIZE;
-		expected_index++;
-		(*nr_pages)++;
-	}
-	return rc;
-}
-
-static int cifs_readpages(struct file *file, struct address_space *mapping,
-	struct list_head *page_list, unsigned num_pages)
+static void cifs_readahead(struct file *file, struct address_space *mapping,
+		struct pagevec *pages, pgoff_t index)
 {
 	int rc;
-	struct list_head tmplist;
 	struct cifsFileInfo *open_file = file->private_data;
 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
 	struct TCP_Server_Info *server;
@@ -4358,11 +4298,10 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 	 * After this point, every page in the list might have PG_fscache set,
 	 * so we will need to clean that up off of every page we don't use.
 	 */
-	rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
-					 &num_pages);
+	rc = -ENOBUFS;
 	if (rc == 0) {
 		free_xid(xid);
-		return rc;
+		return;
 	}
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
@@ -4373,24 +4312,12 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 	rc = 0;
 	server = tlink_tcon(open_file->tlink)->ses->server;
 
-	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
-		 __func__, file, mapping, num_pages);
+	cifs_dbg(FYI, "%s: file=%p mapping=%p index=%lu\n",
+		 __func__, file, mapping, index);
 
-	/*
-	 * Start with the page at end of list and move it to private
-	 * list. Do the same with any following pages until we hit
-	 * the rsize limit, hit an index discontinuity, or run out of
-	 * pages. Issue the async read and then start the loop again
-	 * until the list is empty.
-	 *
-	 * Note that list order is important. The page_list is in
-	 * the order of declining indexes. When we put the pages in
-	 * the rdata->pages, then we want them in increasing order.
-	 */
-	while (!list_empty(page_list)) {
-		unsigned int i, nr_pages, bytes, rsize;
-		loff_t offset;
-		struct page *page, *tpage;
+	while (pages->first < pages->nr) {
+		unsigned int i, nr_pages, rsize;
+		struct page *page;
 		struct cifs_readdata *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
@@ -4408,6 +4335,7 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 		if (rc)
 			break;
 
+		nr_pages = rsize / PAGE_SIZE;
 		/*
 		 * Give up immediately if rsize is too small to read an entire
 		 * page. The VFS will fall back to readpage. We should never
@@ -4415,36 +4343,23 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 		 * rsize is smaller than a cache page.
 		 */
 		if (unlikely(rsize < PAGE_SIZE)) {
-			add_credits_and_wake_if(server, credits, 0);
-			free_xid(xid);
-			return 0;
-		}
-
-		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
-		if (rc) {
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
+		if (nr_pages > pagevec_count(pages))
+			nr_pages = pagevec_count(pages);
+
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
 			/* best to give up if we're out of mem */
-			list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-				list_del(&page->lru);
-				lru_cache_add_file(page);
-				unlock_page(page);
-				put_page(page);
-			}
-
-			rc = -ENOMEM;
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
 		rdata->cfile = cifsFileInfo_get(open_file);
 		rdata->mapping = mapping;
-		rdata->offset = offset;
-		rdata->bytes = bytes;
+		rdata->offset = (loff_t)index << PAGE_SHIFT;
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_SIZE;
 		rdata->tailsz = PAGE_SIZE;
@@ -4452,9 +4367,10 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 		rdata->copy_into_pages = cifs_readpages_copy_into_pages;
 		rdata->credits = credits_on_stack;
 
-		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-			list_del(&page->lru);
-			rdata->pages[rdata->nr_pages++] = page;
+		for (i = 0; i < nr_pages; i++) {
+			rdata->pages[rdata->nr_pages++] = pagevec_next(pages);
+			index++;
+			rdata->bytes += PAGE_SIZE;
 		}
 
 		rc = adjust_credits(server, &rdata->credits, rdata->bytes);
@@ -4470,7 +4386,6 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 			add_credits_and_wake_if(server, &rdata->credits, 0);
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
-				lru_cache_add_file(page);
 				unlock_page(page);
 				put_page(page);
 			}
@@ -4486,9 +4401,7 @@ static void cifs_readahead(struct file *file, struct address_space *mapping,
 	 * the pagecache must be uncached before they get returned to the
 	 * allocator.
 	 */
-	cifs_fscache_readpages_cancel(mapping->host, page_list);
 	free_xid(xid);
-	return rc;
 }
 
 /*
@@ -4806,7 +4719,7 @@ cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)
 
 const struct address_space_operations cifs_addr_ops = {
 	.readpage = cifs_readpage,
-	.readpages = cifs_readpages,
+	.readahead = cifs_readahead,
 	.writepage = cifs_writepage,
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,
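For the batch sizing in cifs_readahead() above: nr_pages starts as
rsize / PAGE_SIZE and is then clamped to pagevec_count(pages).  With an
rsize of 64kB and 4kB pages, for example, that yields 16, but a pagevec
carries at most PAGEVEC_SIZE (15) pages, so each cifs_readdata here
describes at most 15 pages (60kB) per network read.
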
From patchwork Mon Jan 13 15:37:45 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330439
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 7/8] mm: Remove add_to_page_cache_locked
Date: Mon, 13 Jan 2020 07:37:45 -0800
Message-Id: <20200113153746.26654-8-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>
References: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

The only remaining caller of add_to_page_cache_locked() is
add_to_page_cache(), and the only caller of that is hugetlbfs, so move
add_to_page_cache() into filemap.c and have it call
__add_to_page_cache_locked() directly.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 20 ++------------------
 mm/filemap.c            | 23 ++++++++---------------
 2 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 37a4d9e32cd3..3ce051fb9c73 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -604,8 +604,8 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 	return 0;
 }
 
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-				pgoff_t index, gfp_t gfp_mask);
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+		pgoff_t index, gfp_t gfp);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 				pgoff_t index, gfp_t gfp_mask);
 extern void delete_from_page_cache(struct page *page);
@@ -614,22 +614,6 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
-/*
- * Like add_to_page_cache_locked, but used to add newly allocated pages:
- * the page is new, so we can just run __SetPageLocked() against it.
- */
-static inline int add_to_page_cache(struct page *page,
-		struct address_space *mapping, pgoff_t offset, gfp_t gfp_mask)
-{
-	int error;
-
-	__SetPageLocked(page);
-	error = add_to_page_cache_locked(page, mapping, offset, gfp_mask);
-	if (unlikely(error))
-		__ClearPageLocked(page);
-	return error;
-}
-
 static inline unsigned long dir_pages(struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
diff --git a/mm/filemap.c b/mm/filemap.c
index bf6aa30be58d..fb87f5fa75e6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -913,25 +913,18 @@ static int __add_to_page_cache_locked(struct page *page,
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 
-/**
- * add_to_page_cache_locked - add a locked page to the pagecache
- * @page: page to add
- * @mapping: the page's address_space
- * @offset: page index
- * @gfp_mask: page allocation mode
- *
- * This function is used to add a page to the pagecache. It must be locked.
- * This function does not add the page to the LRU. The caller must do that.
- *
- * Return: %0 on success, negative error code otherwise.
- */
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
+int add_to_page_cache(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
 {
-	return __add_to_page_cache_locked(page, mapping, offset,
+	int err;
+
+	__SetPageLocked(page);
+	err = __add_to_page_cache_locked(page, mapping, offset,
 					 gfp_mask, NULL);
+	if (unlikely(err))
+		__ClearPageLocked(page);
+	return err;
 }
-EXPORT_SYMBOL(add_to_page_cache_locked);
 
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
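The upshot of this patch is that locking a newly allocated page is now
add_to_page_cache()'s job rather than the caller's: it runs
__SetPageLocked() before the insert and __ClearPageLocked() on failure.
A hedged sketch of what a hugetlbfs-style caller looks like under the new
arrangement; example_add_new_page() is a made-up name, and how the page is
disposed of on failure is up to the real caller.

#include <linux/pagemap.h>
#include <linux/gfp.h>

static int example_add_new_page(struct page *page,
				struct address_space *mapping, pgoff_t index)
{
	/* The page must be newly allocated and not yet visible to anyone
	 * else; add_to_page_cache() handles the lock bit itself now. */
	int err = add_to_page_cache(page, mapping, index, GFP_KERNEL);

	if (err) {
		put_page(page);	/* drop the caller's reference */
		return err;
	}
	/* Success: the page is locked and in the page cache; any LRU
	 * handling is still the caller's responsibility at this point. */
	return 0;
}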
From patchwork Mon Jan 13 15:37:46 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11330457
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 8/8] mm: Unify all add_to_page_cache variants
Date: Mon, 13 Jan 2020 07:37:46 -0800
Message-Id: <20200113153746.26654-9-willy@infradead.org>
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>
References: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We already have various bits of add_to_page_cache() executed conditionally
on !PageHuge(page); add the add_to_page_cache_lru() pieces as some more
code which isn't executed for huge pages. This lets us remove the old
add_to_page_cache() and rename __add_to_page_cache_locked() to
add_to_page_cache(). Include a compatibility define so we don't have to
change all 20+ callers of add_to_page_cache_lru().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h |  5 ++--
 mm/filemap.c            | 65 ++++++++++++-----------------------------
 2 files changed, 21 insertions(+), 49 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 3ce051fb9c73..753e8df6a5b1 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -606,14 +606,15 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 
 int add_to_page_cache(struct page *page, struct address_space *mapping,
 		pgoff_t index, gfp_t gfp);
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp_mask);
 extern void delete_from_page_cache(struct page *page);
 extern void __delete_from_page_cache(struct page *page, void *shadow);
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
+#define add_to_page_cache_lru(page, mapping, index, gfp) \
+	add_to_page_cache(page, mapping, index, gfp)
+
 static inline unsigned long dir_pages(struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
diff --git a/mm/filemap.c b/mm/filemap.c
index fb87f5fa75e6..83f45f31a00a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -847,19 +847,18 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
-static int __add_to_page_cache_locked(struct page *page,
-				      struct address_space *mapping,
-				      pgoff_t offset, gfp_t gfp_mask,
-				      void **shadowp)
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+		pgoff_t offset, gfp_t gfp_mask)
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
-	void *old;
+	void *old, *shadow = NULL;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	__SetPageLocked(page);
 	mapping_set_update(&xas, mapping);
 
 	if (!huge) {
@@ -884,8 +883,7 @@ static int __add_to_page_cache_locked(struct page *page,
 
 		if (xa_is_value(old)) {
 			mapping->nrexceptional--;
-			if (shadowp)
-				*shadowp = old;
+			shadow = old;
 		}
 		mapping->nrpages++;
@@ -899,45 +897,8 @@ static int __add_to_page_cache_locked(struct page *page,
 	if (xas_error(&xas))
 		goto error;
 
-	if (!huge)
+	if (!huge) {
 		mem_cgroup_commit_charge(page, memcg, false, false);
-	trace_mm_filemap_add_to_page_cache(page);
-	return 0;
-error:
-	page->mapping = NULL;
-	/* Leave page->index set: truncation relies upon it */
-	if (!huge)
-		mem_cgroup_cancel_charge(page, memcg, false);
-	put_page(page);
-	return xas_error(&xas);
-}
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
-
-int add_to_page_cache(struct page *page, struct address_space *mapping,
-		pgoff_t offset, gfp_t gfp_mask)
-{
-	int err;
-
-	__SetPageLocked(page);
-	err = __add_to_page_cache_locked(page, mapping, offset,
-					 gfp_mask, NULL);
-	if (unlikely(err))
-		__ClearPageLocked(page);
-	return err;
-}
-
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-		pgoff_t offset, gfp_t gfp_mask)
-{
-	void *shadow = NULL;
-	int ret;
-
-	__SetPageLocked(page);
-	ret = __add_to_page_cache_locked(page, mapping, offset,
-					 gfp_mask, &shadow);
-	if (unlikely(ret))
-		__ClearPageLocked(page);
-	else {
 		/*
 		 * The page might have been evicted from cache only
 		 * recently, in which case it should be activated like
@@ -951,9 +912,19 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 			workingset_refault(page, shadow);
 		lru_cache_add(page);
 	}
-	return ret;
+	trace_mm_filemap_add_to_page_cache(page);
+	return 0;
+error:
+	page->mapping = NULL;
+	/* Leave page->index set: truncation relies upon it */
+	if (!huge)
+		mem_cgroup_cancel_charge(page, memcg, false);
+	put_page(page);
+	__ClearPageLocked(page);
+	return xas_error(&xas);
 }
-EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
+ALLOW_ERROR_INJECTION(add_to_page_cache, ERRNO);
+EXPORT_SYMBOL_GPL(add_to_page_cache);
 
 #ifdef CONFIG_NUMA
 struct page *__page_cache_alloc(gfp_t gfp)
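After this final patch the compatibility define makes the two entry points
interchangeable: add_to_page_cache_lru() simply expands to
add_to_page_cache(), which, per the commit message's intent, locks the
page, charges it to the memcg, handles shadow-entry refaults and puts it
on the LRU, skipping those steps for huge pages. A sketch of a typical
read-path caller under the unified scheme; example_grab_new_page() is a
hypothetical helper, not part of the patch.

#include <linux/pagemap.h>
#include <linux/gfp.h>

static struct page *example_grab_new_page(struct address_space *mapping,
					  pgoff_t index)
{
	struct page *page = __page_cache_alloc(mapping_gfp_mask(mapping));

	if (!page)
		return NULL;
	/* Expands to add_to_page_cache() via the new define; on success
	 * the page comes back locked, in the cache and on the LRU. */
	if (add_to_page_cache_lru(page, mapping, index,
				  mapping_gfp_mask(mapping))) {
		/* Most likely someone else instantiated this index first. */
		put_page(page);
		return NULL;
	}
	return page;
}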