From patchwork Mon Jan 13 15:37:44 2020
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 11330465
From: Matthew Wilcox <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, jlayton@kernel.org,
 hch@infradead.org
Subject: [PATCH 6/8] cifs: Convert from readpages to readahead
Date: Mon, 13 Jan 2020 07:37:44 -0800
Message-Id: <20200113153746.26654-7-willy@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>
References: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in CIFS.  The VFS now locks the pages
and adds them to the page cache before calling the filesystem, so the
open-coded page gathering in readpages_get_pages() can be removed.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
A sketch of the ->readahead calling convention this patch relies on is
appended after the diff.

 fs/cifs/file.c | 125 ++++++++-----------------------------------------
 1 file changed, 19 insertions(+), 106 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 043288b5c728..816670b501d8 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4280,70 +4280,10 @@ cifs_readpages_copy_into_pages(struct TCP_Server_Info *server,
 	return readpages_fill_pages(server, rdata, iter, iter->count);
 }
 
-static int
-readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
-		    unsigned int rsize, struct list_head *tmplist,
-		    unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)
-{
-	struct page *page, *tpage;
-	unsigned int expected_index;
-	int rc;
-	gfp_t gfp = readahead_gfp_mask(mapping);
-
-	INIT_LIST_HEAD(tmplist);
-
-	page = lru_to_page(page_list);
-
-	/*
-	 * Lock the page and put it in the cache. Since no one else
-	 * should have access to this page, we're safe to simply set
-	 * PG_locked without checking it first.
-	 */
-	__SetPageLocked(page);
-	rc = add_to_page_cache_locked(page, mapping,
-				      page->index, gfp);
-
-	/* give up if we can't stick it in the cache */
-	if (rc) {
-		__ClearPageLocked(page);
-		return rc;
-	}
-
-	/* move first page to the tmplist */
-	*offset = (loff_t)page->index << PAGE_SHIFT;
-	*bytes = PAGE_SIZE;
-	*nr_pages = 1;
-	list_move_tail(&page->lru, tmplist);
-
-	/* now try and add more pages onto the request */
-	expected_index = page->index + 1;
-	list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
-		/* discontinuity ? */
-		if (page->index != expected_index)
-			break;
-
-		/* would this page push the read over the rsize? */
-		if (*bytes + PAGE_SIZE > rsize)
-			break;
-
-		__SetPageLocked(page);
-		if (add_to_page_cache_locked(page, mapping, page->index, gfp)) {
-			__ClearPageLocked(page);
-			break;
-		}
-		list_move_tail(&page->lru, tmplist);
-		(*bytes) += PAGE_SIZE;
-		expected_index++;
-		(*nr_pages)++;
-	}
-	return rc;
-}
-
-static int cifs_readpages(struct file *file, struct address_space *mapping,
-			  struct list_head *page_list, unsigned num_pages)
+static void cifs_readahead(struct file *file, struct address_space *mapping,
+			   struct pagevec *pages, pgoff_t index)
 {
 	int rc;
-	struct list_head tmplist;
 	struct cifsFileInfo *open_file = file->private_data;
 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
 	struct TCP_Server_Info *server;
@@ -4358,11 +4298,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * After this point, every page in the list might have PG_fscache set,
 	 * so we will need to clean that up off of every page we don't use.
 	 */
-	rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
-					 &num_pages);
+	rc = -ENOBUFS;	/* fscache path disabled until it gains a readahead hook */
 	if (rc == 0) {
 		free_xid(xid);
-		return rc;
+		return;
 	}
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
@@ -4373,24 +4312,12 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	rc = 0;
 	server = tlink_tcon(open_file->tlink)->ses->server;
 
-	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
-		 __func__, file, mapping, num_pages);
+	cifs_dbg(FYI, "%s: file=%p mapping=%p index=%lu\n",
+		 __func__, file, mapping, index);
 
-	/*
-	 * Start with the page at end of list and move it to private
-	 * list. Do the same with any following pages until we hit
-	 * the rsize limit, hit an index discontinuity, or run out of
-	 * pages. Issue the async read and then start the loop again
-	 * until the list is empty.
-	 *
-	 * Note that list order is important. The page_list is in
-	 * the order of declining indexes. When we put the pages in
-	 * the rdata->pages, then we want them in increasing order.
-	 */
-	while (!list_empty(page_list)) {
-		unsigned int i, nr_pages, bytes, rsize;
-		loff_t offset;
-		struct page *page, *tpage;
+	while (pages->first < pages->nr) {
+		unsigned int i, nr_pages, rsize;
+		struct page *page;
 		struct cifs_readdata *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
@@ -4408,6 +4335,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		if (rc)
 			break;
 
+		nr_pages = rsize / PAGE_SIZE;
 		/*
 		 * Give up immediately if rsize is too small to read an entire
 		 * page. The VFS will fall back to readpage. We should never
@@ -4415,36 +4343,23 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		 * reach this point however since we set ra_pages to 0 when the
 		 * rsize is smaller than a cache page.
 		 */
 		if (unlikely(rsize < PAGE_SIZE)) {
-			add_credits_and_wake_if(server, credits, 0);
-			free_xid(xid);
-			return 0;
-		}
-
-		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
-		if (rc) {
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
+		if (nr_pages > pagevec_count(pages))
+			nr_pages = pagevec_count(pages);
+
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
 			/* best to give up if we're out of mem */
-			list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-				list_del(&page->lru);
-				lru_cache_add_file(page);
-				unlock_page(page);
-				put_page(page);
-			}
-			rc = -ENOMEM;
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
 		rdata->cfile = cifsFileInfo_get(open_file);
 		rdata->mapping = mapping;
-		rdata->offset = offset;
-		rdata->bytes = bytes;
+		rdata->offset = (loff_t)index << PAGE_SHIFT;
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_SIZE;
 		rdata->tailsz = PAGE_SIZE;
@@ -4452,9 +4367,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata->copy_into_pages = cifs_readpages_copy_into_pages;
 		rdata->credits = credits_on_stack;
 
-		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-			list_del(&page->lru);
-			rdata->pages[rdata->nr_pages++] = page;
+		for (i = 0; i < nr_pages; i++) {
+			rdata->pages[rdata->nr_pages++] = pagevec_next(pages);
+			index++;
+			rdata->bytes += PAGE_SIZE;
 		}
 
 		rc = adjust_credits(server, &rdata->credits, rdata->bytes);
@@ -4470,7 +4386,6 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 			add_credits_and_wake_if(server, &rdata->credits, 0);
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
-				lru_cache_add_file(page);
 				unlock_page(page);
 				put_page(page);
 			}
@@ -4486,9 +4401,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * the pagecache must be uncached before they get returned to the
 	 * allocator.
 	 */
-	cifs_fscache_readpages_cancel(mapping->host, page_list);
 	free_xid(xid);
-	return rc;
 }
 
 /*
@@ -4806,7 +4719,7 @@ cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)
 
 const struct address_space_operations cifs_addr_ops = {
 	.readpage = cifs_readpage,
-	.readpages = cifs_readpages,
+	.readahead = cifs_readahead,
 	.writepage = cifs_writepage,
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,
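
---

For reviewers unfamiliar with the series, here is a minimal sketch of what
a ->readahead implementation looks like under the calling convention this
patch targets.  It is an illustration only, not part of the patch, and its
assumptions are flagged inline: the (file, mapping, pages, index) signature
and pagevec_next() come from earlier patches in this series rather than
from mainline, the sketch assumes pagevec_next() returns NULL once the
vector is exhausted, and example_read_page() is a hypothetical stand-in
for a filesystem's real I/O path.

	/*
	 * Minimal ->readahead sketch under the pagevec-based API proposed
	 * by this series.  The VFS has already locked each page and added
	 * it to the page cache; the filesystem must unlock and release
	 * every page it consumes, exactly as cifs_readahead() does on its
	 * error path above.
	 */
	static void example_readahead(struct file *file,
				      struct address_space *mapping,
				      struct pagevec *pages, pgoff_t index)
	{
		struct page *page;

		/* Assumption: pagevec_next() returns NULL when exhausted. */
		while ((page = pagevec_next(pages)) != NULL) {
			/* example_read_page() is hypothetical synchronous I/O. */
			if (example_read_page(file, page) == 0)
				SetPageUptodate(page);
			unlock_page(page);	/* VFS locked it before calling us */
			put_page(page);		/* drop the reference handed to us */
			index++;
		}
	}

The contract change is visible throughout the diff: the filesystem no
longer calls add_to_page_cache_locked() or lru_cache_add_file(), and the
fscache hook is stubbed out with -ENOBUFS because the old readpages-based
fscache API has no readahead equivalent at this point in the series.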