From patchwork Mon Jan 11 22:07:52 2016
From: Benjamin LaHaise
Date: Mon, 11 Jan 2016 17:07:52 -0500
To: linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-mm@kvack.org
Cc: Alexander Viro, Andrew Morton, Linus Torvalds
Subject: [PATCH 11/13] mm: enable __do_page_cache_readahead() to include
	present pages
Message-ID: <7b76f5442bab13114bbb75c3143e1ccc5f17de98.1452549431.git.bcrl@kvack.org>

For the upcoming AIO readahead operation it is necessary to know that
all the pages in a readahead request have had reads issued for them,
or that the reads were satisfied from the page cache.  Add a parameter
to __do_page_cache_readahead() to instruct it to count these
already-present pages in the return value.

Signed-off-by: Benjamin LaHaise
---
 mm/internal.h  |  4 ++--
 mm/readahead.c | 13 +++++++++----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 38e24b8..7599068 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -43,7 +43,7 @@ static inline void set_page_count(struct page *page, int v)
 
 extern int __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
-		unsigned long lookahead_size);
+		unsigned long lookahead_size, int report_present);
 
 /*
  * Submit IO for the read-ahead request in file_ra_state.
@@ -52,7 +52,7 @@ static inline unsigned long ra_submit(struct file_ra_state *ra,
 		struct address_space *mapping, struct file *filp)
 {
 	return __do_page_cache_readahead(mapping, filp,
-					ra->start, ra->size, ra->async_size);
+					ra->start, ra->size, ra->async_size, 0);
 }
 
 /*
diff --git a/mm/readahead.c b/mm/readahead.c
index ba22d7f..afd3abe 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -151,12 +151,13 @@ out:
  */
 int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 		pgoff_t offset, unsigned long nr_to_read,
-		unsigned long lookahead_size)
+		unsigned long lookahead_size, int report_present)
 {
 	struct inode *inode = mapping->host;
 	struct page *page;
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
+	int present = 0;
 	int page_idx;
 	int ret = 0;
 	loff_t isize = i_size_read(inode);
@@ -178,8 +179,10 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 		rcu_read_lock();
 		page = radix_tree_lookup(&mapping->page_tree, page_offset);
 		rcu_read_unlock();
-		if (page && !radix_tree_exceptional_entry(page))
+		if (page && !radix_tree_exceptional_entry(page)) {
+			present++;
 			continue;
+		}
 
 		page = page_cache_alloc_readahead(mapping);
 		if (!page)
@@ -199,6 +202,8 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 	if (ret)
 		read_pages(mapping, filp, &page_pool, ret);
 	BUG_ON(!list_empty(&page_pool));
+	if (report_present)
+		ret += present;
 out:
 	return ret;
 }
@@ -222,7 +227,7 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
 		err = __do_page_cache_readahead(mapping, filp,
-						offset, this_chunk, 0);
+						offset, this_chunk, 0, 0);
 		if (err < 0)
 			return err;
 
@@ -441,7 +446,7 @@ ondemand_readahead(struct address_space *mapping,
 	 * standalone, small random read
 	 * Read as is, and do not pollute the readahead state.
 	 */
-	return __do_page_cache_readahead(mapping, filp, offset, req_size, 0);
+	return __do_page_cache_readahead(mapping, filp, offset, req_size, 0, 0);
 
 initial_readahead:
 	ra->start = offset;
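
[Editor's illustration, not part of the patch.]  A minimal sketch of how
a caller such as the upcoming AIO readahead might use the new
report_present flag: with report_present set, the return value counts
pages already present in the page cache as well as pages that had reads
issued, so the caller can tell whether the whole request is accounted
for.  The helper name readahead_covers_range() is hypothetical; only
__do_page_cache_readahead() and its new parameter come from this patch.

/*
 * Sketch: issue readahead for [offset, offset + nr_to_read) and report
 * whether every page in the range either had a read issued or was
 * already present in the page cache.
 */
static int readahead_covers_range(struct address_space *mapping,
				  struct file *filp, pgoff_t offset,
				  unsigned long nr_to_read)
{
	/* report_present = 1: count already-cached pages in the result. */
	int ret = __do_page_cache_readahead(mapping, filp, offset,
					    nr_to_read, 0, 1);

	if (ret < 0)
		return ret;

	/*
	 * A shortfall means a page allocation failed or the request ran
	 * past end of file; either way, not every page in the range is
	 * covered by an issued read or a cached page.
	 */
	return ret == (int)nr_to_read;
}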