From patchwork Fri Oct 16 02:43:01 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11840573
Date: Thu, 15 Oct 2020 19:43:01 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, dhowells@redhat.com, ebiggers@google.com,
 linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org, willy@infradead.org
Subject: [patch 035/156] mm/readahead: make do_page_cache_ra take a readahead_control
Message-ID: <20201016024301.QEVvbST1v%akpm@linux-foundation.org>
In-Reply-To: <20201015192732.f448da14e9854c7cb7299956@linux-foundation.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: mm/readahead: make do_page_cache_ra take a readahead_control

Rename __do_page_cache_readahead() to do_page_cache_ra() and call it
directly from ondemand_readahead() instead of indirecting via ra_submit().

Link: https://lkml.kernel.org/r/20200903140844.14194-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/internal.h  |   11 +++++------
 mm/readahead.c |   28 +++++++++++++++-------------
 2 files changed, 20 insertions(+), 19 deletions(-)

--- a/mm/internal.h~mm-readahead-make-do_page_cache_ra-take-a-readahead_control
+++ a/mm/internal.h
@@ -51,18 +51,17 @@ void unmap_page_range(struct mmu_gather
 
 void force_page_cache_readahead(struct address_space *, struct file *,
 		pgoff_t index, unsigned long nr_to_read);
-void __do_page_cache_readahead(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_size);
 
 /*
  * Submit IO for the read-ahead request in file_ra_state.
  */
 static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *filp)
+		struct address_space *mapping, struct file *file)
 {
-	__do_page_cache_readahead(mapping, filp,
-			ra->start, ra->size, ra->async_size);
+	DEFINE_READAHEAD(ractl, file, mapping, ra->start);
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t index);
--- a/mm/readahead.c~mm-readahead-make-do_page_cache_ra-take-a-readahead_control
+++ a/mm/readahead.c
@@ -241,17 +241,16 @@ void page_cache_ra_unbounded(struct read
 EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
 
 /*
- * __do_page_cache_readahead() actually reads a chunk of disk.  It allocates
+ * do_page_cache_ra() actually reads a chunk of disk.  It allocates
  * the pages first, then submits them for I/O. This avoids the very bad
  * behaviour which would occur if page allocations are causing VM writeback.
  * We really don't want to intermingle reads and writes like that.
 */
-void __do_page_cache_readahead(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size)
+void do_page_cache_ra(struct readahead_control *ractl,
+		unsigned long nr_to_read, unsigned long lookahead_size)
 {
-	DEFINE_READAHEAD(ractl, file, mapping, index);
-	struct inode *inode = mapping->host;
+	struct inode *inode = ractl->mapping->host;
+	unsigned long index = readahead_index(ractl);
 	loff_t isize = i_size_read(inode);
 	pgoff_t end_index;	/* The last page we want to read */
 
@@ -265,7 +264,7 @@ void __do_page_cache_readahead(struct ad
 	if (nr_to_read > end_index - index)
 		nr_to_read = end_index - index + 1;
 
-	page_cache_ra_unbounded(&ractl, nr_to_read, lookahead_size);
+	page_cache_ra_unbounded(ractl, nr_to_read, lookahead_size);
 }
 
 /*
@@ -273,10 +272,11 @@ void __do_page_cache_readahead(struct ad
  * memory at once.
  */
 void force_page_cache_readahead(struct address_space *mapping,
-		struct file *filp, pgoff_t index, unsigned long nr_to_read)
+		struct file *file, pgoff_t index, unsigned long nr_to_read)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &filp->f_ra;
+	struct file_ra_state *ra = &file->f_ra;
 	unsigned long max_pages;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -294,7 +294,7 @@ void force_page_cache_readahead(struct a
 
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
-		__do_page_cache_readahead(mapping, filp, index, this_chunk, 0);
+		do_page_cache_ra(&ractl, this_chunk, 0);
 
 		index += this_chunk;
 		nr_to_read -= this_chunk;
@@ -432,10 +432,11 @@ static int try_context_readahead(struct
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
 static void ondemand_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *filp,
+		struct file_ra_state *ra, struct file *file,
 		bool hit_readahead_marker, pgoff_t index,
 		unsigned long req_size)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
@@ -516,7 +517,7 @@ static void ondemand_readahead(struct ad
 	 * standalone, small random read
 	 * Read as is, and do not pollute the readahead state.
 	 */
-	__do_page_cache_readahead(mapping, filp, index, req_size, 0);
+	do_page_cache_ra(&ractl, req_size, 0);
 	return;
 
 initial_readahead:
@@ -542,7 +543,8 @@ readit:
 		}
 	}
 
-	ra_submit(ra, mapping, filp);
+	ractl._index = ra->start;
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 }
 
 /**
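
For readers following the conversion, the sketch below (not part of the patch)
illustrates the calling convention before and after: instead of passing
(mapping, file, index) to __do_page_cache_readahead(), a caller now bundles
them into a readahead_control, typically via DEFINE_READAHEAD(), and hands
that to do_page_cache_ra().  The helper example_ra() and its choice of 32
pages are hypothetical; the sketch assumes an mm-internal caller, since
do_page_cache_ra() is declared only in mm/internal.h and is not exported.

#include <linux/fs.h>
#include <linux/pagemap.h>
#include "internal.h"

/* Hypothetical helper: start readahead of 32 pages at 'index'. */
static void example_ra(struct address_space *mapping, struct file *file,
		pgoff_t index)
{
	/*
	 * Old convention (removed by this patch):
	 *	__do_page_cache_readahead(mapping, file, index, 32, 0);
	 *
	 * New convention: wrap mapping/file/index in a readahead_control
	 * and pass that instead of the three separate arguments.
	 */
	DEFINE_READAHEAD(ractl, file, mapping, index);

	do_page_cache_ra(&ractl, 32, 0);
}

Packing the (mapping, file, index) triple into one structure is what lets the
later patches in this series thread a single readahead_control through the
whole readahead path rather than growing every function's argument list.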