From patchwork Thu Sep 3 14:08:36 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11753913
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: David Howells, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Eric Biggers, Song Liu, Matthew Wilcox, Yang Shi, Pankaj Gupta
Subject: [PATCH 1/9] Fix khugepaged's request size in collapse_file
Date: Thu, 3 Sep 2020 15:08:36 +0100
Message-Id: <20200903140844.14194-2-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

From: David Howells

collapse_file() in khugepaged passes PAGE_SIZE as the number of pages
to be read to page_cache_sync_readahead(). The intent was probably to
read a single page. Fix it to use the number of pages to the end of
the window instead.

Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: David Howells
Acked-by: Song Liu
Reviewed-by: Matthew Wilcox (Oracle)
Acked-by: Yang Shi
Acked-by: Pankaj Gupta
---
 mm/khugepaged.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e749e568e1ea..cfa0dba5fd3b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1709,7 +1709,7 @@ static void collapse_file(struct mm_struct *mm,
 				xas_unlock_irq(&xas);
 				page_cache_sync_readahead(mapping, &file->f_ra,
 							  file, index,
-							  PAGE_SIZE);
+							  end - index);
 				/* drain pagevecs to help isolate_lru_page() */
 				lru_add_drain();
 				page = find_lock_page(mapping, index);

From patchwork Thu Sep 3 14:08:37 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11753917
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", David Howells, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Eric Biggers
Subject: [PATCH 2/9] mm/readahead: Add DEFINE_READAHEAD
Date: Thu, 3 Sep 2020 15:08:37 +0100
Message-Id: <20200903140844.14194-3-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

Allow for a more concise definition of a struct readahead_control.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 7 +++++++
 mm/readahead.c          | 6 +-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7de11dcd534d..19bba4360436 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -749,6 +749,13 @@ struct readahead_control {
 	unsigned int _batch_count;
 };
 
+#define DEFINE_READAHEAD(rac, f, m, i)					\
+	struct readahead_control rac = {				\
+		.file = f,						\
+		.mapping = m,						\
+		._index = i,						\
+	}
+
 /**
  * readahead_page - Get the next page to read.
  * @rac: The current readahead request.
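The DEFINE_READAHEAD() helper added in the pagemap.h hunk above can be exercised outside the kernel with a minimal sketch. Everything here is a simplified stand-in, not the kernel's definitions: struct readahead_control is reduced to the fields the macro initialises, pgoff_t is approximated by unsigned long, and the file/mapping types are opaque placeholders.

```c
#include <stddef.h>

struct ra_file;		/* stand-in for struct file */
struct ra_mapping;	/* stand-in for struct address_space */

/* Reduced to the fields the macro touches; illustrative only. */
struct readahead_control {
	struct ra_file *file;
	struct ra_mapping *mapping;
	unsigned long _index;
	unsigned long _nr_pages;
	unsigned int _batch_count;
};

#define DEFINE_READAHEAD(rac, f, m, i)		\
	struct readahead_control rac = {	\
		.file = f,			\
		.mapping = m,			\
		._index = i,		\
	}

/* The one-line macro replaces a four-line designated initialiser;
 * the unnamed fields are zero-initialised, as with the open-coded
 * struct it replaces. */
unsigned long demo_define_readahead(unsigned long index)
{
	DEFINE_READAHEAD(rac, NULL, NULL, index);
	return rac._index + rac._nr_pages;	/* _nr_pages starts at 0 */
}
```

The value of the macro is visible in the mm/readahead.c hunk that follows, where four lines of initialiser collapse into one.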
diff --git a/mm/readahead.c b/mm/readahead.c
index 3c9a8dd7c56c..2126a2754e22 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -179,11 +179,7 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 {
 	LIST_HEAD(page_pool);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	struct readahead_control rac = {
-		.mapping = mapping,
-		.file = file,
-		._index = index,
-	};
+	DEFINE_READAHEAD(rac, file, mapping, index);
 	unsigned long i;
 
 	/*

From patchwork Thu Sep 3 14:08:38 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11753911
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", David Howells, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Eric Biggers
Subject: [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a
 readahead_control
Date: Thu, 3 Sep 2020 15:08:38 +0100
Message-Id: <20200903140844.14194-4-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

Define the readahead_control in the callers instead of inside
page_cache_ra_unbounded().
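The refactoring pattern this patch applies can be sketched in plain C: instead of the callee receiving mapping, file, and index as loose arguments and building its own control structure, the caller builds one and passes a pointer down. All names below (ra_unbounded_old, ra_unbounded_new, demo_caller) are hypothetical stand-ins, not kernel API, and the struct is trimmed to the fields the sketch uses.

```c
/* Trimmed stand-in for struct readahead_control; illustrative only. */
struct readahead_control {
	void *file;
	void *mapping;
	unsigned long _index;
	unsigned long _nr_pages;
};

/* Before: the callee takes loose arguments and builds its own state. */
unsigned long ra_unbounded_old(void *mapping, void *file,
			       unsigned long index, unsigned long nr_to_read)
{
	struct readahead_control rac = {
		.mapping = mapping, .file = file, ._index = index,
	};
	rac._nr_pages = nr_to_read;		/* pretend the pages were read */
	return rac._index + rac._nr_pages;	/* first index past the window */
}

/* After: the caller owns the control struct; the callee just uses it. */
unsigned long ra_unbounded_new(struct readahead_control *ractl,
			       unsigned long nr_to_read)
{
	ractl->_nr_pages = nr_to_read;
	return ractl->_index + ractl->_nr_pages;
}

/* A caller in the new style, mirroring how the verity code in this
 * patch defines the control at the top of the function. */
unsigned long demo_caller(unsigned long index, unsigned long nr)
{
	struct readahead_control ractl = { ._index = index };
	return ra_unbounded_new(&ractl, nr);
}
```

The observable behaviour of both styles is identical; what changes is who owns the state, which lets later patches thread the same struct through several layers.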
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ext4/verity.c        |  4 ++--
 fs/f2fs/verity.c        |  4 ++--
 include/linux/pagemap.h |  5 ++---
 mm/readahead.c          | 30 ++++++++++++++----------------
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index bbd5e7e0632b..5b7ba8f71153 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -349,6 +349,7 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
 					       pgoff_t index,
 					       unsigned long num_ra_pages)
 {
+	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
 	struct page *page;
 
 	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
@@ -358,8 +359,7 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
 		if (page)
 			put_page(page);
 		else if (num_ra_pages > 1)
-			page_cache_readahead_unbounded(inode->i_mapping, NULL,
-					index, num_ra_pages, 0);
+			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
 		page = read_mapping_page(inode->i_mapping, index, NULL);
 	}
 	return page;
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index 9eb0dba851e8..054ec852b5ea 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -228,6 +228,7 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
 					       pgoff_t index,
 					       unsigned long num_ra_pages)
 {
+	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
 	struct page *page;
 
 	index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
@@ -237,8 +238,7 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
 		if (page)
 			put_page(page);
 		else if (num_ra_pages > 1)
-			page_cache_readahead_unbounded(inode->i_mapping, NULL,
-					index, num_ra_pages, 0);
+			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
 		page = read_mapping_page(inode->i_mapping, index, NULL);
 	}
 	return page;
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 19bba4360436..2b613c369a2f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -705,9 +705,8 @@ void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
 void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
 		struct file *, struct page *, pgoff_t index,
 		unsigned long req_count);
-void page_cache_readahead_unbounded(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_count);
+void page_cache_ra_unbounded(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_count);
 
 /*
  * Like add_to_page_cache_locked, but used to add newly allocated pages:
diff --git a/mm/readahead.c b/mm/readahead.c
index 2126a2754e22..a444943781bb 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -158,10 +158,8 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
 }
 
 /**
- * page_cache_readahead_unbounded - Start unchecked readahead.
- * @mapping: File address space.
- * @file: This instance of the open file; used for authentication.
- * @index: First page index to read.
+ * page_cache_ra_unbounded - Start unchecked readahead.
+ * @ractl: Readahead control.
  * @nr_to_read: The number of pages to read.
  * @lookahead_size: Where to start the next readahead.
  *
@@ -173,13 +171,13 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
  * Context: File is referenced by caller. Mutexes may be held by caller.
  * May sleep, but will not reenter filesystem to reclaim memory.
  */
-void page_cache_readahead_unbounded(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size)
+void page_cache_ra_unbounded(struct readahead_control *ractl,
+		unsigned long nr_to_read, unsigned long lookahead_size)
 {
+	struct address_space *mapping = ractl->mapping;
+	unsigned long index = readahead_index(ractl);
 	LIST_HEAD(page_pool);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	DEFINE_READAHEAD(rac, file, mapping, index);
 	unsigned long i;
 
 	/*
@@ -200,7 +198,7 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 	for (i = 0; i < nr_to_read; i++) {
 		struct page *page = xa_load(&mapping->i_pages, index + i);
 
-		BUG_ON(index + i != rac._index + rac._nr_pages);
+		BUG_ON(index + i != ractl->_index + ractl->_nr_pages);
 
 		if (page && !xa_is_value(page)) {
 			/*
@@ -211,7 +209,7 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 			 * have a stable reference to this page, and it's
 			 * not worth getting one just for that.
 			 */
-			read_pages(&rac, &page_pool, true);
+			read_pages(ractl, &page_pool, true);
 			continue;
 		}
 
@@ -224,12 +222,12 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 		} else if (add_to_page_cache_lru(page, mapping, index + i,
 					gfp_mask) < 0) {
 			put_page(page);
-			read_pages(&rac, &page_pool, true);
+			read_pages(ractl, &page_pool, true);
 			continue;
 		}
 		if (i == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
-		rac._nr_pages++;
+		ractl->_nr_pages++;
 	}
 
 	/*
@@ -237,10 +235,10 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 	 * uptodate then the caller will launch readpage again, and
 	 * will then handle the error.
 	 */
-	read_pages(&rac, &page_pool, false);
+	read_pages(ractl, &page_pool, false);
 	memalloc_nofs_restore(nofs);
 }
-EXPORT_SYMBOL_GPL(page_cache_readahead_unbounded);
+EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
 
 /*
  * __do_page_cache_readahead() actually reads a chunk of disk.  It allocates
@@ -252,6 +250,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 		struct file *file, pgoff_t index, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct inode *inode = mapping->host;
 	loff_t isize = i_size_read(inode);
 	pgoff_t end_index;	/* The last page we want to read */
@@ -266,8 +265,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 	if (nr_to_read > end_index - index)
 		nr_to_read = end_index - index + 1;
 
-	page_cache_readahead_unbounded(mapping, file, index, nr_to_read,
-			lookahead_size);
+	page_cache_ra_unbounded(&ractl, nr_to_read, lookahead_size);
 }
 
 /*

From patchwork Thu Sep 3 14:08:39 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11753711
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", David Howells, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Eric Biggers
Subject: [PATCH 4/9] mm/readahead: Make do_page_cache_ra take a
 readahead_control
Date: Thu, 3 Sep 2020 15:08:39 +0100
Message-Id: <20200903140844.14194-5-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

Rename __do_page_cache_readahead() to do_page_cache_ra() and call it
directly from ondemand_readahead() instead of indirecting via
ra_submit().
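A rename like this keeps the remaining old-style call sites working by leaving the old entry point behind as a thin wrapper that builds the control struct and forwards, which is exactly the shape ra_submit() takes in the mm/internal.h hunk of this patch. The sketch below shows that wrapper pattern in plain C; the names (do_ra, ra_submit_compat) and the reduced struct are illustrative stand-ins, not the kernel's definitions.

```c
/* Illustrative stand-in for struct readahead_control. */
struct ractl {
	unsigned long index;
};

/* New-style core: operates on a caller-provided control struct. */
unsigned long do_ra(struct ractl *r, unsigned long nr)
{
	return r->index + nr;	/* e.g. first index past the request */
}

/* The old-style entry point survives as a wrapper, so call sites
 * that still pass loose arguments need not change in this patch;
 * they can be converted one by one in later patches. */
unsigned long ra_submit_compat(unsigned long start, unsigned long size)
{
	struct ractl r = { .index = start };
	return do_ra(&r, size);
}
```

Because the wrapper is a static inline in the kernel version, the compatibility layer costs nothing at runtime once inlined.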
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/internal.h  | 11 +++++------
 mm/readahead.c | 28 +++++++++++++++-------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 10c677655912..6aef85f62b9d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -51,18 +51,17 @@ void unmap_page_range(struct mmu_gather *tlb,
 
 void force_page_cache_readahead(struct address_space *, struct file *,
 		pgoff_t index, unsigned long nr_to_read);
-void __do_page_cache_readahead(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_size);
 
 /*
  * Submit IO for the read-ahead request in file_ra_state.
  */
 static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *filp)
+		struct address_space *mapping, struct file *file)
 {
-	__do_page_cache_readahead(mapping, filp,
-			ra->start, ra->size, ra->async_size);
+	DEFINE_READAHEAD(ractl, file, mapping, ra->start);
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 }
 
 /**
diff --git a/mm/readahead.c b/mm/readahead.c
index a444943781bb..577f180d9252 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -241,17 +241,16 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
 
 /*
- * __do_page_cache_readahead() actually reads a chunk of disk.  It allocates
+ * do_page_cache_ra() actually reads a chunk of disk.  It allocates
  * the pages first, then submits them for I/O. This avoids the very bad
  * behaviour which would occur if page allocations are causing VM writeback.
  * We really don't want to intermingle reads and writes like that.
  */
-void __do_page_cache_readahead(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size)
+void do_page_cache_ra(struct readahead_control *ractl,
+		unsigned long nr_to_read, unsigned long lookahead_size)
 {
-	DEFINE_READAHEAD(ractl, file, mapping, index);
-	struct inode *inode = mapping->host;
+	struct inode *inode = ractl->mapping->host;
+	unsigned long index = readahead_index(ractl);
 	loff_t isize = i_size_read(inode);
 	pgoff_t end_index;	/* The last page we want to read */
@@ -265,7 +264,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 	if (nr_to_read > end_index - index)
 		nr_to_read = end_index - index + 1;
 
-	page_cache_ra_unbounded(&ractl, nr_to_read, lookahead_size);
+	page_cache_ra_unbounded(ractl, nr_to_read, lookahead_size);
 }
 
 /*
@@ -273,10 +272,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
  * memory at once.
  */
 void force_page_cache_readahead(struct address_space *mapping,
-		struct file *filp, pgoff_t index, unsigned long nr_to_read)
+		struct file *file, pgoff_t index, unsigned long nr_to_read)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &filp->f_ra;
+	struct file_ra_state *ra = &file->f_ra;
 	unsigned long max_pages;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -294,7 +294,7 @@ void force_page_cache_readahead(struct address_space *mapping,
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
-		__do_page_cache_readahead(mapping, filp, index, this_chunk, 0);
+		do_page_cache_ra(&ractl, this_chunk, 0);
 
 		index += this_chunk;
 		nr_to_read -= this_chunk;
@@ -432,10 +432,11 @@ static int try_context_readahead(struct address_space *mapping,
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
 static void ondemand_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *filp,
+		struct file_ra_state *ra, struct file *file,
 		bool hit_readahead_marker, pgoff_t index,
 		unsigned long req_size)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
@@ -516,7 +517,7 @@ static void ondemand_readahead(struct address_space *mapping,
 	 * standalone, small random read
 	 * Read as is, and do not pollute the readahead state.
 	 */
-	__do_page_cache_readahead(mapping, filp, index, req_size, 0);
+	do_page_cache_ra(&ractl, req_size, 0);
 	return;
 
 initial_readahead:
@@ -542,7 +543,8 @@ static void ondemand_readahead(struct address_space *mapping,
 		}
 	}
 
-	ra_submit(ra, mapping, filp);
+	ractl._index = ra->start;
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 }
 
 /**

From patchwork Thu Sep 3 14:08:40 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11753713
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: David Howells, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Eric Biggers, Matthew Wilcox
Subject: [PATCH 5/9] mm/readahead: Make ondemand_readahead take a
 readahead_control
Date: Thu, 3 Sep 2020 15:08:40 +0100
Message-Id: <20200903140844.14194-6-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

From: David Howells

Make ondemand_readahead() take a readahead_control struct in
preparation for making do_sync_mmap_readahead() pass down an RAC
struct.
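One subtlety of passing the control struct down, visible at the end of this conversion: the caller initialises the struct's index to the page that triggered the readahead, but the window the heuristics compute may start elsewhere, so the function rewrites the index field before submitting (the `ractl->_index = ra->start` line in the diff). A hedged plain-C sketch of that flow follows; the struct and the function names are simplified stand-ins, not kernel API.

```c
/* Illustrative control struct with only the field used here. */
struct ractl {
	unsigned long _index;
};

/* Mirrors the kernel's readahead_index() accessor. */
unsigned long readahead_index_sketch(struct ractl *r)
{
	return r->_index;
}

/* Stands in for do_page_cache_ra(): reports the end of the window
 * actually submitted, [index, index + size). */
unsigned long submit_ra(struct ractl *r, unsigned long size)
{
	return readahead_index_sketch(r) + size;
}

/* The caller set _index to the faulting page, but the computed
 * window starts at ra_start, so _index is rewritten first. */
unsigned long ondemand_sketch(unsigned long fault_index,
			      unsigned long ra_start, unsigned long ra_size)
{
	struct ractl r = { ._index = fault_index };
	r._index = ra_start;	/* as in: ractl->_index = ra->start */
	return submit_ra(&r, ra_size);
}
```

Because the struct is now shared between caller and callee, updating the field is all that is needed to redirect the submission; no extra index parameter has to be threaded through.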
Signed-off-by: David Howells Signed-off-by: Matthew Wilcox (Oracle) --- mm/readahead.c | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/mm/readahead.c b/mm/readahead.c index 577f180d9252..73110c4148f8 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -431,15 +431,14 @@ static int try_context_readahead(struct address_space *mapping, /* * A minimal readahead algorithm for trivial sequential/random reads. */ -static void ondemand_readahead(struct address_space *mapping, - struct file_ra_state *ra, struct file *file, - bool hit_readahead_marker, pgoff_t index, +static void ondemand_readahead(struct readahead_control *ractl, + struct file_ra_state *ra, bool hit_readahead_marker, unsigned long req_size) { - DEFINE_READAHEAD(ractl, file, mapping, index); - struct backing_dev_info *bdi = inode_to_bdi(mapping->host); + struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host); unsigned long max_pages = ra->ra_pages; unsigned long add_pages; + unsigned long index = readahead_index(ractl); pgoff_t prev_index; /* @@ -477,7 +476,8 @@ static void ondemand_readahead(struct address_space *mapping, pgoff_t start; rcu_read_lock(); - start = page_cache_next_miss(mapping, index + 1, max_pages); + start = page_cache_next_miss(ractl->mapping, index + 1, + max_pages); rcu_read_unlock(); if (!start || start - index > max_pages) @@ -510,14 +510,15 @@ static void ondemand_readahead(struct address_space *mapping, * Query the page cache and look for the traces(cached history pages) * that a sequential stream would leave behind. */ - if (try_context_readahead(mapping, ra, index, req_size, max_pages)) + if (try_context_readahead(ractl->mapping, ra, index, req_size, + max_pages)) goto readit; /* * standalone, small random read * Read as is, and do not pollute the readahead state. 
*/ - do_page_cache_ra(&ractl, req_size, 0); + do_page_cache_ra(ractl, req_size, 0); return; initial_readahead: @@ -543,8 +544,8 @@ static void ondemand_readahead(struct address_space *mapping, } } - ractl._index = ra->start; - do_page_cache_ra(&ractl, ra->size, ra->async_size); + ractl->_index = ra->start; + do_page_cache_ra(ractl, ra->size, ra->async_size); } /** @@ -564,6 +565,8 @@ void page_cache_sync_readahead(struct address_space *mapping, struct file_ra_state *ra, struct file *filp, pgoff_t index, unsigned long req_count) { + DEFINE_READAHEAD(ractl, filp, mapping, index); + /* no read-ahead */ if (!ra->ra_pages) return; @@ -578,7 +581,7 @@ void page_cache_sync_readahead(struct address_space *mapping, } /* do read-ahead */ - ondemand_readahead(mapping, ra, filp, false, index, req_count); + ondemand_readahead(&ractl, ra, false, req_count); } EXPORT_SYMBOL_GPL(page_cache_sync_readahead); @@ -602,6 +605,8 @@ page_cache_async_readahead(struct address_space *mapping, struct page *page, pgoff_t index, unsigned long req_count) { + DEFINE_READAHEAD(ractl, filp, mapping, index); + /* no read-ahead */ if (!ra->ra_pages) return; @@ -624,7 +629,7 @@ page_cache_async_readahead(struct address_space *mapping, return; /* do read-ahead */ - ondemand_readahead(mapping, ra, filp, true, index, req_count); + ondemand_readahead(&ractl, ra, true, req_count); } EXPORT_SYMBOL_GPL(page_cache_async_readahead); From patchwork Thu Sep 3 14:08:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11753919 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 82383722 for ; Thu, 3 Sep 2020 14:58:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 68C4E2072A for ; Thu, 3 Sep 2020 14:58:02 +0000 (UTC) 
Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="tNrJZj08" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729140AbgICO6A (ORCPT ); Thu, 3 Sep 2020 10:58:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729144AbgICOJQ (ORCPT ); Thu, 3 Sep 2020 10:09:16 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB9D3C0611E0 for ; Thu, 3 Sep 2020 07:08:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=s8Y9NHR6VFmBSMmMh1rZLBkl40U5H74KzuWSPSFD8/8=; b=tNrJZj08hu8NN1BUitoYvBZVUF F0tJeTdfRb8qEKvczxSGMbBzXWLauiRYZ2+4qjBR1Img4Cr/mdfIhEPMkhshHgJ+ntt7I3GMZ2+r8 VJQMp2noQkUV0Jjelsy31pouTMtvfu5rthJV3J/AYCcDWzBW50ELLDtAL1AmxG9Wpe51FLG66hP/G 39OtBEs4aU2zgu4wqD3hCLlNSoHDgFsjZxQCgTo8ORHBhIbZW0unGqu60Tl3ibs4svQ4/nAuB5PwC S9tUuhXGPGYgfzKSOZJg1gJ4dhO0vCKJGO+/iBR3T3kSWS6PVTFSyr+1bK/V7EBlPfCPRhUw2eaQA HXyf91aA==; Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1kDpv2-0003iT-LD; Thu, 03 Sep 2020 14:08:52 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: David Howells , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Eric Biggers , Matthew Wilcox Subject: [PATCH 6/9] mm/readahead: Pass readahead_control to force_page_cache_ra Date: Thu, 3 Sep 2020 15:08:41 +0100 Message-Id: <20200903140844.14194-7-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20200903140844.14194-1-willy@infradead.org> References: <20200903140844.14194-1-willy@infradead.org> 
From: David Howells

Reimplement force_page_cache_readahead() as a wrapper around
force_page_cache_ra().  Pass the existing readahead_control from
page_cache_sync_readahead().

Signed-off-by: David Howells
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/internal.h  | 13 +++++++++----
 mm/readahead.c | 18 ++++++++++--------
 2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 6aef85f62b9d..5533e85bd123 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,10 +49,15 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);

-void force_page_cache_readahead(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read);
-void do_page_cache_ra(struct readahead_control *,
-		unsigned long nr_to_read, unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *, unsigned long nr_to_read,
+		unsigned long lookahead_size);
+void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+static inline void force_page_cache_readahead(struct address_space *mapping,
+		struct file *file, pgoff_t index, unsigned long nr_to_read)
+{
+	DEFINE_READAHEAD(ractl, file, mapping, index);
+	force_page_cache_ra(&ractl, nr_to_read);
+}

 /*
  * Submit IO for the read-ahead request in file_ra_state.
diff --git a/mm/readahead.c b/mm/readahead.c
index 73110c4148f8..3115ced5faae 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -271,13 +271,13 @@ void do_page_cache_ra(struct readahead_control *ractl,
 * Chunk the readahead into 2 megabyte units, so that we don't pin too much
 * memory at once.
 */
-void force_page_cache_readahead(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read)
+void force_page_cache_ra(struct readahead_control *ractl,
+		unsigned long nr_to_read)
 {
-	DEFINE_READAHEAD(ractl, file, mapping, index);
+	struct address_space *mapping = ractl->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &file->f_ra;
-	unsigned long max_pages;
+	struct file_ra_state *ra = &ractl->file->f_ra;
+	unsigned long max_pages, index;

 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -287,14 +287,16 @@ void force_page_cache_readahead(struct address_space *mapping,
 	 * If the request exceeds the readahead window, allow the read to
 	 * be up to the optimal hardware IO size
 	 */
+	index = readahead_index(ractl);
 	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
-	nr_to_read = min(nr_to_read, max_pages);
+	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
 	while (nr_to_read) {
 		unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;

 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
-		do_page_cache_ra(&ractl, this_chunk, 0);
+		ractl->_index = index;
+		do_page_cache_ra(ractl, this_chunk, 0);

 		index += this_chunk;
 		nr_to_read -= this_chunk;
@@ -576,7 +578,7 @@ void page_cache_sync_readahead(struct address_space *mapping,

 	/* be dumb */
 	if (filp && (filp->f_mode & FMODE_RANDOM)) {
-		force_page_cache_readahead(mapping, filp, index, req_count);
+		force_page_cache_ra(&ractl, req_count);
 		return;
 	}

From patchwork Thu Sep 3 14:08:42 2020
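As an aside, the 2MB chunking loop that force_page_cache_ra() keeps from the old force_page_cache_readahead() can be modelled in plain userspace C. This is an illustrative sketch, not kernel code: the `count_ra_chunks` helper and the 4KiB `SKETCH_PAGE_SIZE` constant are invented for the example, and the IO submission is replaced by a call counter.

```c
/* Model of the chunking policy in force_page_cache_ra(): readahead is
 * issued in 2 megabyte units so that no single call pins too much memory. */
#define SKETCH_PAGE_SIZE 4096UL

/* Returns how many do_page_cache_ra()-style calls the loop would make for a
 * request of nr_to_read pages, after clamping the request to max_pages. */
static unsigned long count_ra_chunks(unsigned long nr_to_read,
				     unsigned long max_pages)
{
	unsigned long calls = 0;

	if (nr_to_read > max_pages)
		nr_to_read = max_pages;
	while (nr_to_read) {
		/* 2MB worth of pages per submission */
		unsigned long this_chunk = (2 * 1024 * 1024) / SKETCH_PAGE_SIZE;

		if (this_chunk > nr_to_read)
			this_chunk = nr_to_read;
		calls++;			/* stand-in for do_page_cache_ra() */
		nr_to_read -= this_chunk;
	}
	return calls;
}
```

With 4KiB pages each chunk is 512 pages, so a 1024-page request is split into two submissions, and a request larger than `max_pages` is clamped first.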
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", David Howells, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Eric Biggers
Subject: [PATCH 7/9] mm/readahead: Add page_cache_sync_ra and page_cache_async_ra
Date: Thu, 3 Sep 2020 15:08:42 +0100
Message-Id:
<20200903140844.14194-8-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>

Reimplement page_cache_sync_readahead() and page_cache_async_readahead()
as wrappers around versions of the function which take a readahead_control
in preparation for making do_sync_mmap_readahead() pass down an RAC struct.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 64 ++++++++++++++++++++++++++++++++++-------
 mm/readahead.c          | 58 ++++++++-----------------------------
 2 files changed, 66 insertions(+), 56 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2b613c369a2f..12ab56c3a86f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -698,16 +698,6 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);

-#define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
-
-void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
-		struct file *, pgoff_t index, unsigned long req_count);
-void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
-		struct file *, struct page *, pgoff_t index,
-		unsigned long req_count);
-void page_cache_ra_unbounded(struct readahead_control *,
-		unsigned long nr_to_read, unsigned long lookahead_count);
-
 /*
  * Like add_to_page_cache_locked, but used to add newly allocated pages:
  * the page is new, so we can just run __SetPageLocked() against it.
@@ -755,6 +745,60 @@ struct readahead_control {
		._index = i,						\
	}

+#define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
+
+void page_cache_ra_unbounded(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_count);
+void page_cache_sync_ra(struct readahead_control *, struct file_ra_state *,
+		unsigned long req_count);
+void page_cache_async_ra(struct readahead_control *, struct file_ra_state *,
+		struct page *, unsigned long req_count);
+
+/**
+ * page_cache_sync_readahead - generic file readahead
+ * @mapping: address_space which holds the pagecache and I/O vectors
+ * @ra: file_ra_state which holds the readahead state
+ * @file: Used by the filesystem for authentication.
+ * @index: Index of first page to be read.
+ * @req_count: Total number of pages being read by the caller.
+ *
+ * page_cache_sync_readahead() should be called when a cache miss happened:
+ * it will submit the read.  The readahead logic may decide to piggyback more
+ * pages onto the read request if access patterns suggest it will improve
+ * performance.
+ */
+static inline
+void page_cache_sync_readahead(struct address_space *mapping,
+		struct file_ra_state *ra, struct file *file, pgoff_t index,
+		unsigned long req_count)
+{
+	DEFINE_READAHEAD(ractl, file, mapping, index);
+	page_cache_sync_ra(&ractl, ra, req_count);
+}
+
+/**
+ * page_cache_async_readahead - file readahead for marked pages
+ * @mapping: address_space which holds the pagecache and I/O vectors
+ * @ra: file_ra_state which holds the readahead state
+ * @file: Used by the filesystem for authentication.
+ * @page: The page at @index which triggered the readahead call.
+ * @index: Index of first page to be read.
+ * @req_count: Total number of pages being read by the caller.
+ *
+ * page_cache_async_readahead() should be called when a page is used which
+ * is marked as PageReadahead; this is a marker to suggest that the application
+ * has used up enough of the readahead window that we should start pulling in
+ * more pages.
+ */
+static inline
+void page_cache_async_readahead(struct address_space *mapping,
+		struct file_ra_state *ra, struct file *file,
+		struct page *page, pgoff_t index, unsigned long req_count)
+{
+	DEFINE_READAHEAD(ractl, file, mapping, index);
+	page_cache_async_ra(&ractl, ra, page, req_count);
+}
+
 /**
  * readahead_page - Get the next page to read.
  * @rac: The current readahead request.
diff --git a/mm/readahead.c b/mm/readahead.c
index 3115ced5faae..620ac83f35cc 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -550,25 +550,9 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }

-/**
- * page_cache_sync_readahead - generic file readahead
- * @mapping: address_space which holds the pagecache and I/O vectors
- * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @index: Index of first page to be read.
- * @req_count: Total number of pages being read by the caller.
- *
- * page_cache_sync_readahead() should be called when a cache miss happened:
- * it will submit the read.  The readahead logic may decide to piggyback more
- * pages onto the read request if access patterns suggest it will improve
- * performance.
- */
-void page_cache_sync_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *filp,
-		pgoff_t index, unsigned long req_count)
+void page_cache_sync_ra(struct readahead_control *ractl,
+		struct file_ra_state *ra, unsigned long req_count)
 {
-	DEFINE_READAHEAD(ractl, filp, mapping, index);
-
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -577,38 +561,20 @@ void page_cache_sync_readahead(struct address_space *mapping,
 		return;

 	/* be dumb */
-	if (filp && (filp->f_mode & FMODE_RANDOM)) {
-		force_page_cache_ra(&ractl, req_count);
+	if (ractl->file && (ractl->file->f_mode & FMODE_RANDOM)) {
+		force_page_cache_ra(ractl, req_count);
 		return;
 	}

 	/* do read-ahead */
-	ondemand_readahead(&ractl, ra, false, req_count);
+	ondemand_readahead(ractl, ra, false, req_count);
 }
-EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
+EXPORT_SYMBOL_GPL(page_cache_sync_ra);

-/**
- * page_cache_async_readahead - file readahead for marked pages
- * @mapping: address_space which holds the pagecache and I/O vectors
- * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @page: The page at @index which triggered the readahead call.
- * @index: Index of first page to be read.
- * @req_count: Total number of pages being read by the caller.
- *
- * page_cache_async_readahead() should be called when a page is used which
- * is marked as PageReadahead; this is a marker to suggest that the application
- * has used up enough of the readahead window that we should start pulling in
- * more pages.
- */
-void
-page_cache_async_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *filp,
-		struct page *page, pgoff_t index,
-		unsigned long req_count)
+void page_cache_async_ra(struct readahead_control *ractl,
+		struct file_ra_state *ra, struct page *page,
+		unsigned long req_count)
 {
-	DEFINE_READAHEAD(ractl, filp, mapping, index);
-
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -624,16 +590,16 @@ page_cache_async_readahead(struct address_space *mapping,
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
 	 */
-	if (inode_read_congested(mapping->host))
+	if (inode_read_congested(ractl->mapping->host))
 		return;
 	if (blk_cgroup_congested())
 		return;

 	/* do read-ahead */
-	ondemand_readahead(&ractl, ra, true, req_count);
+	ondemand_readahead(ractl, ra, true, req_count);
 }
-EXPORT_SYMBOL_GPL(page_cache_async_readahead);
+EXPORT_SYMBOL_GPL(page_cache_async_ra);

 ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
 {

From patchwork Thu Sep 3 14:08:43 2020
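The wrapper pattern used throughout this series — a `*_ra()` core that takes only a readahead_control, with the legacy entry point reduced to a static inline that builds the struct on the stack — can be modelled in userspace C. This is an illustrative sketch, not the kernel's code: the `sketch_` names and simplified types are invented, and the stand-in "core" just returns a value so the delegation can be observed.

```c
/* Simplified stand-in for struct readahead_control. */
struct sketch_ractl {
	void *file;
	void *mapping;
	unsigned long _index;
};

/* Modelled on DEFINE_READAHEAD(): declare and initialise a control struct
 * on the caller's stack in one step. */
#define SKETCH_DEFINE_READAHEAD(ractl, f, m, i)	\
	struct sketch_ractl ractl = {		\
		.file = (f),			\
		.mapping = (m),			\
		._index = (i),			\
	}

/* The new-style core operates only on the control struct. */
static unsigned long sketch_sync_ra(struct sketch_ractl *ractl,
				    unsigned long req_count)
{
	/* stand-in for submitting IO: return where the read would end */
	return ractl->_index + req_count;
}

/* The legacy signature survives as a thin wrapper, so existing callers
 * need no changes while new callers can pass a ractl directly. */
static unsigned long sketch_sync_readahead(void *mapping, void *file,
					   unsigned long index,
					   unsigned long req_count)
{
	SKETCH_DEFINE_READAHEAD(ractl, file, mapping, index);
	return sketch_sync_ra(&ractl, req_count);
}
```

The design choice this models: no caller is broken, the struct lives on the stack with no extra allocation, and the compiler can inline the wrapper away entirely.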
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: David Howells, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Eric Biggers, Matthew Wilcox
Subject: [PATCH 8/9] mm/filemap: Fold ra_submit into do_sync_mmap_readahead
Date: Thu, 3 Sep 2020 15:08:43 +0100
Message-Id: <20200903140844.14194-9-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>

From: David Howells

Fold ra_submit() into its last remaining user and pass the
readahead_control struct to both do_page_cache_ra() and
page_cache_sync_ra().
Signed-off-by: David Howells
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c  | 10 +++++-----
 mm/internal.h | 10 ----------
 2 files changed, 5 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1aaea26556cc..1ad49c33439a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2466,8 +2466,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	struct file *file = vmf->vma->vm_file;
 	struct file_ra_state *ra = &file->f_ra;
 	struct address_space *mapping = file->f_mapping;
+	DEFINE_READAHEAD(ractl, file, mapping, vmf->pgoff);
 	struct file *fpin = NULL;
-	pgoff_t offset = vmf->pgoff;
 	unsigned int mmap_miss;

 	/* If we don't want any read-ahead, don't bother */
@@ -2478,8 +2478,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	if (vmf->vma->vm_flags & VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-		page_cache_sync_readahead(mapping, ra, file, offset,
-				ra->ra_pages);
+		page_cache_sync_ra(&ractl, ra, ra->ra_pages);
 		return fpin;
 	}

@@ -2499,10 +2498,11 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	 * mmap read-around
 	 */
 	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
+	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
 	ra->size = ra->ra_pages;
 	ra->async_size = ra->ra_pages / 4;
-	ra_submit(ra, mapping, file);
+	ractl._index = ra->start;
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 	return fpin;
 }

diff --git a/mm/internal.h b/mm/internal.h
index 5533e85bd123..0a2e5caea2aa 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -59,16 +59,6 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 	force_page_cache_ra(&ractl, nr_to_read);
 }

-/*
- * Submit IO for the read-ahead request in file_ra_state.
- */
-static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *file)
-{
-	DEFINE_READAHEAD(ractl, file, mapping, ra->start);
-	do_page_cache_ra(&ractl, ra->size, ra->async_size);
-}
-
 /**
  * page_evictable - test whether a page is evictable
  * @page: the page to test

From patchwork Thu Sep 3 14:08:44 2020
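The read-around window that do_sync_mmap_readahead() sets up before submitting (now directly via do_page_cache_ra() rather than ra_submit()) can be modelled in userspace C. This is an illustrative sketch with invented `sketch_` names, not the kernel's code: it centres a window of `ra_pages` on the faulting offset, clamps the start at zero, and marks a quarter of the window asynchronous, mirroring the arithmetic in the hunk above.

```c
/* Simplified stand-in for the readahead window fields of file_ra_state. */
struct sketch_window {
	long start;
	unsigned long size;
	unsigned long async_size;
};

/* Compute the mmap read-around window for a fault at page offset pgoff. */
static struct sketch_window sketch_readaround(long pgoff,
					      unsigned long ra_pages)
{
	struct sketch_window w;
	long start = pgoff - (long)(ra_pages / 2);

	w.start = start > 0 ? start : 0;	/* max_t(long, 0, ...) */
	w.size = ra_pages;			/* full window is read */
	w.async_size = ra_pages / 4;		/* tail marked for async RA */
	return w;
}
```

For a fault at offset 100 with a 32-page window, the read starts at page 84 and the last 8 pages carry the PageReadahead marker; near the start of the file the window is simply clipped at zero rather than shrunk.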
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: David Howells, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Eric Biggers, Matthew Wilcox
Subject: [PATCH 9/9] mm/readahead: Pass a file_ra_state into force_page_cache_ra
Date: Thu, 3 Sep 2020 15:08:44 +0100
Message-Id: <20200903140844.14194-10-willy@infradead.org>
In-Reply-To: <20200903140844.14194-1-willy@infradead.org>
References: <20200903140844.14194-1-willy@infradead.org>

From: David Howells

The file_ra_state being passed into page_cache_sync_readahead() was
being ignored in favour of using the one embedded in the struct file.
The only caller for which this makes a difference is the fsverity code
if the file has been marked as POSIX_FADV_RANDOM, but it's confusing
and worth fixing.
Signed-off-by: David Howells
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/internal.h  | 5 +++--
 mm/readahead.c | 5 ++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 0a2e5caea2aa..ab4beb7c5cd2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -51,12 +51,13 @@ void unmap_page_range(struct mmu_gather *tlb,
 void do_page_cache_ra(struct readahead_control *, unsigned long nr_to_read,
 		unsigned long lookahead_size);
-void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+void force_page_cache_ra(struct readahead_control *, struct file_ra_state *,
+		unsigned long nr);
 static inline void force_page_cache_readahead(struct address_space *mapping,
 		struct file *file, pgoff_t index, unsigned long nr_to_read)
 {
 	DEFINE_READAHEAD(ractl, file, mapping, index);
-	force_page_cache_ra(&ractl, nr_to_read);
+	force_page_cache_ra(&ractl, &file->f_ra, nr_to_read);
 }

 /**
diff --git a/mm/readahead.c b/mm/readahead.c
index 620ac83f35cc..c6ffb76827da 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -272,11 +272,10 @@ void do_page_cache_ra(struct readahead_control *ractl,
 * memory at once.
 */
 void force_page_cache_ra(struct readahead_control *ractl,
-		unsigned long nr_to_read)
+		struct file_ra_state *ra, unsigned long nr_to_read)
 {
 	struct address_space *mapping = ractl->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &ractl->file->f_ra;
 	unsigned long max_pages, index;

 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -562,7 +561,7 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 	/* be dumb */
 	if (ractl->file && (ractl->file->f_mode & FMODE_RANDOM)) {
-		force_page_cache_ra(ractl, req_count);
+		force_page_cache_ra(ractl, ra, req_count);
 		return;
 	}
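The fix this last patch makes — honouring a caller-supplied file_ra_state instead of silently reaching for the one embedded in the file — can be modelled in userspace C. This is an illustrative sketch with invented `sketch_` names, not the kernel's code: the "core" takes the ra state explicitly, the compatibility wrapper passes the file-embedded state, and a caller holding a different ra (as page_cache_sync_ra() does) can pass its own.

```c
/* Simplified stand-ins for file_ra_state and struct file. */
struct sketch_ra_state { unsigned long ra_pages; };
struct sketch_file { struct sketch_ra_state f_ra; };

/* The core takes the readahead state explicitly, so it always uses the
 * state the caller chose rather than digging it out of the file. */
static unsigned long sketch_force_ra(struct sketch_ra_state *ra,
				     unsigned long nr_to_read)
{
	/* stand-in for clamping the request to this ra's window */
	return nr_to_read < ra->ra_pages ? nr_to_read : ra->ra_pages;
}

/* The legacy wrapper keeps the old behaviour: use the file-embedded state. */
static unsigned long sketch_force_readahead(struct sketch_file *file,
					    unsigned long nr_to_read)
{
	return sketch_force_ra(&file->f_ra, nr_to_read);
}
```

With the explicit parameter, a caller that passes a private ra with a 16-page window actually gets a 16-page clamp, where the old code would have used whatever window the file's own f_ra carried.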