From patchwork Wed Sep 2 15:44:24 2020
Subject: [RFC PATCH 1/6] Fix khugepaged's request size in collapse_file() [ver #2]
From: David Howells <dhowells@redhat.com>
To: willy@infradead.org
Cc: Song Liu, dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Sep 2020 16:44:24 +0100
Message-ID: <159906146405.663183.8327943081419924909.stgit@warthog.procyon.org.uk>
In-Reply-To: <159906145700.663183.3678164182141075453.stgit@warthog.procyon.org.uk>

collapse_file() in khugepaged passes PAGE_SIZE as the number of pages to
be read ahead to page_cache_sync_readahead().  It seems this was expressed
as a number of bytes rather than a number of pages.  Fix it to use the
number of pages to the end of the window instead.
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Song Liu
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/khugepaged.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6d199c353281..f2d243077b74 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1706,7 +1706,7 @@ static void collapse_file(struct mm_struct *mm,
 				xas_unlock_irq(&xas);
 				page_cache_sync_readahead(mapping, &file->f_ra,
 							  file, index,
-							  PAGE_SIZE);
+							  end - index);
 				/* drain pagevecs to help isolate_lru_page() */
 				lru_add_drain();
 				page = find_lock_page(mapping, index);
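A note on units: the last argument of page_cache_sync_readahead() is a count
of pages, not bytes, so the old call requested PAGE_SIZE (4096 on most
arches) pages of readahead on every miss.  A standalone sketch of the
arithmetic (the index/end values below are hypothetical, for illustration
only):

#include <stdio.h>

#define PAGE_SIZE 4096UL        /* typical page size; the old, wrong "count" */

int main(void)
{
        unsigned long index = 100;      /* hypothetical: first missing page */
        unsigned long end = 512;        /* hypothetical: end of collapse window */

        /* Before the fix: PAGE_SIZE misused as a page count -> 16MiB. */
        printf("old: %lu pages = %lu MiB\n", PAGE_SIZE,
               PAGE_SIZE * PAGE_SIZE >> 20);

        /* After the fix: only the pages remaining in the window. */
        printf("new: %lu pages = %lu KiB\n", end - index,
               (end - index) * PAGE_SIZE >> 10);
        return 0;
}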
From patchwork Wed Sep 2 15:44:31 2020
Subject: [RFC PATCH 2/6] mm: Make ondemand_readahead() take a readahead_control struct [ver #2]
From: David Howells <dhowells@redhat.com>
To: willy@infradead.org
Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Sep 2020 16:44:31 +0100
Message-ID: <159906147106.663183.11426662588034129469.stgit@warthog.procyon.org.uk>
In-Reply-To: <159906145700.663183.3678164182141075453.stgit@warthog.procyon.org.uk>

Make ondemand_readahead() take a readahead_control struct in preparation
for making do_sync_mmap_readahead() pass down an RAC struct.

Signed-off-by: David Howells <dhowells@redhat.com>
---
 mm/readahead.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 91859e6e2b7d..e3e3419dfe3d 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -511,14 +511,14 @@ static bool page_cache_readahead_order(struct readahead_control *rac,
 /*
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
-static void ondemand_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *file,
-		struct page *page, pgoff_t index, unsigned long req_size)
+static void ondemand_readahead(struct readahead_control *rac,
+		struct file_ra_state *ra,
+		struct page *page, unsigned long req_size)
 {
-	DEFINE_READAHEAD(rac, file, mapping, index);
-	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
+	struct backing_dev_info *bdi = inode_to_bdi(rac->mapping->host);
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
+	unsigned long index = rac->_index;
 	pgoff_t prev_index;
 
 	/*
@@ -556,7 +556,7 @@ static void ondemand_readahead(struct address_space *mapping,
 		pgoff_t start;
 
 		rcu_read_lock();
-		start = page_cache_next_miss(mapping, index + 1, max_pages);
+		start = page_cache_next_miss(rac->mapping, index + 1, max_pages);
 		rcu_read_unlock();
 
 		if (!start || start - index > max_pages)
@@ -589,14 +589,14 @@ static void ondemand_readahead(struct address_space *mapping,
 	 * Query the page cache and look for the traces(cached history pages)
 	 * that a sequential stream would leave behind.
 	 */
-	if (try_context_readahead(mapping, ra, index, req_size, max_pages))
+	if (try_context_readahead(rac->mapping, ra, index, req_size, max_pages))
 		goto readit;
 
 	/*
 	 * standalone, small random read
 	 * Read as is, and do not pollute the readahead state.
 	 */
-	__do_page_cache_readahead(&rac, req_size, 0);
+	__do_page_cache_readahead(rac, req_size, 0);
 	return;
 
 initial_readahead:
@@ -622,10 +622,10 @@ static void ondemand_readahead(struct address_space *mapping,
 		}
 	}
 
-	rac._index = ra->start;
-	if (page && page_cache_readahead_order(&rac, ra, thp_order(page)))
+	rac->_index = ra->start;
+	if (page && page_cache_readahead_order(rac, ra, thp_order(page)))
 		return;
-	__do_page_cache_readahead(&rac, ra->size, ra->async_size);
+	__do_page_cache_readahead(rac, ra->size, ra->async_size);
 }
 
 /**
@@ -645,6 +645,8 @@ void page_cache_sync_readahead(struct address_space *mapping,
 			       struct file_ra_state *ra, struct file *filp,
 			       pgoff_t index, unsigned long req_count)
 {
+	DEFINE_READAHEAD(rac, filp, mapping, index);
+
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -659,7 +661,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
 	}
 
 	/* do read-ahead */
-	ondemand_readahead(mapping, ra, filp, NULL, index, req_count);
+	ondemand_readahead(&rac, ra, NULL, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
 
@@ -683,7 +685,9 @@ page_cache_async_readahead(struct address_space *mapping,
 			   struct page *page, pgoff_t index,
 			   unsigned long req_count)
 {
-	/* no read-ahead */
+	DEFINE_READAHEAD(rac, filp, mapping, index);
+
+	/* No Read-ahead */
 	if (!ra->ra_pages)
 		return;
 
@@ -705,7 +709,7 @@ page_cache_async_readahead(struct address_space *mapping,
 		return;
 
 	/* do read-ahead */
-	ondemand_readahead(mapping, ra, filp, page, index, req_count);
+	ondemand_readahead(&rac, ra, page, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_readahead);
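For readers following the conversion: DEFINE_READAHEAD declares and
initialises the on-stack readahead_control that now carries the
file/mapping/index triple.  At this point in the series it lives in
include/linux/pagemap.h and expands roughly as below (quoted from memory,
so treat it as a sketch rather than the authoritative definition):

#define DEFINE_READAHEAD(rac, f, m, i)					\
	struct readahead_control rac = {				\
		.file = f,						\
		.mapping = m,						\
		._index = i,						\
	}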
From patchwork Wed Sep 2 15:44:38 2020
Subject: [RFC PATCH 3/6] mm: Push readahead_control down into force_page_cache_readahead() [ver #2]
From: David Howells <dhowells@redhat.com>
To: willy@infradead.org
Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Sep 2020 16:44:38 +0100
Message-ID: <159906147806.663183.767620073654469472.stgit@warthog.procyon.org.uk>
In-Reply-To: <159906145700.663183.3678164182141075453.stgit@warthog.procyon.org.uk>

Push readahead_control down into force_page_cache_readahead() from its
callers in preparation for making do_sync_mmap_readahead() pass down an
RAC struct.

Signed-off-by: David Howells <dhowells@redhat.com>
---
 mm/fadvise.c   |  5 ++++-
 mm/internal.h  |  3 +--
 mm/readahead.c | 19 +++++++++++--------
 3 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index 0e66f2aaeea3..997f7c16690a 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -104,7 +104,10 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 		if (!nrpages)
 			nrpages = ~0UL;
 
-		force_page_cache_readahead(mapping, file, start_index, nrpages);
+		{
+			DEFINE_READAHEAD(rac, file, mapping, start_index);
+			force_page_cache_readahead(&rac, nrpages);
+		}
 		break;
 	case POSIX_FADV_NOREUSE:
 		break;
diff --git a/mm/internal.h b/mm/internal.h
index bf2bee6c42a1..c8ccf208f524 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,8 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-void force_page_cache_readahead(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read);
+void force_page_cache_readahead(struct readahead_control *, unsigned long);
 void __do_page_cache_readahead(struct readahead_control *,
 		unsigned long nr_to_read, unsigned long lookahead_size);
 
diff --git a/mm/readahead.c b/mm/readahead.c
index e3e3419dfe3d..366357e6e845 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -271,13 +271,13 @@ void __do_page_cache_readahead(struct readahead_control *rac,
  * Chunk the readahead into 2 megabyte units, so that we don't pin too much
  * memory at once.
  */
-void force_page_cache_readahead(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read)
+void force_page_cache_readahead(struct readahead_control *rac,
+		unsigned long nr_to_read)
 {
-	DEFINE_READAHEAD(rac, file, mapping, index);
+	struct address_space *mapping = rac->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &file->f_ra;
-	unsigned long max_pages;
+	struct file_ra_state *ra = &rac->file->f_ra;
+	unsigned long max_pages, index;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
 			!mapping->a_ops->readahead))
@@ -287,14 +287,17 @@ void force_page_cache_readahead(struct address_space *mapping,
 	 * If the request exceeds the readahead window, allow the read to
 	 * be up to the optimal hardware IO size
 	 */
+	index = readahead_index(rac);
 	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
-	nr_to_read = min(nr_to_read, max_pages);
+	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
 	while (nr_to_read) {
 		unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;
 
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
-		__do_page_cache_readahead(&rac, this_chunk, 0);
+
+		rac->_index = index;
+		__do_page_cache_readahead(rac, this_chunk, 0);
 
 		index += this_chunk;
 		nr_to_read -= this_chunk;
@@ -656,7 +659,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
 
 	/* be dumb */
 	if (filp && (filp->f_mode & FMODE_RANDOM)) {
-		force_page_cache_readahead(mapping, filp, index, req_count);
+		force_page_cache_readahead(&rac, req_count);
 		return;
 	}
 
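With this patch applied, a caller that wants forced readahead builds the
rac itself, as generic_fadvise() now does.  A hypothetical caller
(example_force_readahead() is not a real kernel function) would look like:

static void example_force_readahead(struct file *file, pgoff_t index,
				    unsigned long nr_to_read)
{
	/* The rac carries the file, mapping and start index... */
	DEFINE_READAHEAD(rac, file, file->f_mapping, index);

	/* ...so only the length is still passed separately. */
	force_page_cache_readahead(&rac, nr_to_read);
}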
From patchwork Wed Sep 2 15:44:45 2020
Subject: [RFC PATCH 4/6] mm: Pass readahead_control into page_cache_{sync,async}_readahead() [ver #2]
From: David Howells <dhowells@redhat.com>
To: willy@infradead.org
Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Sep 2020 16:44:45 +0100
Message-ID: <159906148519.663183.14012026331551396649.stgit@warthog.procyon.org.uk>
In-Reply-To: <159906145700.663183.3678164182141075453.stgit@warthog.procyon.org.uk>

Pass struct readahead_control into the page_cache_{sync,async}_readahead()
functions in preparation for making do_sync_mmap_readahead() pass down an
RAC struct.

Signed-off-by: David Howells <dhowells@redhat.com>
---
 fs/btrfs/free-space-cache.c |  4 +++-
 fs/btrfs/ioctl.c            |  9 ++++++---
 fs/btrfs/relocation.c       | 10 ++++++----
 fs/btrfs/send.c             | 16 ++++++++++------
 fs/ext4/dir.c               | 11 ++++++-----
 fs/f2fs/dir.c               |  8 ++++++--
 include/linux/pagemap.h     |  7 +++----
 mm/filemap.c                | 26 ++++++++++++++------------
 mm/khugepaged.c             |  4 ++--
 mm/readahead.c              | 34 +++++++++++++---------------------
 10 files changed, 69 insertions(+), 60 deletions(-)

diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index dc82fd0c80cb..c64af32453b6 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -288,6 +288,8 @@ static void readahead_cache(struct inode *inode)
 	struct file_ra_state *ra;
 	unsigned long last_index;
 
+	DEFINE_READAHEAD(rac, NULL, inode->i_mapping, 0);
+
 	ra = kzalloc(sizeof(*ra), GFP_NOFS);
 	if (!ra)
 		return;
@@ -295,7 +297,7 @@ static void readahead_cache(struct inode *inode)
 	file_ra_state_init(ra, inode->i_mapping);
 	last_index = (i_size_read(inode) - 1) >> PAGE_SHIFT;
 
-	page_cache_sync_readahead(inode->i_mapping, ra, NULL, 0, last_index);
+	page_cache_sync_readahead(&rac, ra, last_index);
 
 	kfree(ra);
 }
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index bd3511c5ca81..9f9321f20615 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1428,6 +1428,8 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
 	struct page **pages = NULL;
 	bool do_compress = range->flags & BTRFS_DEFRAG_RANGE_COMPRESS;
 
+	DEFINE_READAHEAD(rac, file, inode->i_mapping, 0);
+
 	if (isize == 0)
 		return 0;
 
@@ -1534,9 +1536,10 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
 
 		if (i + cluster > ra_index) {
 			ra_index = max(i, ra_index);
-			if (ra)
-				page_cache_sync_readahead(inode->i_mapping, ra,
-						file, ra_index, cluster);
+			if (ra) {
+				rac._index = ra_index;
+				page_cache_sync_readahead(&rac, ra, cluster);
+			}
 			ra_index += cluster;
 		}
 
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 4ba1ab9cc76d..3d21aeaaa762 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -2684,6 +2684,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
 	int nr = 0;
 	int ret = 0;
 
+	DEFINE_READAHEAD(rac, NULL, inode->i_mapping, 0);
+
 	if (!cluster->nr)
 		return 0;
 
@@ -2712,8 +2714,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
 
 		page = find_lock_page(inode->i_mapping, index);
 		if (!page) {
-			page_cache_sync_readahead(inode->i_mapping,
-						  ra, NULL, index,
+			rac._index = index;
+			page_cache_sync_readahead(&rac, ra,
 						  last_index + 1 - index);
 			page = find_or_create_page(inode->i_mapping, index,
 						   mask);
@@ -2728,8 +2730,8 @@ static int relocate_file_extent_cluster(struct inode *inode,
 		}
 
 		if (PageReadahead(page)) {
-			page_cache_async_readahead(inode->i_mapping,
-						   ra, NULL, page, index,
+			rac._index = index;
+			page_cache_async_readahead(&rac, ra, page,
 						   last_index + 1 - index);
 		}
 
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index d9813a5b075a..f41391fc4230 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -4811,6 +4811,8 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
 	unsigned pg_offset = offset_in_page(offset);
 	ssize_t ret = 0;
 
+	DEFINE_READAHEAD(rac, NULL, NULL, 0);
+
 	inode = btrfs_iget(fs_info->sb, sctx->cur_ino, root);
 	if (IS_ERR(inode))
 		return PTR_ERR(inode);
@@ -4829,15 +4831,18 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
 	/* initial readahead */
 	memset(&sctx->ra, 0, sizeof(struct file_ra_state));
 	file_ra_state_init(&sctx->ra, inode->i_mapping);
+	rac.mapping = inode->i_mapping;
 
 	while (index <= last_index) {
 		unsigned cur_len = min_t(unsigned, len, PAGE_SIZE - pg_offset);
 
+		rac._index = index;
+
 		page = find_lock_page(inode->i_mapping, index);
 		if (!page) {
-			page_cache_sync_readahead(inode->i_mapping, &sctx->ra,
-				NULL, index, last_index + 1 - index);
+			page_cache_sync_readahead(&rac, &sctx->ra,
+				last_index + 1 - index);
 
 			page = find_or_create_page(inode->i_mapping, index,
 					GFP_KERNEL);
@@ -4847,10 +4852,9 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
 			}
 		}
 
-		if (PageReadahead(page)) {
-			page_cache_async_readahead(inode->i_mapping, &sctx->ra,
-				NULL, page, index, last_index + 1 - index);
-		}
+		if (PageReadahead(page))
+			page_cache_async_readahead(&rac, &sctx->ra, page,
+				last_index + 1 - index);
 
 		if (!PageUptodate(page)) {
 			btrfs_readpage(NULL, page);
diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
index 1d82336b1cd4..9fca0de50e0f 100644
--- a/fs/ext4/dir.c
+++ b/fs/ext4/dir.c
@@ -118,6 +118,8 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
 	struct buffer_head *bh = NULL;
 	struct fscrypt_str fstr = FSTR_INIT(NULL, 0);
 
+	DEFINE_READAHEAD(rac, file, sb->s_bdev->bd_inode->i_mapping, 0);
+
 	if (IS_ENCRYPTED(inode)) {
 		err = fscrypt_get_encryption_info(inode);
 		if (err)
@@ -176,11 +178,10 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
 		if (err > 0) {
 			pgoff_t index = map.m_pblk >>
 					(PAGE_SHIFT - inode->i_blkbits);
-			if (!ra_has_index(&file->f_ra, index))
-				page_cache_sync_readahead(
-					sb->s_bdev->bd_inode->i_mapping,
-					&file->f_ra, file,
-					index, 1);
+			if (!ra_has_index(&file->f_ra, index)) {
+				rac._index = index;
+				page_cache_sync_readahead(&rac, &file->f_ra, 1);
+			}
 			file->f_ra.prev_pos = (loff_t)index << PAGE_SHIFT;
 			bh = ext4_bread(NULL, inode, map.m_lblk, 0);
 			if (IS_ERR(bh)) {
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index 069f498af1e3..69a316e7808d 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -1027,6 +1027,8 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
 	struct fscrypt_str fstr = FSTR_INIT(NULL, 0);
 	int err = 0;
 
+	DEFINE_READAHEAD(rac, file, inode->i_mapping, 0);
+
 	if (IS_ENCRYPTED(inode)) {
 		err = fscrypt_get_encryption_info(inode);
 		if (err)
@@ -1052,9 +1054,11 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
 		cond_resched();
 
 		/* readahead for multi pages of dir */
-		if (npages - n > 1 && !ra_has_index(ra, n))
-			page_cache_sync_readahead(inode->i_mapping, ra, file, n,
+		if (npages - n > 1 && !ra_has_index(ra, n)) {
+			rac._index = n;
+			page_cache_sync_readahead(&rac, ra,
 				min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES));
+		}
 
 		dentry_page = f2fs_find_data_page(inode, n);
 		if (IS_ERR(dentry_page)) {
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8bf048a76c43..3c362ddfeb4d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -769,11 +769,10 @@ void delete_from_page_cache_batch(struct address_space *mapping,
 
 #define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
 
-void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
-		struct file *, pgoff_t index, unsigned long req_count);
-void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
-		struct file *, struct page *, pgoff_t index,
+void page_cache_sync_readahead(struct readahead_control *, struct file_ra_state *,
 		unsigned long req_count);
+void page_cache_async_readahead(struct readahead_control *, struct file_ra_state *,
+		struct page *, unsigned long req_count);
 void page_cache_readahead_unbounded(struct readahead_control *,
 		unsigned long nr_to_read, unsigned long lookahead_count);
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 82b97cf4306c..fdfeedd1eb71 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2070,6 +2070,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 	unsigned int prev_offset;
 	int error = 0;
 
+	DEFINE_READAHEAD(rac, filp, mapping, 0);
+
 	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
 		return 0;
 	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
@@ -2097,9 +2099,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 		if (!page) {
 			if (iocb->ki_flags & IOCB_NOIO)
 				goto would_block;
-			page_cache_sync_readahead(mapping,
-					ra, filp,
-					index, last_index - index);
+			rac._index = index;
+			page_cache_sync_readahead(&rac, ra, last_index - index);
 			page = find_get_page(mapping, index);
 			if (unlikely(page == NULL))
 				goto no_cached_page;
@@ -2109,9 +2110,9 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 				put_page(page);
 				goto out;
 			}
-			page_cache_async_readahead(mapping,
-					ra, filp, thp_head(page),
-					index, last_index - index);
+			rac._index = index;
+			page_cache_async_readahead(&rac, ra, thp_head(page),
+					last_index - index);
 		}
 		if (!PageUptodate(page)) {
 			/*
@@ -2469,6 +2470,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	pgoff_t offset = vmf->pgoff;
 	unsigned int mmap_miss;
 
+	DEFINE_READAHEAD(rac, file, mapping, offset);
+
 	/* If we don't want any read-ahead, don't bother */
 	if (vmf->vma->vm_flags & VM_RAND_READ)
 		return fpin;
@@ -2477,8 +2480,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 
 	if (vmf->vma->vm_flags & VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-		page_cache_sync_readahead(mapping, ra, file, offset,
-					  ra->ra_pages);
+		page_cache_sync_readahead(&rac, ra, ra->ra_pages);
 		return fpin;
 	}
 
@@ -2515,10 +2517,10 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 {
 	struct file *file = vmf->vma->vm_file;
 	struct file_ra_state *ra = &file->f_ra;
-	struct address_space *mapping = file->f_mapping;
 	struct file *fpin = NULL;
 	unsigned int mmap_miss;
-	pgoff_t offset = vmf->pgoff;
+
+	DEFINE_READAHEAD(rac, file, file->f_mapping, vmf->pgoff);
 
 	/* If we don't want any read-ahead, don't bother */
 	if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages)
@@ -2528,8 +2530,8 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 		WRITE_ONCE(ra->mmap_miss, --mmap_miss);
 	if (PageReadahead(thp_head(page))) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-		page_cache_async_readahead(mapping, ra, file,
-					   thp_head(page), offset, ra->ra_pages);
+		page_cache_async_readahead(&rac, ra, thp_head(page),
+					   ra->ra_pages);
 	}
 	return fpin;
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index f2d243077b74..84305574b36d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1703,9 +1703,9 @@ static void collapse_file(struct mm_struct *mm,
 			}
 		} else {	/* !is_shmem */
 			if (!page || xa_is_value(page)) {
+				DEFINE_READAHEAD(rac, file, mapping, index);
 				xas_unlock_irq(&xas);
-				page_cache_sync_readahead(mapping, &file->f_ra,
-							  file, index,
+				page_cache_sync_readahead(&rac, &file->f_ra,
 							  end - index);
 				/* drain pagevecs to help isolate_lru_page() */
 				lru_add_drain();
diff --git a/mm/readahead.c b/mm/readahead.c
index 366357e6e845..d8e3e59e4c46 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -633,10 +633,8 @@ static void ondemand_readahead(struct readahead_control *rac,
 
 /**
  * page_cache_sync_readahead - generic file readahead
- * @mapping: address_space which holds the pagecache and I/O vectors
+ * @rac: Readahead control.
  * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @index: Index of first page to be read.
  * @req_count: Total number of pages being read by the caller.
  *
  * page_cache_sync_readahead() should be called when a cache miss happened:
@@ -644,12 +642,10 @@ static void ondemand_readahead(struct readahead_control *rac,
 * pages onto the read request if access patterns suggest it will improve
 * performance.
 */
-void page_cache_sync_readahead(struct address_space *mapping,
-			       struct file_ra_state *ra, struct file *filp,
-			       pgoff_t index, unsigned long req_count)
+void page_cache_sync_readahead(struct readahead_control *rac,
+			       struct file_ra_state *ra,
+			       unsigned long req_count)
 {
-	DEFINE_READAHEAD(rac, filp, mapping, index);
-
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -658,23 +654,21 @@ void page_cache_sync_readahead(struct address_space *mapping,
 		return;
 
 	/* be dumb */
-	if (filp && (filp->f_mode & FMODE_RANDOM)) {
-		force_page_cache_readahead(&rac, req_count);
+	if (rac->file && (rac->file->f_mode & FMODE_RANDOM)) {
+		force_page_cache_readahead(rac, req_count);
 		return;
 	}
 
 	/* do read-ahead */
-	ondemand_readahead(&rac, ra, NULL, req_count);
+	ondemand_readahead(rac, ra, NULL, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
 
 /**
  * page_cache_async_readahead - file readahead for marked pages
- * @mapping: address_space which holds the pagecache and I/O vectors
+ * @rac: Readahead control.
  * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
  * @page: The page at @index which triggered the readahead call.
- * @index: Index of first page to be read.
 * @req_count: Total number of pages being read by the caller.
 *
 * page_cache_async_readahead() should be called when a page is used which
@@ -683,13 +677,11 @@ EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
 * more pages.
 */
 void
-page_cache_async_readahead(struct address_space *mapping,
-			   struct file_ra_state *ra, struct file *filp,
-			   struct page *page, pgoff_t index,
+page_cache_async_readahead(struct readahead_control *rac,
+			   struct file_ra_state *ra,
+			   struct page *page,
 			   unsigned long req_count)
 {
-	DEFINE_READAHEAD(rac, filp, mapping, index);
-
 	/* No Read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -705,14 +697,14 @@ page_cache_async_readahead(struct address_space *mapping,
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
 	 */
-	if (inode_read_congested(mapping->host))
+	if (inode_read_congested(rac->mapping->host))
 		return;
 
 	if (blk_cgroup_congested())
 		return;
 
 	/* do read-ahead */
-	ondemand_readahead(&rac, ra, page, req_count);
+	ondemand_readahead(rac, ra, page, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_readahead);
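The conversion pattern for the filesystem callers above is mechanical; a
composite sketch (identifiers borrowed from the btrfs hunks, abridged and
not compilable on its own):

/* Before: everything passed loose on each call. */
page_cache_sync_readahead(inode->i_mapping, ra, file, index, nr_pages);

/* After: declare the rac once per function... */
DEFINE_READAHEAD(rac, file, inode->i_mapping, 0);

/* ...then re-aim it before each call by setting _index. */
rac._index = index;
page_cache_sync_readahead(&rac, ra, nr_pages);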
From patchwork Wed Sep 2 15:44:53 2020
Subject: [RFC PATCH 5/6] mm: Fold ra_submit() into do_sync_mmap_readahead() [ver #2]
From: David Howells <dhowells@redhat.com>
To: willy@infradead.org
Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Sep 2020 16:44:53 +0100
Message-ID: <159906149326.663183.12774034343203621496.stgit@warthog.procyon.org.uk>
In-Reply-To: <159906145700.663183.3678164182141075453.stgit@warthog.procyon.org.uk>

Fold ra_submit() into its last remaining user and pass the previously added
readahead_control struct down into __do_page_cache_readahead().

Signed-off-by: David Howells <dhowells@redhat.com>
---
 mm/filemap.c  |  6 +++---
 mm/internal.h | 10 ----------
 2 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index fdfeedd1eb71..eaa046fdc0b6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2500,10 +2500,10 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	 * mmap read-around
 	 */
 	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
-	ra->size = ra->ra_pages;
+	ra->start = rac._index = max_t(long, 0, offset - ra->ra_pages / 2);
+	ra->size = ra->ra_pages;
 	ra->async_size = ra->ra_pages / 4;
-	ra_submit(ra, mapping, file);
+	__do_page_cache_readahead(&rac, ra->size, ra->async_size);
 	return fpin;
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index c8ccf208f524..d62df5559500 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -53,16 +53,6 @@ void force_page_cache_readahead(struct readahead_control *, unsigned long);
 void __do_page_cache_readahead(struct readahead_control *,
 		unsigned long nr_to_read, unsigned long lookahead_size);
 
-/*
- * Submit IO for the read-ahead request in file_ra_state.
- */
-static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *file)
-{
-	DEFINE_READAHEAD(rac, file, mapping, ra->start);
-	__do_page_cache_readahead(&rac, ra->size, ra->async_size);
-}
-
 /**
  * page_evictable - test whether a page is evictable
  * @page: the page to test
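The fold is a straight substitution of ra_submit()'s two-line body, reusing
the rac that do_sync_mmap_readahead() already declares; schematically:

/* Before: the helper rebuilt a rac from file_ra_state on every call. */
ra_submit(ra, mapping, file);

/* After: the caller's existing rac is aimed at ra->start and used directly. */
ra->start = rac._index = max_t(long, 0, offset - ra->ra_pages / 2);
__do_page_cache_readahead(&rac, ra->size, ra->async_size);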
From patchwork Wed Sep 2 15:45:00 2020
Subject: [RFC PATCH 6/6] mm: Pass a file_ra_state struct into force_page_cache_readahead() [ver #2]
From: David Howells <dhowells@redhat.com>
To: willy@infradead.org
Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Sep 2020 16:45:00 +0100
Message-ID: <159906150036.663183.11566577279669811013.stgit@warthog.procyon.org.uk>
In-Reply-To: <159906145700.663183.3678164182141075453.stgit@warthog.procyon.org.uk>

Pass a file_ra_state struct into force_page_cache_readahead().  One caller
(page_cache_sync_readahead()) has its own file_ra_state that needs to be
passed down; the other (generic_fadvise()) doesn't, so it passes in the
f_ra of the file being read ahead.

Signed-off-by: David Howells <dhowells@redhat.com>
---
 mm/fadvise.c   | 3 ++-
 mm/internal.h  | 3 ++-
 mm/readahead.c | 5 ++---
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index 997f7c16690a..e1b09975caaa 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -106,7 +106,8 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 
 		{
 			DEFINE_READAHEAD(rac, file, mapping, start_index);
-			force_page_cache_readahead(&rac, nrpages);
+			force_page_cache_readahead(&rac, &rac.file->f_ra,
+						   nrpages);
 		}
 		break;
 	case POSIX_FADV_NOREUSE:
diff --git a/mm/internal.h b/mm/internal.h
index d62df5559500..ff7b549f6a9d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,8 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-void force_page_cache_readahead(struct readahead_control *, unsigned long);
+void force_page_cache_readahead(struct readahead_control *, struct file_ra_state *,
+		unsigned long);
 void __do_page_cache_readahead(struct readahead_control *,
 		unsigned long nr_to_read, unsigned long lookahead_size);
 
diff --git a/mm/readahead.c b/mm/readahead.c
index d8e3e59e4c46..3f3ce65afc64 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -272,11 +272,10 @@ void __do_page_cache_readahead(struct readahead_control *rac,
  * memory at once.
  */
 void force_page_cache_readahead(struct readahead_control *rac,
-		unsigned long nr_to_read)
+		struct file_ra_state *ra, unsigned long nr_to_read)
 {
 	struct address_space *mapping = rac->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &rac->file->f_ra;
 	unsigned long max_pages, index;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -655,7 +654,7 @@ void page_cache_sync_readahead(struct readahead_control *rac,
 
 	/* be dumb */
 	if (rac->file && (rac->file->f_mode & FMODE_RANDOM)) {
-		force_page_cache_readahead(rac, req_count);
+		force_page_cache_readahead(rac, ra, req_count);
 		return;
 	}
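After this final patch the two call sites supply the file_ra_state
explicitly (abridged from the hunks above):

/* generic_fadvise(): no readahead state of its own, so borrow the file's. */
DEFINE_READAHEAD(rac, file, mapping, start_index);
force_page_cache_readahead(&rac, &rac.file->f_ra, nrpages);

/* page_cache_sync_readahead(): forward the file_ra_state it was given. */
force_page_cache_readahead(rac, ra, req_count);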