From patchwork Tue Sep 1 16:28:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 11749087 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AB0FC109A for ; Tue, 1 Sep 2020 16:28:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6D985206EF for ; Tue, 1 Sep 2020 16:28:30 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="NZnOhBI/" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6D985206EF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8EDC26B0098; Tue, 1 Sep 2020 12:28:29 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 89E5B6B009C; Tue, 1 Sep 2020 12:28:29 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7B7146B009E; Tue, 1 Sep 2020 12:28:29 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0206.hostedemail.com [216.40.44.206]) by kanga.kvack.org (Postfix) with ESMTP id 628226B0098 for ; Tue, 1 Sep 2020 12:28:29 -0400 (EDT) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 247E21EFD for ; Tue, 1 Sep 2020 16:28:29 +0000 (UTC) X-FDA: 77215025538.18.ship34_14045a427099 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin18.hostedemail.com (Postfix) with ESMTP id DCDDA100EDBC5 for ; Tue, 1 Sep 2020 16:28:28 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,dhowells@redhat.com,,RULES_HIT:30034:30054:30064,0,RBL:207.211.31.81:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100;04yg7nxrce3hkbw6xn8ijrd33rqtmypdpqayo8zw37b6zf5n1mrcjxm8zk5bzf4.ekkd6hmhobgrjfrnwjw4tr6ru48up3rupro416azwfxpa9cupbbeti1hycaqdi5.6-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:26,LUA_SUMMARY:none X-HE-Tag: ship34_14045a427099 X-Filterd-Recvd-Size: 3527 Received: from us-smtp-delivery-1.mimecast.com (us-smtp-2.mimecast.com [207.211.31.81]) by imf02.hostedemail.com (Postfix) with ESMTP for ; Tue, 1 Sep 2020 16:28:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1598977707; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=aTnz/AaJx61OeSfsF40kwfwZQcF1Wav17QC/9an/tGk=; b=NZnOhBI/ZnlExqDujBTv5XYWHW6OnVxiZcV/acCR1ZlitHgI3uNrKMa/nnOm5BYRIDFc0E TRj6S3AnMmN2y5oMaXmCXPJixSB69MMHkZRARWGlpf2RV+0bLW6hE9Mu9L7dDA1zccUH2X b0vROx7Q7GTttai/53Hjc2gQ4V/GBGk= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-293-mAcqF4gmO_e3Y975aLiNFQ-1; Tue, 01 Sep 2020 12:28:26 -0400 X-MC-Unique: 
mAcqF4gmO_e3Y975aLiNFQ-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B41578014D9; Tue, 1 Sep 2020 16:28:24 +0000 (UTC) Received: from warthog.procyon.org.uk (ovpn-113-231.rdu2.redhat.com [10.10.113.231]) by smtp.corp.redhat.com (Postfix) with ESMTP id 456CE5C1A3; Tue, 1 Sep 2020 16:28:23 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [RFC PATCH 1/7] Fix khugepaged's request size in collapse_file() From: David Howells To: willy@infradead.org Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Tue, 01 Sep 2020 17:28:22 +0100 Message-ID: <159897770245.405783.16506873187032379873.stgit@warthog.procyon.org.uk> In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> User-Agent: StGit/0.23 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-Rspamd-Queue-Id: DCDDA100EDBC5 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam02 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: collapse_file() in khugepaged passes PAGE_SIZE as the number of pages to be read ahead to page_cache_sync_readahead(). It seems this was expressed as a number of bytes rather than a number of pages. Fix it to use the number of pages to the end of the window instead. 
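For reference, the req_count argument of page_cache_sync_readahead() is a count of pages, not bytes, as its pre-series declaration shows. A minimal sketch of the intended request, assuming the common x86-64 configuration (4 KiB pages and 2 MiB PMD-sized THPs, i.e. HPAGE_PMD_NR == 512):

	/*
	 * Sketch only, not part of the patch.  collapse_file() operates on a
	 * window of at most HPAGE_PMD_NR pages ending at 'end', so the
	 * readahead request should cover the pages remaining up to 'end'.
	 */
	void page_cache_sync_readahead(struct address_space *mapping,
				       struct file_ra_state *ra, struct file *file,
				       pgoff_t index, unsigned long req_count);

	page_cache_sync_readahead(mapping, &file->f_ra, file, index,
				  end - index);	/* <= HPAGE_PMD_NR pages */

With those defaults the old call requested PAGE_SIZE == 4096 pages (16 MiB) of readahead for a window that is at most 2 MiB; end - index caps the request at the unprocessed remainder of the window.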
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS") Signed-off-by: David Howells cc: Matthew Wilcox cc: Song Liu Acked-by: Song Liu --- mm/khugepaged.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 6d199c353281..f2d243077b74 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1706,7 +1706,7 @@ static void collapse_file(struct mm_struct *mm, xas_unlock_irq(&xas); page_cache_sync_readahead(mapping, &file->f_ra, file, index, - PAGE_SIZE); + end - index); /* drain pagevecs to help isolate_lru_page() */ lru_add_drain(); page = find_lock_page(mapping, index); From patchwork Tue Sep 1 16:28:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 11749091 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 39793109A for ; Tue, 1 Sep 2020 16:28:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F0D4F2100A for ; Tue, 1 Sep 2020 16:28:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="DD9ihPBH" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F0D4F2100A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2D41C6B009C; Tue, 1 Sep 2020 12:28:37 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 2846F6B009F; Tue, 1 Sep 2020 12:28:37 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 199696B00A0; Tue, 1 Sep 2020 12:28:37 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0078.hostedemail.com [216.40.44.78]) by kanga.kvack.org (Postfix) with ESMTP id 057F96B009C for ; Tue, 1 Sep 2020 12:28:37 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id BF7613623 for ; Tue, 1 Sep 2020 16:28:36 +0000 (UTC) X-FDA: 77215025832.19.print66_060150727099 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin19.hostedemail.com (Postfix) with ESMTP id 8D2811AD1B1 for ; Tue, 1 Sep 2020 16:28:36 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,dhowells@redhat.com,,RULES_HIT:30036:30045:30054,0,RBL:205.139.110.120:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100;04y8yceqyjyd8y6ib3kfzxyzymuajop1pgfo1kdbh3f4mm3dfkei7tpnwax8e5s.uk4qt31hrax4j9a13gty1n6bdou9fzjx9zopi9jjo5nwdx4zjn5nun7cpzp475m.e-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: print66_060150727099 X-Filterd-Recvd-Size: 6388 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [205.139.110.120]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Tue, 1 Sep 2020 16:28:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1598977715; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ezAmTShFf1lpnkxSzlatynP4JebRzAzt5S9sCe1UvW4=; b=DD9ihPBHb/8qwx1nB1dQZG2tXdZwkdxndt0ss9mlT0zqZF9Xyw2Dxe10vltL9MJCtniZPl edZovVhTQD3r6Oqc1iETVUgUTjiWWodopptaur1urdBayHwkOquyAAjkTRaJBo8QyaWQjx gAYIZxVE4aNdDe4/ByyTeqc3i0qCSNs= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-170-TaUdUYt9NE6gk3tKY-XS6A-1; Tue, 01 Sep 2020 12:28:33 -0400 X-MC-Unique: TaUdUYt9NE6gk3tKY-XS6A-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 84DAB807333; Tue, 1 Sep 2020 16:28:32 +0000 (UTC) Received: from warthog.procyon.org.uk (ovpn-113-231.rdu2.redhat.com [10.10.113.231]) by smtp.corp.redhat.com (Postfix) with ESMTP id B56587EB7D; Tue, 1 Sep 2020 16:28:30 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [RFC PATCH 2/7] mm: Make ondemand_readahead() take a readahead_control struct From: David Howells To: willy@infradead.org Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Tue, 01 Sep 2020 17:28:29 +0100 Message-ID: <159897770995.405783.3301406968621486886.stgit@warthog.procyon.org.uk> In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> User-Agent: StGit/0.23 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 X-Rspamd-Queue-Id: 8D2811AD1B1 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make ondemand_readahead() take a readahead_control struct in preparation for making do_sync_mmap_readahead() pass down an RAC struct. Signed-off-by: David Howells --- mm/readahead.c | 35 +++++++++++++++++++++-------------- 1 file changed, 21 insertions(+), 14 deletions(-) diff --git a/mm/readahead.c b/mm/readahead.c index 91859e6e2b7d..0e16fb4809f5 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -511,14 +511,15 @@ static bool page_cache_readahead_order(struct readahead_control *rac, /* * A minimal readahead algorithm for trivial sequential/random reads. 
*/ -static void ondemand_readahead(struct address_space *mapping, - struct file_ra_state *ra, struct file *file, - struct page *page, pgoff_t index, unsigned long req_size) +static void ondemand_readahead(struct readahead_control *rac, + struct file_ra_state *ra, + struct page *page) { - DEFINE_READAHEAD(rac, file, mapping, index); - struct backing_dev_info *bdi = inode_to_bdi(mapping->host); + struct backing_dev_info *bdi = inode_to_bdi(rac->mapping->host); unsigned long max_pages = ra->ra_pages; unsigned long add_pages; + unsigned long req_size = rac->_nr_pages; + unsigned long index = rac->_index; pgoff_t prev_index; /* @@ -556,7 +557,7 @@ static void ondemand_readahead(struct address_space *mapping, pgoff_t start; rcu_read_lock(); - start = page_cache_next_miss(mapping, index + 1, max_pages); + start = page_cache_next_miss(rac->mapping, index + 1, max_pages); rcu_read_unlock(); if (!start || start - index > max_pages) @@ -589,14 +590,14 @@ static void ondemand_readahead(struct address_space *mapping, * Query the page cache and look for the traces(cached history pages) * that a sequential stream would leave behind. */ - if (try_context_readahead(mapping, ra, index, req_size, max_pages)) + if (try_context_readahead(rac->mapping, ra, index, req_size, max_pages)) goto readit; /* * standalone, small random read * Read as is, and do not pollute the readahead state. */ - __do_page_cache_readahead(&rac, req_size, 0); + __do_page_cache_readahead(rac, req_size, 0); return; initial_readahead: @@ -622,10 +623,10 @@ static void ondemand_readahead(struct address_space *mapping, } } - rac._index = ra->start; - if (page && page_cache_readahead_order(&rac, ra, thp_order(page))) + rac->_index = ra->start; + if (page && page_cache_readahead_order(rac, ra, thp_order(page))) return; - __do_page_cache_readahead(&rac, ra->size, ra->async_size); + __do_page_cache_readahead(rac, ra->size, ra->async_size); } /** @@ -645,6 +646,9 @@ void page_cache_sync_readahead(struct address_space *mapping, struct file_ra_state *ra, struct file *filp, pgoff_t index, unsigned long req_count) { + DEFINE_READAHEAD(rac, filp, mapping, index); + rac._nr_pages = req_count; + /* no read-ahead */ if (!ra->ra_pages) return; @@ -659,7 +663,7 @@ void page_cache_sync_readahead(struct address_space *mapping, } /* do read-ahead */ - ondemand_readahead(mapping, ra, filp, NULL, index, req_count); + ondemand_readahead(&rac, ra, NULL); } EXPORT_SYMBOL_GPL(page_cache_sync_readahead); @@ -683,7 +687,10 @@ page_cache_async_readahead(struct address_space *mapping, struct page *page, pgoff_t index, unsigned long req_count) { - /* no read-ahead */ + DEFINE_READAHEAD(rac, filp, mapping, index); + rac._nr_pages = req_count; + + /* No Read-ahead */ if (!ra->ra_pages) return; @@ -705,7 +712,7 @@ page_cache_async_readahead(struct address_space *mapping, return; /* do read-ahead */ - ondemand_readahead(mapping, ra, filp, page, index, req_count); + ondemand_readahead(&rac, ra, page); } EXPORT_SYMBOL_GPL(page_cache_async_readahead); From patchwork Tue Sep 1 16:28:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 11749095 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 54733109A for ; Tue, 1 Sep 2020 16:28:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2076B208CA for 
; Tue, 1 Sep 2020 16:28:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="hKY8W0Eo" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2076B208CA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 480276B009F; Tue, 1 Sep 2020 12:28:44 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 457556B00A1; Tue, 1 Sep 2020 12:28:44 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 36D576B00A2; Tue, 1 Sep 2020 12:28:44 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0065.hostedemail.com [216.40.44.65]) by kanga.kvack.org (Postfix) with ESMTP id 230D16B009F for ; Tue, 1 Sep 2020 12:28:44 -0400 (EDT) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id DB8DE3623 for ; Tue, 1 Sep 2020 16:28:43 +0000 (UTC) X-FDA: 77215026126.11.leg50_1c08d9227099 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin11.hostedemail.com (Postfix) with ESMTP id 70068180F8B86 for ; Tue, 1 Sep 2020 16:28:43 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,dhowells@redhat.com,,RULES_HIT:30036:30054:30074,0,RBL:216.205.24.124:@redhat.com:.lbl8.mailshell.net-62.18.0.100 64.10.201.10;04ygfwrjbfn7ha8rt4pbtifkrtrdyoc8ea8y8w6qk314nfxmprw5isj367aomkf.ndcqoyy4yoswncun7wnrganj7rx5tagg78mhy3q86wrmwszq1rcyi18arjanbuh.w-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: leg50_1c08d9227099 X-Filterd-Recvd-Size: 6067 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [216.205.24.124]) by imf03.hostedemail.com (Postfix) with ESMTP for ; Tue, 1 Sep 2020 16:28:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1598977722; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=QThCaj3yKRlrpSDZKjm2r3cHMRA+tuJqvE+bEULa31k=; b=hKY8W0EojegLn2OO9CxiZVGBfQUsF634ej5t1osrDKh4+Iznl0EMYox+mG7RH2KpUsnxma 6Rzu22WXrr5onKPyhA07P7k518FgpjLM1NO56G2KfYlTzY+L4EpSlGnmEgtOA9WWonYqd5 HRp/W6Antma0wH7zidu0NuV+yKem+gU= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-223--a5lUd7MOXaS6lKak9CuOQ-1; Tue, 01 Sep 2020 12:28:40 -0400 X-MC-Unique: -a5lUd7MOXaS6lKak9CuOQ-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A14051007467; Tue, 1 Sep 2020 16:28:39 +0000 (UTC) Received: from warthog.procyon.org.uk (ovpn-113-231.rdu2.redhat.com [10.10.113.231]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8C40378B40; Tue, 1 Sep 2020 16:28:38 +0000 (UTC) Organization: Red Hat UK Ltd. 
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [RFC PATCH 3/7] mm: Push readahead_control down into force_page_cache_readahead() From: David Howells To: willy@infradead.org Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Tue, 01 Sep 2020 17:28:37 +0100 Message-ID: <159897771776.405783.305183815956274924.stgit@warthog.procyon.org.uk> In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> User-Agent: StGit/0.23 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 X-Rspamd-Queue-Id: 70068180F8B86 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam02 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Push readahead_control down into force_page_cache_readahead() from its callers in preparation for making do_sync_mmap_readahead() pass down an RAC struct. Signed-off-by: David Howells --- mm/fadvise.c | 6 +++++- mm/internal.h | 3 +-- mm/readahead.c | 20 ++++++++++++-------- 3 files changed, 18 insertions(+), 11 deletions(-) diff --git a/mm/fadvise.c b/mm/fadvise.c index 0e66f2aaeea3..b68d2f2959d5 100644 --- a/mm/fadvise.c +++ b/mm/fadvise.c @@ -104,7 +104,11 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice) if (!nrpages) nrpages = ~0UL; - force_page_cache_readahead(mapping, file, start_index, nrpages); + { + DEFINE_READAHEAD(rac, file, mapping, start_index); + rac._nr_pages = nrpages; + force_page_cache_readahead(&rac); + } break; case POSIX_FADV_NOREUSE: break; diff --git a/mm/internal.h b/mm/internal.h index bf2bee6c42a1..2eb9f7f5f134 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -49,8 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb, unsigned long addr, unsigned long end, struct zap_details *details); -void force_page_cache_readahead(struct address_space *, struct file *, - pgoff_t index, unsigned long nr_to_read); +void force_page_cache_readahead(struct readahead_control *); void __do_page_cache_readahead(struct readahead_control *, unsigned long nr_to_read, unsigned long lookahead_size); diff --git a/mm/readahead.c b/mm/readahead.c index 0e16fb4809f5..e557c6d5a183 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -271,13 +271,12 @@ void __do_page_cache_readahead(struct readahead_control *rac, * Chunk the readahead into 2 megabyte units, so that we don't pin too much * memory at once. 
*/ -void force_page_cache_readahead(struct address_space *mapping, - struct file *file, pgoff_t index, unsigned long nr_to_read) +void force_page_cache_readahead(struct readahead_control *rac) { - DEFINE_READAHEAD(rac, file, mapping, index); + struct address_space *mapping = rac->mapping; struct backing_dev_info *bdi = inode_to_bdi(mapping->host); - struct file_ra_state *ra = &file->f_ra; - unsigned long max_pages; + struct file_ra_state *ra = &rac->file->f_ra; + unsigned long max_pages, index, nr_to_read; if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages && !mapping->a_ops->readahead)) @@ -287,14 +286,19 @@ void force_page_cache_readahead(struct address_space *mapping, * If the request exceeds the readahead window, allow the read to * be up to the optimal hardware IO size */ + index = readahead_index(rac); max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages); - nr_to_read = min(nr_to_read, max_pages); + nr_to_read = min_t(unsigned long, readahead_count(rac), max_pages); while (nr_to_read) { unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE; if (this_chunk > nr_to_read) this_chunk = nr_to_read; - __do_page_cache_readahead(&rac, this_chunk, 0); + + rac->_index = index; + rac->_nr_pages = this_chunk; + // Do I need to modify rac->_batch_count? + __do_page_cache_readahead(rac, this_chunk, 0); index += this_chunk; nr_to_read -= this_chunk; @@ -658,7 +662,7 @@ void page_cache_sync_readahead(struct address_space *mapping, /* be dumb */ if (filp && (filp->f_mode & FMODE_RANDOM)) { - force_page_cache_readahead(mapping, filp, index, req_count); + force_page_cache_readahead(&rac); return; } From patchwork Tue Sep 1 16:28:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 11749099 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F04141575 for ; Tue, 1 Sep 2020 16:28:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A203C20BED for ; Tue, 1 Sep 2020 16:28:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="QwHsy9Pj" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A203C20BED Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A73126B00A1; Tue, 1 Sep 2020 12:28:54 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 9FCBB6B00A3; Tue, 1 Sep 2020 12:28:54 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 89CE4900002; Tue, 1 Sep 2020 12:28:54 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0126.hostedemail.com [216.40.44.126]) by kanga.kvack.org (Postfix) with ESMTP id 6A26D6B00A1 for ; Tue, 1 Sep 2020 12:28:54 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 08E301EF2 for ; Tue, 1 Sep 2020 16:28:54 +0000 (UTC) X-FDA: 77215026588.26.wrist39_0f06aef27099 Received: from filter.hostedemail.com 
(10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin26.hostedemail.com (Postfix) with ESMTP id D22AF1804B655 for ; Tue, 1 Sep 2020 16:28:53 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,dhowells@redhat.com,,RULES_HIT:30034:30036:30045:30054:30070:30090,0,RBL:205.139.110.120:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10;04y8757z5bt6yjqtoe5oqk8qx1brtocahr7k1h91s57ah5ykn9ydr1xiaoay9xz.zop1ehn9qctq4pnccm8p9otqux7ikieusa1uayik16cmbmr9z1ghb4rmwqahezj.n-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: wrist39_0f06aef27099 X-Filterd-Recvd-Size: 17799 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [205.139.110.120]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Tue, 1 Sep 2020 16:28:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1598977732; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rma8EEYokNJyj7U2S+lyQcYfQd/6lZ6gn4LjeXLRtak=; b=QwHsy9PjDVqQK86Kt6cwmor/NyrMdX3waMZDCKwgH9DsEDB1I6hAmELF+BvhpMj5ueA+dT HxEtdsRRiQ2nlH+P31PXdZ03njWRQ+IXGGTfBen54HLbwAPlWXvi7hSv9Ox0AvY3BFoWta k255kx0h9m1vZNDBBOSJq67vnE7P+4A= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-502-ON2iMVIQMum_xgurU1Ybdw-1; Tue, 01 Sep 2020 12:28:48 -0400 X-MC-Unique: ON2iMVIQMum_xgurU1Ybdw-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E89211DDF3; Tue, 1 Sep 2020 16:28:46 +0000 (UTC) Received: from warthog.procyon.org.uk (ovpn-113-231.rdu2.redhat.com [10.10.113.231]) by smtp.corp.redhat.com (Postfix) with ESMTP id AF2945C22D; Tue, 1 Sep 2020 16:28:45 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [RFC PATCH 4/7] mm: Pass readahead_control into page_cache_{sync,async}_readahead() From: David Howells To: willy@infradead.org Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Tue, 01 Sep 2020 17:28:44 +0100 Message-ID: <159897772488.405783.17347371323944662006.stgit@warthog.procyon.org.uk> In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> User-Agent: StGit/0.23 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-Rspamd-Queue-Id: D22AF1804B655 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass struct readahead_control into the page_cache_{sync,async}_readahead() functions in preparation for making do_sync_mmap_readahead() pass down an RAC struct. 
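The new calling convention is visible in the hunks below: each caller builds a readahead_control on the stack with DEFINE_READAHEAD(), fills in the request size, and passes the struct to the readahead entry points. As a rough sketch, with the struct layout and macro quoted from approximately the v5.9 include/linux/pagemap.h this series appears to be based on (verify against the exact tree):

	/* Roughly what readahead_control looks like at this point. */
	struct readahead_control {
		struct file *file;
		struct address_space *mapping;
	/* private: use the readahead_* accessors instead */
		pgoff_t _index;
		unsigned int _nr_pages;
		unsigned int _batch_count;
	};

	#define DEFINE_READAHEAD(rac, f, m, i)				\
		struct readahead_control rac = {			\
			.file = f,					\
			.mapping = m,					\
			._index = i,					\
		}

	/* Caller pattern introduced by this patch (cf. the khugepaged hunk): */
	DEFINE_READAHEAD(rac, file, mapping, index);
	rac._nr_pages = end - index;
	page_cache_sync_readahead(&rac, &file->f_ra);

One side effect worth noting is that callers now write the nominally private _index and _nr_pages fields directly instead of passing index and req_count as arguments.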
Signed-off-by: David Howells --- fs/btrfs/free-space-cache.c | 7 ++++--- fs/btrfs/ioctl.c | 10 +++++++--- fs/btrfs/relocation.c | 14 ++++++++------ fs/btrfs/send.c | 15 +++++++++------ fs/ext4/dir.c | 12 +++++++----- fs/f2fs/dir.c | 10 +++++++--- include/linux/pagemap.h | 8 +++----- mm/filemap.c | 27 +++++++++++++++------------ mm/khugepaged.c | 6 +++--- mm/readahead.c | 40 ++++++++++++---------------------------- 10 files changed, 75 insertions(+), 74 deletions(-) diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index dc82fd0c80cb..0ca9361acf30 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -286,16 +286,17 @@ int btrfs_truncate_free_space_cache(struct btrfs_trans_handle *trans, static void readahead_cache(struct inode *inode) { struct file_ra_state *ra; - unsigned long last_index; + + DEFINE_READAHEAD(rac, NULL, inode->i_mapping, 0); ra = kzalloc(sizeof(*ra), GFP_NOFS); if (!ra) return; file_ra_state_init(ra, inode->i_mapping); - last_index = (i_size_read(inode) - 1) >> PAGE_SHIFT; + rac._nr_pages = (i_size_read(inode) - 1) >> PAGE_SHIFT; - page_cache_sync_readahead(inode->i_mapping, ra, NULL, 0, last_index); + page_cache_sync_readahead(&rac, ra); kfree(ra); } diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index bd3511c5ca81..5025a6a800e9 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -1428,6 +1428,8 @@ int btrfs_defrag_file(struct inode *inode, struct file *file, struct page **pages = NULL; bool do_compress = range->flags & BTRFS_DEFRAG_RANGE_COMPRESS; + DEFINE_READAHEAD(rac, file, inode->i_mapping, 0); + if (isize == 0) return 0; @@ -1534,9 +1536,11 @@ int btrfs_defrag_file(struct inode *inode, struct file *file, if (i + cluster > ra_index) { ra_index = max(i, ra_index); - if (ra) - page_cache_sync_readahead(inode->i_mapping, ra, - file, ra_index, cluster); + if (ra) { + rac._index = ra_index; + rac._nr_pages = cluster; + page_cache_sync_readahead(&rac, ra); + } ra_index += cluster; } diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index 4ba1ab9cc76d..1979803fd475 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -2684,6 +2684,8 @@ static int relocate_file_extent_cluster(struct inode *inode, int nr = 0; int ret = 0; + DEFINE_READAHEAD(rac, NULL, inode->i_mapping, 0); + if (!cluster->nr) return 0; @@ -2712,9 +2714,9 @@ static int relocate_file_extent_cluster(struct inode *inode, page = find_lock_page(inode->i_mapping, index); if (!page) { - page_cache_sync_readahead(inode->i_mapping, - ra, NULL, index, - last_index + 1 - index); + rac._index = index; + rac._nr_pages = last_index + 1 - index; + page_cache_sync_readahead(&rac, ra); page = find_or_create_page(inode->i_mapping, index, mask); if (!page) { @@ -2728,9 +2730,9 @@ static int relocate_file_extent_cluster(struct inode *inode, } if (PageReadahead(page)) { - page_cache_async_readahead(inode->i_mapping, - ra, NULL, page, index, - last_index + 1 - index); + rac._index = index; + rac._nr_pages = last_index + 1 - index; + page_cache_async_readahead(&rac, ra, page); } if (!PageUptodate(page)) { diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index d9813a5b075a..ee0a9a2b5d08 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -4811,6 +4811,8 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len) unsigned pg_offset = offset_in_page(offset); ssize_t ret = 0; + DEFINE_READAHEAD(rac, NULL, NULL, 0); + inode = btrfs_iget(fs_info->sb, sctx->cur_ino, root); if (IS_ERR(inode)) return PTR_ERR(inode); @@ -4829,15 +4831,18 @@ static 
ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len) /* initial readahead */ memset(&sctx->ra, 0, sizeof(struct file_ra_state)); file_ra_state_init(&sctx->ra, inode->i_mapping); + rac.mapping = inode->i_mapping; while (index <= last_index) { unsigned cur_len = min_t(unsigned, len, PAGE_SIZE - pg_offset); + rac._index = index; + rac._nr_pages = last_index + 1 - index; + page = find_lock_page(inode->i_mapping, index); if (!page) { - page_cache_sync_readahead(inode->i_mapping, &sctx->ra, - NULL, index, last_index + 1 - index); + page_cache_sync_readahead(&rac, &sctx->ra); page = find_or_create_page(inode->i_mapping, index, GFP_KERNEL); @@ -4847,10 +4852,8 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len) } } - if (PageReadahead(page)) { - page_cache_async_readahead(inode->i_mapping, &sctx->ra, - NULL, page, index, last_index + 1 - index); - } + if (PageReadahead(page)) + page_cache_async_readahead(&rac, &sctx->ra, page); if (!PageUptodate(page)) { btrfs_readpage(NULL, page); diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c index 1d82336b1cd4..6205c6830454 100644 --- a/fs/ext4/dir.c +++ b/fs/ext4/dir.c @@ -118,6 +118,8 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) struct buffer_head *bh = NULL; struct fscrypt_str fstr = FSTR_INIT(NULL, 0); + DEFINE_READAHEAD(rac, file, sb->s_bdev->bd_inode->i_mapping, 0); + if (IS_ENCRYPTED(inode)) { err = fscrypt_get_encryption_info(inode); if (err) @@ -176,11 +178,11 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) if (err > 0) { pgoff_t index = map.m_pblk >> (PAGE_SHIFT - inode->i_blkbits); - if (!ra_has_index(&file->f_ra, index)) - page_cache_sync_readahead( - sb->s_bdev->bd_inode->i_mapping, - &file->f_ra, file, - index, 1); + if (!ra_has_index(&file->f_ra, index)) { + rac._index = index; + rac._nr_pages = 1; + page_cache_sync_readahead(&rac, &file->f_ra); + } file->f_ra.prev_pos = (loff_t)index << PAGE_SHIFT; bh = ext4_bread(NULL, inode, map.m_lblk, 0); if (IS_ERR(bh)) { diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c index 069f498af1e3..982f6d37454a 100644 --- a/fs/f2fs/dir.c +++ b/fs/f2fs/dir.c @@ -1027,6 +1027,8 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) struct fscrypt_str fstr = FSTR_INIT(NULL, 0); int err = 0; + DEFINE_READAHEAD(rac, file, inode->i_mapping, 0); + if (IS_ENCRYPTED(inode)) { err = fscrypt_get_encryption_info(inode); if (err) @@ -1052,9 +1054,11 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) cond_resched(); /* readahead for multi pages of dir */ - if (npages - n > 1 && !ra_has_index(ra, n)) - page_cache_sync_readahead(inode->i_mapping, ra, file, n, - min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES)); + if (npages - n > 1 && !ra_has_index(ra, n)) { + rac._index = n; + rac._nr_pages = min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES); + page_cache_sync_readahead(&rac, ra); + } dentry_page = f2fs_find_data_page(inode, n); if (IS_ERR(dentry_page)) { diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 8bf048a76c43..cd7bde29d4cc 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -769,11 +769,9 @@ void delete_from_page_cache_batch(struct address_space *mapping, #define VM_READAHEAD_PAGES (SZ_128K / PAGE_SIZE) -void page_cache_sync_readahead(struct address_space *, struct file_ra_state *, - struct file *, pgoff_t index, unsigned long req_count); -void page_cache_async_readahead(struct address_space *, struct file_ra_state *, - struct file *, struct page *, pgoff_t index, - unsigned long 
req_count); +void page_cache_sync_readahead(struct readahead_control *, struct file_ra_state *); +void page_cache_async_readahead(struct readahead_control *, struct file_ra_state *, + struct page *); void page_cache_readahead_unbounded(struct readahead_control *, unsigned long nr_to_read, unsigned long lookahead_count); diff --git a/mm/filemap.c b/mm/filemap.c index 82b97cf4306c..9f2f99db7318 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2070,6 +2070,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb, unsigned int prev_offset; int error = 0; + DEFINE_READAHEAD(rac, filp, mapping, 0); + if (unlikely(*ppos >= inode->i_sb->s_maxbytes)) return 0; iov_iter_truncate(iter, inode->i_sb->s_maxbytes); @@ -2097,9 +2099,9 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb, if (!page) { if (iocb->ki_flags & IOCB_NOIO) goto would_block; - page_cache_sync_readahead(mapping, - ra, filp, - index, last_index - index); + rac._index = index; + rac._nr_pages = last_index - index; + page_cache_sync_readahead(&rac, ra); page = find_get_page(mapping, index); if (unlikely(page == NULL)) goto no_cached_page; @@ -2109,9 +2111,9 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb, put_page(page); goto out; } - page_cache_async_readahead(mapping, - ra, filp, thp_head(page), - index, last_index - index); + rac._index = index; + rac._nr_pages = last_index - index; + page_cache_async_readahead(&rac, ra, thp_head(page)); } if (!PageUptodate(page)) { /* @@ -2469,6 +2471,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf) pgoff_t offset = vmf->pgoff; unsigned int mmap_miss; + DEFINE_READAHEAD(rac, file, mapping, offset); + /* If we don't want any read-ahead, don't bother */ if (vmf->vma->vm_flags & VM_RAND_READ) return fpin; @@ -2477,8 +2481,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf) if (vmf->vma->vm_flags & VM_SEQ_READ) { fpin = maybe_unlock_mmap_for_io(vmf, fpin); - page_cache_sync_readahead(mapping, ra, file, offset, - ra->ra_pages); + rac._nr_pages = ra->ra_pages; + page_cache_sync_readahead(&rac, ra); return fpin; } @@ -2515,10 +2519,10 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf, { struct file *file = vmf->vma->vm_file; struct file_ra_state *ra = &file->f_ra; - struct address_space *mapping = file->f_mapping; struct file *fpin = NULL; unsigned int mmap_miss; - pgoff_t offset = vmf->pgoff; + + DEFINE_READAHEAD(rac, file, file->f_mapping, vmf->pgoff); /* If we don't want any read-ahead, don't bother */ if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages) @@ -2528,8 +2532,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf, WRITE_ONCE(ra->mmap_miss, --mmap_miss); if (PageReadahead(thp_head(page))) { fpin = maybe_unlock_mmap_for_io(vmf, fpin); - page_cache_async_readahead(mapping, ra, file, - thp_head(page), offset, ra->ra_pages); + page_cache_async_readahead(&rac, ra, thp_head(page)); } return fpin; } diff --git a/mm/khugepaged.c b/mm/khugepaged.c index f2d243077b74..0bece7ab0ce7 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1703,10 +1703,10 @@ static void collapse_file(struct mm_struct *mm, } } else { /* !is_shmem */ if (!page || xa_is_value(page)) { + DEFINE_READAHEAD(rac, file, mapping, index); + rac._nr_pages = end - index; xas_unlock_irq(&xas); - page_cache_sync_readahead(mapping, &file->f_ra, - file, index, - end - index); + page_cache_sync_readahead(&rac, &file->f_ra); /* drain pagevecs to help isolate_lru_page() */ lru_add_drain(); page = find_lock_page(mapping, index); diff --git 
a/mm/readahead.c b/mm/readahead.c index e557c6d5a183..7114246b4e41 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -635,24 +635,17 @@ static void ondemand_readahead(struct readahead_control *rac, /** * page_cache_sync_readahead - generic file readahead - * @mapping: address_space which holds the pagecache and I/O vectors + * @rac: Readahead control. * @ra: file_ra_state which holds the readahead state - * @filp: passed on to ->readpage() and ->readpages() - * @index: Index of first page to be read. - * @req_count: Total number of pages being read by the caller. * * page_cache_sync_readahead() should be called when a cache miss happened: * it will submit the read. The readahead logic may decide to piggyback more * pages onto the read request if access patterns suggest it will improve * performance. */ -void page_cache_sync_readahead(struct address_space *mapping, - struct file_ra_state *ra, struct file *filp, - pgoff_t index, unsigned long req_count) +void page_cache_sync_readahead(struct readahead_control *rac, + struct file_ra_state *ra) { - DEFINE_READAHEAD(rac, filp, mapping, index); - rac._nr_pages = req_count; - /* no read-ahead */ if (!ra->ra_pages) return; @@ -661,39 +654,30 @@ void page_cache_sync_readahead(struct address_space *mapping, return; /* be dumb */ - if (filp && (filp->f_mode & FMODE_RANDOM)) { - force_page_cache_readahead(&rac); + if (rac->file && (rac->file->f_mode & FMODE_RANDOM)) { + force_page_cache_readahead(rac); return; } /* do read-ahead */ - ondemand_readahead(&rac, ra, NULL); + ondemand_readahead(rac, ra, NULL); } EXPORT_SYMBOL_GPL(page_cache_sync_readahead); /** * page_cache_async_readahead - file readahead for marked pages - * @mapping: address_space which holds the pagecache and I/O vectors + * @rac: Readahead control. * @ra: file_ra_state which holds the readahead state - * @filp: passed on to ->readpage() and ->readpages() - * @page: The page at @index which triggered the readahead call. - * @index: Index of first page to be read. - * @req_count: Total number of pages being read by the caller. * * page_cache_async_readahead() should be called when a page is used which * is marked as PageReadahead; this is a marker to suggest that the application * has used up enough of the readahead window that we should start pulling in * more pages. */ -void -page_cache_async_readahead(struct address_space *mapping, - struct file_ra_state *ra, struct file *filp, - struct page *page, pgoff_t index, - unsigned long req_count) +void page_cache_async_readahead(struct readahead_control *rac, + struct file_ra_state *ra, + struct page *page) { - DEFINE_READAHEAD(rac, filp, mapping, index); - rac._nr_pages = req_count; - /* No Read-ahead */ if (!ra->ra_pages) return; @@ -709,14 +693,14 @@ page_cache_async_readahead(struct address_space *mapping, /* * Defer asynchronous read-ahead on IO congestion. 
*/ - if (inode_read_congested(mapping->host)) + if (inode_read_congested(rac->mapping->host)) return; if (blk_cgroup_congested()) return; /* do read-ahead */ - ondemand_readahead(&rac, ra, page); + ondemand_readahead(rac, ra, page); } EXPORT_SYMBOL_GPL(page_cache_async_readahead); From patchwork Tue Sep 1 16:28:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 11749105 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 57E0713B6 for ; Tue, 1 Sep 2020 16:29:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 160E9206EF for ; Tue, 1 Sep 2020 16:29:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="CxYHi2eT" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 160E9206EF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2D4596B00A3; Tue, 1 Sep 2020 12:29:00 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 283476B00A5; Tue, 1 Sep 2020 12:29:00 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 14DFA900002; Tue, 1 Sep 2020 12:29:00 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0225.hostedemail.com [216.40.44.225]) by kanga.kvack.org (Postfix) with ESMTP id E81D56B00A3 for ; Tue, 1 Sep 2020 12:28:59 -0400 (EDT) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B1DFD1EF2 for ; Tue, 1 Sep 2020 16:28:59 +0000 (UTC) X-FDA: 77215026798.04.pin25_1d02f4727099 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id 7FE5C800ABC2 for ; Tue, 1 Sep 2020 16:28:59 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,dhowells@redhat.com,,RULES_HIT:30045:30054:30070:30074:30090,0,RBL:207.211.31.120:@redhat.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100;04yf88y7fa83xzecybdb5dks8okuyypag4onn3ynrfmctgaoupdyghdo6rihsjt.9eruwd5fu4kxmy1uk7ky7rw3s17hzech1mt835cacte7njajzd48ku9iducb93g.n-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: pin25_1d02f4727099 X-Filterd-Recvd-Size: 8837 Received: from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com [207.211.31.120]) by imf21.hostedemail.com (Postfix) with ESMTP for ; Tue, 1 Sep 2020 16:28:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1598977738; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=sChPbft+Yjw8rk61mHgyqBrpWiSpcftbFDciQ4tpAS4=; b=CxYHi2eTBLXnRa8fo5mkKqMnKaUeYm1p3DU9DMdZKgXrVsB1XQYU8gNPLoASUVsSqYw8db 
P2rWeayWgHAvKQYYfRpcNbJt4zwdFdJVCkNMoeci6lkDhHmBJgjpm5AW7bBvMe3FZTvCQA t9mzw0n99f8IT822DAzVci1aXWmQwwc= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-463-_kh1rbyIPdafDI3y-0tX2A-1; Tue, 01 Sep 2020 12:28:55 -0400 X-MC-Unique: _kh1rbyIPdafDI3y-0tX2A-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8753110ABDA1; Tue, 1 Sep 2020 16:28:54 +0000 (UTC) Received: from warthog.procyon.org.uk (ovpn-113-231.rdu2.redhat.com [10.10.113.231]) by smtp.corp.redhat.com (Postfix) with ESMTP id 632785D9D3; Tue, 1 Sep 2020 16:28:53 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [RFC PATCH 5/7] mm: Make __do_page_cache_readahead() use rac->_nr_pages From: David Howells To: willy@infradead.org Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Tue, 01 Sep 2020 17:28:52 +0100 Message-ID: <159897773253.405783.7186877407321511610.stgit@warthog.procyon.org.uk> In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> User-Agent: StGit/0.23 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Rspamd-Queue-Id: 7FE5C800ABC2 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam03 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make __do_page_cache_readahead() use rac->_nr_pages rather than passing in nr_to_read argument. 
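In other words, the request size now travels inside the readahead_control rather than alongside it, and __do_page_cache_readahead() recovers it through the accessor. A hedged before/after sketch (accessor definitions quoted from roughly the v5.9 include/linux/pagemap.h):

	/* Accessors for the private fields of struct readahead_control. */
	static inline pgoff_t readahead_index(struct readahead_control *rac)
	{
		return rac->_index;
	}

	static inline unsigned int readahead_count(struct readahead_control *rac)
	{
		return rac->_nr_pages;
	}

	/* Before this patch: the read size is a separate argument. */
	__do_page_cache_readahead(rac, nr_to_read, lookahead_size);

	/* After this patch: the read size is carried in the rac itself. */
	rac->_nr_pages = nr_to_read;
	__do_page_cache_readahead(rac, lookahead_size);

As the readahead.c hunk below shows, page_cache_readahead_unbounded() consequently has to snapshot readahead_count(rac) into a local nr_to_read before zeroing rac->_nr_pages for its own accounting.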
Signed-off-by: David Howells --- fs/ext4/verity.c | 8 +++++--- fs/f2fs/verity.c | 8 +++++--- include/linux/pagemap.h | 3 +-- mm/internal.h | 6 +++--- mm/readahead.c | 20 +++++++++++--------- 5 files changed, 25 insertions(+), 20 deletions(-) diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c index 6fc2dbc87c0b..3d377110e839 100644 --- a/fs/ext4/verity.c +++ b/fs/ext4/verity.c @@ -356,10 +356,12 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode, page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED); if (!page || !PageUptodate(page)) { - if (page) + if (page) { put_page(page); - else if (num_ra_pages > 1) - page_cache_readahead_unbounded(&rac, num_ra_pages, 0); + } else if (num_ra_pages > 1) { + rac._nr_pages = num_ra_pages; + page_cache_readahead_unbounded(&rac, 0); + } page = read_mapping_page(inode->i_mapping, index, NULL); } return page; diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c index 392dd07f4214..8445eed5a1bc 100644 --- a/fs/f2fs/verity.c +++ b/fs/f2fs/verity.c @@ -235,10 +235,12 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode, page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED); if (!page || !PageUptodate(page)) { - if (page) + if (page) { put_page(page); - else if (num_ra_pages > 1) - page_cache_readahead_unbounded(&rac, num_ra_pages, 0); + } else if (num_ra_pages > 1) { + rac._nr_pages = num_ra_pages; + page_cache_readahead_unbounded(&rac, 0); + } page = read_mapping_page(inode->i_mapping, index, NULL); } return page; diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index cd7bde29d4cc..72e9c44d62bb 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -772,8 +772,7 @@ void delete_from_page_cache_batch(struct address_space *mapping, void page_cache_sync_readahead(struct readahead_control *, struct file_ra_state *); void page_cache_async_readahead(struct readahead_control *, struct file_ra_state *, struct page *); -void page_cache_readahead_unbounded(struct readahead_control *, - unsigned long nr_to_read, unsigned long lookahead_count); +void page_cache_readahead_unbounded(struct readahead_control *, unsigned long); /* * Like add_to_page_cache_locked, but used to add newly allocated pages: diff --git a/mm/internal.h b/mm/internal.h index 2eb9f7f5f134..e1d296e76fb0 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -50,8 +50,7 @@ void unmap_page_range(struct mmu_gather *tlb, struct zap_details *details); void force_page_cache_readahead(struct readahead_control *); -void __do_page_cache_readahead(struct readahead_control *, - unsigned long nr_to_read, unsigned long lookahead_size); +void __do_page_cache_readahead(struct readahead_control *, unsigned long); /* * Submit IO for the read-ahead request in file_ra_state. @@ -60,7 +59,8 @@ static inline void ra_submit(struct file_ra_state *ra, struct address_space *mapping, struct file *file) { DEFINE_READAHEAD(rac, file, mapping, ra->start); - __do_page_cache_readahead(&rac, ra->size, ra->async_size); + rac._nr_pages = ra->size; + __do_page_cache_readahead(&rac, ra->async_size); } /** diff --git a/mm/readahead.c b/mm/readahead.c index 7114246b4e41..28ff80304a21 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -172,10 +172,11 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages, * May sleep, but will not reenter filesystem to reclaim memory. 
*/ void page_cache_readahead_unbounded(struct readahead_control *rac, - unsigned long nr_to_read, unsigned long lookahead_size) + unsigned long lookahead_size) { struct address_space *mapping = rac->mapping; unsigned long index = readahead_index(rac); + unsigned long nr_to_read = readahead_count(rac); LIST_HEAD(page_pool); gfp_t gfp_mask = readahead_gfp_mask(mapping); unsigned long i; @@ -195,6 +196,7 @@ void page_cache_readahead_unbounded(struct readahead_control *rac, /* * Preallocate as many pages as we will need. */ + rac->_nr_pages = 0; for (i = 0; i < nr_to_read; i++) { struct page *page = xa_load(&mapping->i_pages, index + i); @@ -247,7 +249,7 @@ EXPORT_SYMBOL_GPL(page_cache_readahead_unbounded); * We really don't want to intermingle reads and writes like that. */ void __do_page_cache_readahead(struct readahead_control *rac, - unsigned long nr_to_read, unsigned long lookahead_size) + unsigned long lookahead_size) { struct inode *inode = rac->mapping->host; unsigned long index = readahead_index(rac); @@ -261,10 +263,10 @@ void __do_page_cache_readahead(struct readahead_control *rac, if (index > end_index) return; /* Don't read past the page containing the last byte of the file */ - if (nr_to_read > end_index - index) - nr_to_read = end_index - index + 1; + if (readahead_count(rac) > end_index - index) + rac->_nr_pages = end_index - index + 1; - page_cache_readahead_unbounded(rac, nr_to_read, lookahead_size); + page_cache_readahead_unbounded(rac, lookahead_size); } /* @@ -297,8 +299,7 @@ void force_page_cache_readahead(struct readahead_control *rac) rac->_index = index; rac->_nr_pages = this_chunk; - // Do I need to modify rac->_batch_count? - __do_page_cache_readahead(rac, this_chunk, 0); + __do_page_cache_readahead(rac, 0); index += this_chunk; nr_to_read -= this_chunk; @@ -601,7 +602,7 @@ static void ondemand_readahead(struct readahead_control *rac, * standalone, small random read * Read as is, and do not pollute the readahead state. 
*/ - __do_page_cache_readahead(rac, req_size, 0); + __do_page_cache_readahead(rac, 0); return; initial_readahead: @@ -630,7 +631,8 @@ static void ondemand_readahead(struct readahead_control *rac, rac->_index = ra->start; if (page && page_cache_readahead_order(rac, ra, thp_order(page))) return; - __do_page_cache_readahead(rac, ra->size, ra->async_size); + rac->_nr_pages = ra->size; + __do_page_cache_readahead(rac, ra->async_size); } /** From patchwork Tue Sep 1 16:28:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 11749107 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4A47B109A for ; Tue, 1 Sep 2020 16:29:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 17332206EF for ; Tue, 1 Sep 2020 16:29:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ERPDUKbi" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 17332206EF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 394616B00A5; Tue, 1 Sep 2020 12:29:06 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 31FB4900002; Tue, 1 Sep 2020 12:29:06 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1E7DD6B00A8; Tue, 1 Sep 2020 12:29:06 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0099.hostedemail.com [216.40.44.99]) by kanga.kvack.org (Postfix) with ESMTP id 032966B00A5 for ; Tue, 1 Sep 2020 12:29:05 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id BBED71EF2 for ; Tue, 1 Sep 2020 16:29:05 +0000 (UTC) X-FDA: 77215027050.02.cord71_2e15dca27099 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin02.hostedemail.com (Postfix) with ESMTP id 9011210097AA0 for ; Tue, 1 Sep 2020 16:29:05 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,dhowells@redhat.com,,RULES_HIT:30036:30051:30054:30070,0,RBL:207.211.31.81:@redhat.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10;04yfq8mutc5soxncqedzwt4oezq8qypueoyf41zcsmwo174dhm3kuokpq8zm6qi.3nnwafzh4w8wyfi7caj44nfmxofbhbtmqq57jkzpm5zut8saue8xpprmxeim3ht.q-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: cord71_2e15dca27099 X-Filterd-Recvd-Size: 4193 Received: from us-smtp-delivery-1.mimecast.com (us-smtp-1.mimecast.com [207.211.31.81]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Tue, 1 Sep 2020 16:29:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1598977744; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=FL08+vWWj3quE65ajNi5b46OkqhFrRGb41bV5AnZLQw=; b=ERPDUKbiPR5FVuNiuVEzVVqisPVM1AXybDncVMqJZcK0eAkuvYXuiHeYK7TtlVDfwbIfI1 IKZgET3CbKO89G2i/oxOZhtFd5MbwurD3t8auJJwMVRtTiCVm5c3X1CJfccGfyfSlzHQTJ gUsggv7cURyLLI6fO6GUL7fBxzh077M= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-405-dCtNfjpfN_W8gxPWXNEBUg-1; Tue, 01 Sep 2020 12:29:02 -0400 X-MC-Unique: dCtNfjpfN_W8gxPWXNEBUg-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9EABB8014D9; Tue, 1 Sep 2020 16:29:01 +0000 (UTC) Received: from warthog.procyon.org.uk (ovpn-113-231.rdu2.redhat.com [10.10.113.231]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9E1D11002D6F; Tue, 1 Sep 2020 16:29:00 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [RFC PATCH 6/7] mm: Fold ra_submit() into do_sync_mmap_readahead() From: David Howells To: willy@infradead.org Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Tue, 01 Sep 2020 17:28:59 +0100 Message-ID: <159897773980.405783.13680099265521545037.stgit@warthog.procyon.org.uk> In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk> User-Agent: StGit/0.23 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 X-Rspamd-Queue-Id: 9011210097AA0 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam04 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Fold ra_submit() into its last remaining user and pass the previously added readahead_control struct down into __do_page_cache_readahead(). Signed-off-by: David Howells --- mm/filemap.c | 6 +++--- mm/internal.h | 11 ----------- 2 files changed, 3 insertions(+), 14 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 9f2f99db7318..c22bb01e8ba6 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2502,10 +2502,10 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf) * mmap read-around */ fpin = maybe_unlock_mmap_for_io(vmf, fpin); - ra->start = max_t(long, 0, offset - ra->ra_pages / 2); - ra->size = ra->ra_pages; + ra->start = rac._index = max_t(long, 0, offset - ra->ra_pages / 2); + ra->size = rac._nr_pages = ra->ra_pages; ra->async_size = ra->ra_pages / 4; - ra_submit(ra, mapping, file); + __do_page_cache_readahead(&rac, ra->async_size); return fpin; } diff --git a/mm/internal.h b/mm/internal.h index e1d296e76fb0..de3b2ce2743a 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -52,17 +52,6 @@ void unmap_page_range(struct mmu_gather *tlb, void force_page_cache_readahead(struct readahead_control *); void __do_page_cache_readahead(struct readahead_control *, unsigned long); -/* - * Submit IO for the read-ahead request in file_ra_state. 
- */
-static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *file)
-{
-	DEFINE_READAHEAD(rac, file, mapping, ra->start);
-	rac._nr_pages = ra->size;
-	__do_page_cache_readahead(&rac, ra->async_size);
-}
-
 /**
  * page_evictable - test whether a page is evictable
  * @page: the page to test
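[Note, not part of the patch: with ra_submit() gone, mmap read-around drives
__do_page_cache_readahead() directly. A condensed sketch of how that path reads
after the fold, assembled from the + lines above; the DEFINE_READAHEAD() initial
index shown is an assumption, since rac._index is overwritten just below.]

    /* Condensed sketch of the folded read-around path (from the hunks above). */
    DEFINE_READAHEAD(rac, file, mapping, offset);   /* assumed initializer */

    fpin = maybe_unlock_mmap_for_io(vmf, fpin);
    ra->start = rac._index = max_t(long, 0, offset - ra->ra_pages / 2);
    ra->size = rac._nr_pages = ra->ra_pages;
    ra->async_size = ra->ra_pages / 4;
    __do_page_cache_readahead(&rac, ra->async_size);
    return fpin;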
From patchwork Tue Sep 1 16:29:06 2020
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 11749113
Subject: [RFC PATCH 7/7] mm: Pass a file_ra_state struct into
 force_page_cache_readahead()
From: David Howells
To: willy@infradead.org
Cc: dhowells@redhat.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Date: Tue, 01 Sep 2020 17:29:06 +0100
Message-ID: <159897774687.405783.6157146299031279302.stgit@warthog.procyon.org.uk>
In-Reply-To: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk>
References: <159897769535.405783.17587409235571100774.stgit@warthog.procyon.org.uk>

Pass a file_ra_state struct into force_page_cache_readahead().  One caller
has a file_ra_state that should be passed in and the other doesn't, so the
former now passes its own in explicitly.

Signed-off-by: David Howells
---
 mm/fadvise.c   | 2 +-
 mm/internal.h  | 2 +-
 mm/readahead.c | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index b68d2f2959d5..2f1550279757 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -107,7 +107,7 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 		{
 			DEFINE_READAHEAD(rac, file, mapping, start_index);
 			rac._nr_pages = nrpages;
-			force_page_cache_readahead(&rac);
+			force_page_cache_readahead(&rac, &rac.file->f_ra);
 		}
 		break;
 	case POSIX_FADV_NOREUSE:
diff --git a/mm/internal.h b/mm/internal.h
index de3b2ce2743a..977ad7d81b1b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-void force_page_cache_readahead(struct readahead_control *);
+void force_page_cache_readahead(struct readahead_control *, struct file_ra_state *);
 void __do_page_cache_readahead(struct readahead_control *, unsigned long);
 
 /**
diff --git a/mm/readahead.c b/mm/readahead.c
index 28ff80304a21..b001720c13aa 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -273,11 +273,11 @@ void __do_page_cache_readahead(struct readahead_control *rac,
  * Chunk the readahead into 2 megabyte units, so that we don't pin too much
  * memory at once.
  */
-void force_page_cache_readahead(struct readahead_control *rac)
+void force_page_cache_readahead(struct readahead_control *rac,
+				struct file_ra_state *ra)
 {
 	struct address_space *mapping = rac->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &rac->file->f_ra;
 	unsigned long max_pages, index, nr_to_read;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -657,7 +657,7 @@ void page_cache_sync_readahead(struct readahead_control *rac,
 
 	/* be dumb */
 	if (rac->file && (rac->file->f_mode & FMODE_RANDOM)) {
-		force_page_cache_readahead(rac);
+		force_page_cache_readahead(rac, ra);
 		return;
 	}
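[Note, not part of the patch: after this change force_page_cache_readahead() no
longer reaches through rac->file to find the readahead state, so each caller
supplies the file_ra_state it wants used. A minimal sketch of the fadvise-style
call, mirroring the hunk above.]

    /* Sketch of the new calling convention (names taken from the fadvise hunk). */
    DEFINE_READAHEAD(rac, file, mapping, start_index);

    rac._nr_pages = nrpages;
    force_page_cache_readahead(&rac, &rac.file->f_ra);

page_cache_sync_readahead(), by contrast, already holds a file_ra_state pointer
of its own and simply forwards it, as the final hunk shows.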