From patchwork Fri Aug 14 09:03:44 2020
X-Patchwork-Submitter: Zhaoyang Huang
X-Patchwork-Id: 11713865
From: Zhaoyang Huang
To: Roman Gushchin, Andrew Morton, Zhaoyang Huang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm: sync ra->ra_pages with bdi->ra_pages
Date: Fri, 14 Aug 2020 17:03:44 +0800
Message-Id: <1597395824-3325-1-git-send-email-zhaoyang.huang@unisoc.com>

Some systems (Android, for example) speed up boot-time reads by temporarily
enlarging the readahead window and then restoring it to the normal value
(usually 128 KB). However, files opened by long-lived system processes during
the boost keep the enlarged per-file window: file->f_ra.ra_pages is copied
from bdi->ra_pages at open() time and never resynchronized afterwards, and
the owning processes have no way of knowing that the backing device's setting
has changed underneath them.

Sync ra->ra_pages with bdi->ra_pages on the read paths. To preserve the
per-file adjustments made on EIO (which shrinks the window) and on
fadvise(..., POSIX_FADV_SEQUENTIAL) (which doubles it), introduce a
seq_read_fact field that records the multiplier for those two cases; the
effective window is computed from it by the RA_PAGES() macro.
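To make the intended bookkeeping concrete, below is a minimal user-space
model of the new state (illustrative only, not kernel code; the struct and
function names and the 512-page/32-page values are hypothetical stand-ins
for bdi->ra_pages during and after an Android-style boot boost):

#include <stdio.h>

/* Model of the two per-file fields this patch maintains. */
struct ra_state_model {
	unsigned int ra_pages;	/* base window, mirrors bdi->ra_pages */
	int seq_read_fact;	/* 1 normal, 2 after POSIX_FADV_SEQUENTIAL,
				 * -1 once an EIO has shrunk the window */
};

/* Effective window, as the patch's RA_PAGES() macro computes it. */
static unsigned int effective_ra_pages(const struct ra_state_model *ra)
{
	return ra->seq_read_fact != -1 ?
	       ra->ra_pages * ra->seq_read_fact : ra->ra_pages;
}

/* Re-sync the base window with the (possibly updated) device default,
 * as ra_pages_sync() does on the read paths; a window shrunk by EIO is
 * deliberately left alone. */
static void sync_with_bdi(struct ra_state_model *ra, unsigned int bdi_ra_pages)
{
	if (ra->seq_read_fact == -1)
		return;
	ra->ra_pages = bdi_ra_pages;
}

int main(void)
{
	/* File opened while the boot boost held bdi->ra_pages at 512 pages
	 * (2 MB) and POSIX_FADV_SEQUENTIAL was applied (factor 2). */
	struct ra_state_model ra = { .ra_pages = 512, .seq_read_fact = 2 };

	printf("during boost: %u pages\n", effective_ra_pages(&ra));

	/* Init later restores read_ahead_kb to 128 KB (32 pages); the next
	 * read re-syncs instead of keeping the stale 512-page base. */
	sync_with_bdi(&ra, 32);
	printf("after sync:   %u pages\n", effective_ra_pages(&ra));
	return 0;
}

Without the sync step, the model would keep reporting a 1024-page window
after the boost ends; that stale window is what this patch clears up, while
the factor of 2 (or the -1 EIO marker) survives the sync.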
Signed-off-by: Zhaoyang Huang
---
 include/linux/fs.h | 17 +++++++++++++++++
 mm/fadvise.c       |  4 +++-
 mm/filemap.c       | 19 +++++++++++++------
 mm/readahead.c     | 38 ++++++++++++++++++++++++++++++++++----
 4 files changed, 67 insertions(+), 11 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index dd28e76..e3cdc5a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -66,6 +66,7 @@ struct fscrypt_operations;
 struct fs_context;
 struct fs_parameter_description;
+struct file_ra_state;
 
 extern void __init inode_init(void);
 extern void __init inode_init_early(void);
@@ -81,6 +82,7 @@ extern int sysctl_protected_hardlinks;
 extern int sysctl_protected_fifos;
 extern int sysctl_protected_regular;
 
+extern void ra_pages_sync(struct file_ra_state *ra, struct address_space *mapping);
 typedef __kernel_rwf_t rwf_t;
@@ -900,11 +902,26 @@ struct file_ra_state {
 					   there are only # of pages ahead */
 
 	unsigned int ra_pages;		/* Maximum readahead window */
+	int seq_read_fact;		/* turbo factor of sequential read */
 	unsigned int mmap_miss;		/* Cache miss stat for mmap accesses */
 	loff_t prev_pos;		/* Cache last read() position */
 };
 
 /*
+ * ra->seq_read_fact == -1 indicates eio happens
+ */
+#define RA_PAGES(ra)						\
+({								\
+	unsigned int ra_pages;					\
+	if (ra->seq_read_fact != -1)				\
+		ra_pages = ra->ra_pages * ra->seq_read_fact;	\
+	else							\
+		ra_pages = ra->ra_pages;			\
+	ra_pages;						\
+})
+
+
+/*
  * Check if @index falls in the readahead windows.
  */
 static inline int ra_has_index(struct file_ra_state *ra, pgoff_t index)
diff --git a/mm/fadvise.c b/mm/fadvise.c
index 467bcd0..b06e3ca 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -78,6 +78,7 @@ static int generic_fadvise(struct file *file, loff_t offset, loff_t len,
 	switch (advice) {
 	case POSIX_FADV_NORMAL:
 		file->f_ra.ra_pages = bdi->ra_pages;
+		file->f_ra.seq_read_fact = 1;
 		spin_lock(&file->f_lock);
 		file->f_mode &= ~FMODE_RANDOM;
 		spin_unlock(&file->f_lock);
@@ -88,7 +89,8 @@ static int generic_fadvise(struct file *file, loff_t offset, loff_t len,
 		spin_unlock(&file->f_lock);
 		break;
 	case POSIX_FADV_SEQUENTIAL:
-		file->f_ra.ra_pages = bdi->ra_pages * 2;
+		file->f_ra.ra_pages = bdi->ra_pages;
+		file->f_ra.seq_read_fact = 2;
 		spin_lock(&file->f_lock);
 		file->f_mode &= ~FMODE_RANDOM;
 		spin_unlock(&file->f_lock);
diff --git a/mm/filemap.c b/mm/filemap.c
index d78f577..425d2a2 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2048,6 +2048,7 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
 static void shrink_readahead_size_eio(struct file *filp,
 					struct file_ra_state *ra)
 {
+	ra->seq_read_fact = -1;
 	ra->ra_pages /= 4;
 }
 
@@ -2473,13 +2474,16 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	/* If we don't want any read-ahead, don't bother */
 	if (vmf->vma->vm_flags & VM_RAND_READ)
 		return fpin;
-	if (!ra->ra_pages)
+	if (!RA_PAGES(ra))
 		return fpin;
 
+	/* sync ra->ra_pages with bdi->ra_pages */
+	ra_pages_sync(ra, mapping);
+
 	if (vmf->vma->vm_flags & VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		page_cache_sync_readahead(mapping, ra, file, offset,
-					  ra->ra_pages);
+					  RA_PAGES(ra));
 		return fpin;
 	}
@@ -2498,9 +2502,9 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	 * mmap read-around
 	 */
 	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
-	ra->size = ra->ra_pages;
-	ra->async_size = ra->ra_pages / 4;
+	ra->start = max_t(long, 0, offset - RA_PAGES(ra) / 2);
+	ra->size = RA_PAGES(ra);
+	ra->async_size = RA_PAGES(ra) / 4;
 	ra_submit(ra, mapping, file);
 	return fpin;
 }
 
@@ -2519,6 +2523,9 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 	struct file *fpin = NULL;
 	pgoff_t offset = vmf->pgoff;
 
+	/* sync ra->ra_pages with bdi->ra_pages */
+	ra_pages_sync(ra, mapping);
+
 	/* If we don't want any read-ahead, don't bother */
 	if (vmf->vma->vm_flags & VM_RAND_READ)
 		return fpin;
@@ -2527,7 +2534,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 	if (PageReadahead(page)) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 		page_cache_async_readahead(mapping, ra, file,
-					   page, offset, ra->ra_pages);
+					   page, offset, RA_PAGES(ra));
 	}
 	return fpin;
 }
diff --git a/mm/readahead.c b/mm/readahead.c
index a459365..e994c5a 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -32,10 +32,27 @@
 file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping)
 {
 	ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
+	ra->seq_read_fact = 1;
 	ra->prev_pos = -1;
 }
 EXPORT_SYMBOL_GPL(file_ra_state_init);
 
+/* sync ra->ra_pages with bdi->ra_pages */
+void ra_pages_sync(struct file_ra_state *ra,
+		   struct address_space *mapping)
+{
+	unsigned int ra_pages = ra->ra_pages;
+	if (ra->seq_read_fact == -1)
+		return;
+
+	ra_pages = inode_to_bdi(mapping->host)->ra_pages * ra->seq_read_fact;
+	if (RA_PAGES(ra) != ra_pages) {
+		ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
+	}
+	return;
+}
+EXPORT_SYMBOL_GPL(ra_pages_sync);
+
 /*
  * see if a page needs releasing upon read_cache_pages() failure
  * - the caller of read_cache_pages() may have set PG_private or PG_fscache
@@ -228,11 +245,14 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages))
 		return -EINVAL;
 
+	/* sync ra->ra_pages with bdi->ra_pages */
+	ra_pages_sync(ra, mapping);
+
 	/*
 	 * If the request exceeds the readahead window, allow the read to
 	 * be up to the optimal hardware IO size
 	 */
-	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
+	max_pages = max_t(unsigned long, bdi->io_pages, RA_PAGES(ra));
 	nr_to_read = min(nr_to_read, max_pages);
 	while (nr_to_read) {
 		unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;
@@ -384,10 +404,14 @@ static int try_context_readahead(struct address_space *mapping,
 		unsigned long req_size)
 {
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	unsigned long max_pages = ra->ra_pages;
+	unsigned long max_pages;
 	unsigned long add_pages;
 	pgoff_t prev_offset;
 
+	/* sync ra->ra_pages with bdi->ra_pages */
+	ra_pages_sync(ra, mapping);
+
+	max_pages = RA_PAGES(ra);
 	/*
 	 * If the request exceeds the readahead window, allow the read to
 	 * be up to the optimal hardware IO size
@@ -510,9 +534,12 @@ void page_cache_sync_readahead(struct address_space *mapping,
 			       pgoff_t offset, unsigned long req_size)
 {
 	/* no read-ahead */
-	if (!ra->ra_pages)
+	if (!RA_PAGES(ra))
 		return;
 
+	/* sync ra->ra_pages with bdi->ra_pages */
+	ra_pages_sync(ra, mapping);
+
 	if (blk_cgroup_congested())
 		return;
@@ -549,9 +576,12 @@ void page_cache_sync_readahead(struct address_space *mapping,
 			unsigned long req_size)
 {
 	/* no read-ahead */
-	if (!ra->ra_pages)
+	if (!RA_PAGES(ra))
 		return;
 
+	/* sync ra->ra_pages with bdi->ra_pages */
+	ra_pages_sync(ra, mapping);
+
 	/*
 	 * Same bit is used for PG_readahead and PG_reclaim.
 	 */