From patchwork Fri Dec 20 01:58:27 2019
X-Patchwork-Submitter: Hillf Danton <hdanton@sina.com>
X-Patchwork-Id: 11304669
From: Hillf Danton <hdanton@sina.com>
To: linux-kernel <linux-kernel@vger.kernel.org>
Cc: linux-mm <linux-mm@kvack.org>, Hillf Danton <hdanton@sina.com>
Subject: [RFC] mm: readahead: change ra size for random read
Date: Fri, 20 Dec 2019 09:58:27 +0800
Message-Id: <20191220015827.8904-1-hdanton@sina.com>

Set a smaller readahead size for a random read than for a contiguous
one. A read is deemed random if it leaps farther from the previous
readahead start than the lower-level device can cover even with its
IO capability doubled.
Signed-off-by: Hillf Danton <hdanton@sina.com>
---
--- a/mm/readahead.c
+++ p/mm/readahead.c
@@ -388,6 +388,7 @@ ondemand_readahead(struct address_space
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
 	pgoff_t prev_offset;
+	bool random = true;
 
 	/*
 	 * If the request exceeds the readahead window, allow the read to
@@ -399,8 +400,34 @@ ondemand_readahead(struct address_space
 	/*
 	 * start of file
 	 */
-	if (!offset)
-		goto initial_readahead;
+	if (!offset) {
+fill_ra:
+		ra->start = offset;
+		ra->size = random ?
+			min(bdi->io_pages, bdi->ra_pages) :
+			max(bdi->io_pages, bdi->ra_pages);
+
+		ra->async_size = ra->size > req_size ?
+				 ra->size - req_size : ra->size;
+
+		return ra_submit(ra, mapping, filp);
+	} else {
+		unsigned long leap;
+
+		if (offset > ra->start)
+			leap = offset - ra->start;
+		else
+			leap = ra->start - offset;
+
+		/*
+		 * anything other than page cache cannot help if it is
+		 * too great a leap for the lower-level device to back
+		 * up so feel free to put ra into fire
+		 */
+		random = leap > max(bdi->io_pages, bdi->ra_pages) * 2;
+
+		goto fill_ra;
+	}
 
 	/*
 	 * It's the expected callback offset, assume sequential access.
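
For illustration only (not part of the patch), here is a small userspace
sketch that mirrors the leap-based heuristic above. The io_pages,
ra_pages, ra_start and offset numbers below are made up; in the kernel
they come from the backing device (bdi->io_pages, bdi->ra_pages) and the
file's readahead state.

/* Standalone sketch of the patch's randomness heuristic.
 * All numbers are hypothetical; this is not kernel code.
 */
#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	unsigned long io_pages = 128;	/* hypothetical bdi->io_pages */
	unsigned long ra_pages = 32;	/* hypothetical bdi->ra_pages */
	unsigned long ra_start = 1000;	/* previous readahead start (pgoff_t) */
	unsigned long offsets[] = { 1040, 2000 };	/* example read offsets */
	int i;

	for (i = 0; i < 2; i++) {
		unsigned long offset = offsets[i];
		/* distance from the previous readahead start, in pages */
		unsigned long leap = offset > ra_start ?
				     offset - ra_start : ra_start - offset;
		/* random if the leap exceeds twice the larger window */
		int random = leap > MAX(io_pages, ra_pages) * 2;
		/* random reads get the smaller window, sequential the larger */
		unsigned long size = random ?
				     MIN(io_pages, ra_pages) :
				     MAX(io_pages, ra_pages);

		printf("offset %lu: leap %lu -> %s, ra size %lu pages\n",
		       offset, leap, random ? "random" : "sequential", size);
	}
	return 0;
}

With these made-up numbers, a 40-page leap stays within twice the
128-page window and keeps the large 128-page readahead, while a
1000-page leap is classified as random and drops to the 32-page size.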