From patchwork Wed May 30 09:58:04 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10438133
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 04/13] mm: split ->readpages calls to avoid non-contiguous pages lists
Date: Wed, 30 May 2018 11:58:04 +0200
Message-Id: <20180530095813.31245-5-hch@lst.de>
In-Reply-To: <20180530095813.31245-1-hch@lst.de>
References: <20180530095813.31245-1-hch@lst.de>

That way file systems don't have to go looking for non-contiguous pages
and work around them.  It also kicks off I/O earlier, allowing it to
finish earlier and reduce latency.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Reviewed-by: Dave Chinner
---
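For illustration, a rough sketch of the workaround this removes: a
->readpages implementation that can be handed a non-contiguous list has
to watch for index gaps itself and split the I/O accordingly.  The
function below is hypothetical (sketch_readpages and the submit/restart
details are made up); only the list walk via lru_to_page()/page->index
reflects the real interface:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static int sketch_readpages(struct file *file, struct address_space *mapping,
			    struct list_head *pages, unsigned nr_pages)
{
	pgoff_t next_index = 0;
	bool in_batch = false;

	while (!list_empty(pages)) {
		/* pages are consumed from the tail, lowest index first */
		struct page *page = lru_to_page(pages);

		list_del(&page->lru);
		if (in_batch && page->index != next_index) {
			/*
			 * Gap in the list: submit the I/O built so far
			 * and start a new one at page->index.
			 */
		}
		next_index = page->index + 1;
		in_batch = true;
		/*
		 * A real implementation would add the page to the page
		 * cache and to its bio here; we only drop our reference.
		 */
		put_page(page);
	}
	return 0;
}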
 mm/readahead.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index fa4d4b767130..e273f0de3376 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -140,8 +140,8 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 }
 
 /*
- * __do_page_cache_readahead() actually reads a chunk of disk. It allocates all
- * the pages first, then submits them all for I/O. This avoids the very bad
+ * __do_page_cache_readahead() actually reads a chunk of disk. It allocates
+ * the pages first, then submits them for I/O. This avoids the very bad
  * behaviour which would occur if page allocations are causing VM writeback.
  * We really don't want to intermingle reads and writes like that.
  *
@@ -177,8 +177,18 @@ unsigned int __do_page_cache_readahead(struct address_space *mapping,
 		rcu_read_lock();
 		page = radix_tree_lookup(&mapping->i_pages, page_offset);
 		rcu_read_unlock();
-		if (page && !radix_tree_exceptional_entry(page))
+		if (page && !radix_tree_exceptional_entry(page)) {
+			/*
+			 * Page already present? Kick off the current batch of
+			 * contiguous pages before continuing with the next
+			 * batch.
+			 */
+			if (nr_pages)
+				read_pages(mapping, filp, &page_pool, nr_pages,
+						gfp_mask);
+			nr_pages = 0;
 			continue;
+		}
 
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
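After this change the readahead loop follows the pattern below.  This is
a condensed restatement of __do_page_cache_readahead() as patched, with
the end-of-file check and the lookahead marking omitted for brevity:

	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
		pgoff_t page_offset = offset + page_idx;

		rcu_read_lock();
		page = radix_tree_lookup(&mapping->i_pages, page_offset);
		rcu_read_unlock();
		if (page && !radix_tree_exceptional_entry(page)) {
			/* already cached: submit the contiguous batch */
			if (nr_pages)
				read_pages(mapping, filp, &page_pool,
					   nr_pages, gfp_mask);
			nr_pages = 0;
			continue;
		}

		page = __page_cache_alloc(gfp_mask);
		if (!page)
			break;
		page->index = page_offset;
		list_add(&page->lru, &page_pool);
		nr_pages++;
	}
	/* submit whatever is left in the final batch */
	if (nr_pages)
		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);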