From patchwork Fri Feb 26 01:15:25 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12105345
Date: Thu, 25 Feb 2021 17:15:25 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, dchinner@redhat.com, hannes@cmpxchg.org,
    hch@lst.de, hughd@google.com, jack@suse.cz,
    kirill.shutemov@linux.intel.com, linux-mm@kvack.org,
    mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
    william.kucharski@oracle.com, willy@infradead.org,
    yang.shi@linux.alibaba.com
Subject: [patch 001/118] mm: make pagecache tagged lookups return only head pages
Message-ID: <20210226011525.jSx19jby_%akpm@linux-foundation.org>
In-Reply-To: <20210225171452.713967e96554bb6a53e44a19@linux-foundation.org>
User-Agent: s-nail v14.8.16
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: mm: make pagecache tagged lookups return only head pages

Patch series "Overhaul multi-page lookups for THP", v4.

This THP prep patchset changes several page cache iteration APIs to only
return head pages:

 - It's only possible to tag head pages in the page cache, so only return
   head pages, not all their subpages.
 - Factor a lot of common code out of the various batch lookup routines.
 - Add mapping_seek_hole_data().
 - Unify find_get_entries() and pagevec_lookup_entries().
 - Make find_get_entries() only return head pages, like find_get_entry().

These are only loosely connected, but they seem to make sense together as
a series.

This patch (of 14):

Pagecache tags are used for dirty page writeback.  Since dirtiness is
tracked on a per-THP basis, we only want to return the head page rather
than each subpage of a tagged page.  All the filesystems which use huge
pages today are in-memory, so there are no tagged huge pages yet.

Link: https://lkml.kernel.org/r/20201112212641.27837-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/filemap.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

--- a/mm/filemap.c~mm-make-pagecache-tagged-lookups-return-only-head-pages
+++ a/mm/filemap.c
@@ -2062,7 +2062,7 @@ retry:
 EXPORT_SYMBOL(find_get_pages_contig);
 
 /**
- * find_get_pages_range_tag - find and return pages in given range matching @tag
+ * find_get_pages_range_tag - Find and return head pages matching @tag.
  * @mapping: the address_space to search
  * @index: the starting page index
  * @end: The final page index (inclusive)
@@ -2070,8 +2070,9 @@ EXPORT_SYMBOL(find_get_pages_contig);
  * @nr_pages: the maximum number of pages
  * @pages: where the resulting pages are placed
  *
- * Like find_get_pages, except we only return pages which are tagged with
- * @tag.  We update @index to index the next page for the traversal.
+ * Like find_get_pages(), except we only return head pages which are tagged
+ * with @tag.  @index is updated to the index immediately after the last
+ * page we return, ready for the next iteration.
  *
  * Return: the number of pages which were found.
  */
@@ -2105,9 +2106,9 @@ unsigned find_get_pages_range_tag(struct
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
 
-		pages[ret] = find_subpage(page, xas.xa_index);
+		pages[ret] = page;
 		if (++ret == nr_pages) {
-			*index = xas.xa_index + 1;
+			*index = page->index + thp_nr_pages(page);
 			goto out;
 		}
 		continue;
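
For illustration only, and not part of the patch: a minimal sketch of how a
writeback-style caller might consume the head pages this function now
returns.  The helper name walk_dirty_range() is hypothetical; the sketch
assumes the kernel APIs of this era (find_get_pages_range_tag(),
thp_nr_pages(), PAGECACHE_TAG_DIRTY, PAGEVEC_SIZE) and only shows that each
returned entry is a head page covering thp_nr_pages(page) indices, with
*index already advanced past the last page returned.

/*
 * Hypothetical example, not from the patch: walk the dirty head pages in
 * [start, end] and report how many subpages each one covers.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/printk.h>

static void walk_dirty_range(struct address_space *mapping,
			     pgoff_t start, pgoff_t end)
{
	struct page *pages[PAGEVEC_SIZE];
	pgoff_t index = start;
	unsigned int i, nr;

	while ((nr = find_get_pages_range_tag(mapping, &index, end,
					      PAGECACHE_TAG_DIRTY,
					      PAGEVEC_SIZE, pages)) != 0) {
		for (i = 0; i < nr; i++) {
			struct page *page = pages[i];

			/* Each entry is a head page; a THP spans several indices. */
			pr_debug("dirty page at %lu spans %d subpages\n",
				 page->index, thp_nr_pages(page));
			put_page(page);	/* drop the reference taken by the lookup */
		}
		/* index was already advanced past the last returned page. */
	}
}

Note the loop never touches subpages: with this change the lookup hands back
one entry per THP, and the updated *index skips the whole compound page, so
a caller that previously advanced by one index per entry keeps working.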