From patchwork Thu Jul 15 03:34:47 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378405
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Christoph Hellwig, "Kirill A. Shutemov"
Subject: [PATCH v14 001/138] mm: Convert get_page_unless_zero() to return
 bool
Date: Thu, 15 Jul 2021 04:34:47 +0100
Message-Id: <20210715033704.692967-2-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

atomic_add_unless() returns bool, so remove the widening casts to int
in page_ref_add_unless() and get_page_unless_zero().  This causes gcc
to produce slightly larger code in isolate_migratepages_block(), but
it's not clear that it's worse code.  Net +19 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Kirill A. Shutemov
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/mm.h       | 2 +-
 include/linux/page_ref.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7ca22e6e694a..8dd65290bac0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -755,7 +755,7 @@ static inline int put_page_testzero(struct page *page)
  * This can be called when MMU is off so it must not access
  * any of the virtual mappings.
  */
-static inline int get_page_unless_zero(struct page *page)
+static inline bool get_page_unless_zero(struct page *page)
 {
 	return page_ref_add_unless(page, 1, 0);
 }

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 7ad46f45df39..3a799de8ad52 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -161,9 +161,9 @@ static inline int page_ref_dec_return(struct page *page)
 	return ret;
 }

-static inline int page_ref_add_unless(struct page *page, int nr, int u)
+static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	int ret = atomic_add_unless(&page->_refcount, nr, u);
+	bool ret = atomic_add_unless(&page->_refcount, nr, u);

 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
From patchwork Thu Jul 15 03:34:48 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378407
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Jeff Layton, "Kirill A. Shutemov",
 Vlastimil Babka, William Kucharski, Christoph Hellwig, David Howells
Subject: [PATCH v14 002/138] mm: Introduce struct folio
Date: Thu, 15 Jul 2021 04:34:48 +0100
Message-Id: <20210715033704.692967-3-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

A struct folio is a new abstraction to replace the venerable struct
page.  A function which takes a struct folio argument declares that
it will operate on the entire (possibly compound) page, not just
PAGE_SIZE bytes.  In return, the caller guarantees that the pointer
it is passing does not point to a tail page.  No change to generated
code.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jeff Layton
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Mike Rapoport
---
 Documentation/core-api/mm-api.rst |  1 +
 include/linux/mm.h                | 74 +++++++++++++++++++++++++++++++
 include/linux/mm_types.h          | 60 +++++++++++++++++++++++++
 include/linux/page-flags.h        | 28 ++++++++++++
 4 files changed, 163 insertions(+)

diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst
index a42f9baddfbf..2a94e6164f80 100644
--- a/Documentation/core-api/mm-api.rst
+++ b/Documentation/core-api/mm-api.rst
@@ -95,6 +95,7 @@ More Memory Management Functions
 .. kernel-doc:: mm/mempolicy.c
 .. kernel-doc:: include/linux/mm_types.h
    :internal:
+.. kernel-doc:: include/linux/page-flags.h
 .. kernel-doc:: include/linux/mm.h
    :internal:
 .. kernel-doc:: include/linux/mmzone.h

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8dd65290bac0..5071084a71b9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -949,6 +949,20 @@ static inline unsigned int compound_order(struct page *page)
 	return page[1].compound_order;
 }

+/**
+ * folio_order - The allocation order of a folio.
+ * @folio: The folio.
+ *
+ * A folio is composed of 2^order pages.  See get_order() for the
+ * definition of order.
+ *
+ * Return: The order of the folio.
+ */
+static inline unsigned int folio_order(struct folio *folio)
+{
+	return compound_order(&folio->page);
+}
+
 static inline bool hpage_pincount_available(struct page *page)
 {
 	/*
@@ -1594,6 +1608,65 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 #endif
 }

+/**
+ * folio_nr_pages - The number of pages in the folio.
+ * @folio: The folio.
+ *
+ * Return: A number which is a power of two.
+ */
+static inline unsigned long folio_nr_pages(struct folio *folio)
+{
+	return compound_nr(&folio->page);
+}
+
+/**
+ * folio_next - Move to the next physical folio.
+ * @folio: The folio we're currently operating on.
+ *
+ * If you have physically contiguous memory which may span more than
+ * one folio (eg a &struct bio_vec), use this function to move from one
+ * folio to the next.  Do not use it if the memory is only virtually
+ * contiguous as the folios are almost certainly not adjacent to each
+ * other.  This is the folio equivalent to writing ``page++``.
+ *
+ * Context: We assume that the folios are refcounted and/or locked at a
+ * higher level and do not adjust the reference counts.
+ * Return: The next struct folio.
+ */
+static inline struct folio *folio_next(struct folio *folio)
+{
+	return (struct folio *)folio_page(folio, folio_nr_pages(folio));
+}
+
+/**
+ * folio_shift - The number of bits covered by this folio.
+ * @folio: The folio.
+ *
+ * A folio contains a number of bytes which is a power-of-two in size.
+ * This function tells you which power-of-two the folio is.
+ *
+ * Context: The caller should have a reference on the folio to prevent
+ * it from being split.  It is not necessary for the folio to be locked.
+ * Return: The base-2 logarithm of the size of this folio.
+ */
+static inline unsigned int folio_shift(struct folio *folio)
+{
+	return PAGE_SHIFT + folio_order(folio);
+}
+
+/**
+ * folio_size - The number of bytes in a folio.
+ * @folio: The folio.
+ *
+ * Context: The caller should have a reference on the folio to prevent
+ * it from being split.  It is not necessary for the folio to be locked.
+ * Return: The number of bytes in this folio.
+ */
+static inline size_t folio_size(struct folio *folio)
+{
+	return PAGE_SIZE << folio_order(folio);
+}
+
 /*
  * Some inline functions in vmstat.h depend on page_zone()
  */
@@ -1699,6 +1772,7 @@ extern void pagefault_out_of_memory(void);

 #define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)
 #define offset_in_thp(page, p)	((unsigned long)(p) & (thp_size(page) - 1))
+#define offset_in_folio(folio, p) ((unsigned long)(p) & (folio_size(folio) - 1))

 /*
  * Flags passed to show_mem() and show_free_areas() to suppress output in

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 52bbd2b7cb46..f023eaa866fe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -231,6 +231,66 @@ struct page {
 #endif
 } _struct_page_alignment;

+/**
+ * struct folio - Represents a contiguous set of bytes.
+ * @flags: Identical to the page flags.
+ * @lru: Least Recently Used list; tracks how recently this folio was used.
+ * @mapping: The file this page belongs to, or refers to the anon_vma for
+ *    anonymous pages.
+ * @index: Offset within the file, in units of pages.  For anonymous pages,
+ *    this is the index from the beginning of the mmap.
+ * @private: Filesystem per-folio data (see folio_attach_private()).
+ *    Used for swp_entry_t if folio_test_swapcache().
+ * @_mapcount: Do not access this member directly.  Use folio_mapcount() to
+ *    find out how many times this folio is mapped by userspace.
+ * @_refcount: Do not access this member directly.  Use folio_ref_count()
+ *    to find how many references there are to this folio.
+ * @memcg_data: Memory Control Group data.
+ *
+ * A folio is a physically, virtually and logically contiguous set
+ * of bytes.  It is a power-of-two in size, and it is aligned to that
+ * same power-of-two.  It is at least as large as %PAGE_SIZE.  If it is
+ * in the page cache, it is at a file offset which is a multiple of that
+ * power-of-two.  It may be mapped into userspace at an address which is
+ * at an arbitrary page offset, but its kernel virtual address is aligned
+ * to its size.
+ */
+struct folio {
+	/* private: don't document the anon union */
+	union {
+		struct {
+	/* public: */
+			unsigned long flags;
+			struct list_head lru;
+			struct address_space *mapping;
+			pgoff_t index;
+			void *private;
+			atomic_t _mapcount;
+			atomic_t _refcount;
+#ifdef CONFIG_MEMCG
+			unsigned long memcg_data;
+#endif
+	/* private: the union with struct page is transitional */
+		};
+		struct page page;
+	};
+};
+
+static_assert(sizeof(struct page) == sizeof(struct folio));
+#define FOLIO_MATCH(pg, fl)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct folio, fl))
+FOLIO_MATCH(flags, flags);
+FOLIO_MATCH(lru, lru);
+FOLIO_MATCH(compound_head, lru);
+FOLIO_MATCH(index, index);
+FOLIO_MATCH(private, private);
+FOLIO_MATCH(_mapcount, _mapcount);
+FOLIO_MATCH(_refcount, _refcount);
+#ifdef CONFIG_MEMCG
+FOLIO_MATCH(memcg_data, memcg_data);
+#endif
+#undef FOLIO_MATCH
+
 static inline atomic_t *compound_mapcount_ptr(struct page *page)
 {
 	return &page[1].compound_mapcount;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5922031ffab6..70ede8345538 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -191,6 +191,34 @@ static inline unsigned long _compound_head(const struct page *page)

 #define compound_head(page)	((typeof(page))_compound_head(page))

+/**
+ * page_folio - Converts from page to folio.
+ * @p: The page.
+ *
+ * Every page is part of a folio.  This function cannot be called on a
+ * NULL pointer.
+ *
+ * Context: No reference, nor lock is required on @page.  If the caller
+ * does not hold a reference, this call may race with a folio split, so
+ * it should re-check the folio still contains this page after gaining
+ * a reference on the folio.
+ * Return: The folio which contains this page.
+ */
+#define page_folio(p)		(_Generic((p),				\
+	const struct page *:	(const struct folio *)_compound_head(p), \
+	struct page *:		(struct folio *)_compound_head(p)))
+
+/**
+ * folio_page - Return a page from a folio.
+ * @folio: The folio.
+ * @n: The page number to return.
+ *
+ * @n is relative to the start of the folio.  This function does not
+ * check that the page number lies within @folio; the caller is presumed
+ * to have a reference to the page.
+ */
+#define folio_page(folio, n)	nth_page(&(folio)->page, n)
+
 static __always_inline int PageTail(struct page *page)
 {
 	return READ_ONCE(page->compound_head) & 1;
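To illustrate how the new type reads at a call site (a sketch, not part
of the patch; the helper name is hypothetical): walking a physically
contiguous range one folio at a time with the functions added above.

	/* Hypothetical: sum the bytes spanned by [first, last] when the
	 * range is physically contiguous, e.g. a bio_vec's backing. */
	static size_t span_bytes(struct page *first, struct page *last)
	{
		struct folio *folio = page_folio(first);
		struct folio *end = folio_next(page_folio(last));
		size_t bytes = 0;

		while (folio != end) {
			bytes += folio_size(folio);	/* PAGE_SIZE << folio_order() */
			folio = folio_next(folio);
		}
		return bytes;
	}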
From patchwork Thu Jul 15 03:34:49 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378409
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Zi Yan, Christoph Hellwig, Jeff Layton,
 "Kirill A. Shutemov", Vlastimil Babka, William Kucharski, David Howells
Subject: [PATCH v14 003/138] mm: Add folio_pgdat(), folio_zone() and
 folio_zonenum()
Date: Thu, 15 Jul 2021 04:34:49 +0100
Message-Id: <20210715033704.692967-4-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

These are just convenience wrappers for callers with folios; pgdat and
zone can be reached from tail pages as well as head pages.  No change
to generated code.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: David Howells
Acked-by: Mike Rapoport
---
 include/linux/mm.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5071084a71b9..0e14ac29a3e9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1144,6 +1144,11 @@ static inline enum zone_type page_zonenum(const struct page *page)
 	return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
 }

+static inline enum zone_type folio_zonenum(const struct folio *folio)
+{
+	return page_zonenum(&folio->page);
+}
+
 #ifdef CONFIG_ZONE_DEVICE
 static inline bool is_zone_device_page(const struct page *page)
 {
@@ -1559,6 +1564,16 @@ static inline pg_data_t *page_pgdat(const struct page *page)
 	return NODE_DATA(page_to_nid(page));
 }

+static inline struct zone *folio_zone(const struct folio *folio)
+{
+	return page_zone(&folio->page);
+}
+
+static inline pg_data_t *folio_pgdat(const struct folio *folio)
+{
+	return page_pgdat(&folio->page);
+}
+
 #ifdef SECTION_IN_PAGE_FLAGS
 static inline void set_page_section(struct page *page, unsigned long section)
 {
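A brief caller sketch (not from the patch; the function is hypothetical,
and is_highmem() and pg_data_t's node_id are existing kernel interfaces
used only for illustration): with a folio in hand, the node and zone are
reached without any hidden compound_head() call.

	static bool folio_on_highmem_node(struct folio *folio, int nid)
	{
		return folio_pgdat(folio)->node_id == nid &&
		       is_highmem(folio_zone(folio));
	}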
From patchwork Thu Jul 15 03:34:50 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378411
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Christoph Hellwig, Jeff Layton,
 "Kirill A. Shutemov", Vlastimil Babka, William Kucharski, David Howells
Subject: [PATCH v14 004/138] mm/vmstat: Add functions to account folio
 statistics
Date: Thu, 15 Jul 2021 04:34:50 +0100
Message-Id: <20210715033704.692967-5-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Allow page counters to be more readily modified by callers which have
a folio.  Name these wrappers with 'stat' instead of 'state' as
requested by Linus here:
https://lore.kernel.org/linux-mm/CAHk-=wj847SudR-kt+46fT3+xFFgiwpgThvm7DJWGdi4cVrbnQ@mail.gmail.com/

No change to generated code.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: David Howells
Acked-by: Mike Rapoport
---
 include/linux/vmstat.h | 107 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index d6a6cf53b127..241bd0f53fb9 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -415,6 +415,78 @@ static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_zonestat *pzstats) { }
 #endif		/* CONFIG_SMP */

+static inline void __zone_stat_mod_folio(struct folio *folio,
+		enum zone_stat_item item, long nr)
+{
+	__mod_zone_page_state(folio_zone(folio), item, nr);
+}
+
+static inline void __zone_stat_add_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	__mod_zone_page_state(folio_zone(folio), item, folio_nr_pages(folio));
+}
+
+static inline void __zone_stat_sub_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	__mod_zone_page_state(folio_zone(folio), item, -folio_nr_pages(folio));
+}
+
+static inline void zone_stat_mod_folio(struct folio *folio,
+		enum zone_stat_item item, long nr)
+{
+	mod_zone_page_state(folio_zone(folio), item, nr);
+}
+
+static inline void zone_stat_add_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	mod_zone_page_state(folio_zone(folio), item, folio_nr_pages(folio));
+}
+
+static inline void zone_stat_sub_folio(struct folio *folio,
+		enum zone_stat_item item)
+{
+	mod_zone_page_state(folio_zone(folio), item, -folio_nr_pages(folio));
+}
+
+static inline void __node_stat_mod_folio(struct folio *folio,
+		enum node_stat_item item, long nr)
+{
+	__mod_node_page_state(folio_pgdat(folio), item, nr);
+}
+
+static inline void __node_stat_add_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	__mod_node_page_state(folio_pgdat(folio), item, folio_nr_pages(folio));
+}
+
+static inline void __node_stat_sub_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	__mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
+}
+
+static inline void node_stat_mod_folio(struct folio *folio,
+		enum node_stat_item item, long nr)
+{
+	mod_node_page_state(folio_pgdat(folio), item, nr);
+}
+
+static inline void node_stat_add_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	mod_node_page_state(folio_pgdat(folio), item, folio_nr_pages(folio));
+}
+
+static inline void node_stat_sub_folio(struct folio *folio,
+		enum node_stat_item item)
+{
+	mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
+}
+
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 					     int migratetype)
 {
@@ -543,6 +615,24 @@ static inline void __dec_lruvec_page_state(struct page *page,
 	__mod_lruvec_page_state(page, idx, -1);
 }

+static inline void __lruvec_stat_mod_folio(struct folio *folio,
+					   enum node_stat_item idx, int val)
+{
+	__mod_lruvec_page_state(&folio->page, idx, val);
+}
+
+static inline void __lruvec_stat_add_folio(struct folio *folio,
+					   enum node_stat_item idx)
+{
+	__lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
+}
+
+static inline void __lruvec_stat_sub_folio(struct folio *folio,
+					   enum node_stat_item idx)
+{
+	__lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
+}
+
 static inline void inc_lruvec_page_state(struct page *page,
 					 enum node_stat_item idx)
 {
@@ -555,4 +645,21 @@ static inline void dec_lruvec_page_state(struct page *page,
 	mod_lruvec_page_state(page, idx, -1);
 }

+static inline void lruvec_stat_mod_folio(struct folio *folio,
+					 enum node_stat_item idx, int val)
+{
+	mod_lruvec_page_state(&folio->page, idx, val);
+}
+
+static inline void lruvec_stat_add_folio(struct folio *folio,
+					 enum node_stat_item idx)
+{
+	lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
+}
+
+static inline void lruvec_stat_sub_folio(struct folio *folio,
+					 enum node_stat_item idx)
+{
+	lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
+}
 #endif		/* _LINUX_VMSTAT_H */
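A sketch of the intended usage (not from the patch; the helper and the
choice of NR_FILE_DIRTY are illustrative): one call accounts all the
pages of a folio, where page-based code would loop in PAGE_SIZE units.

	/* Hypothetical: account an entire folio as dirty in one step. */
	static void folio_account_dirtied(struct folio *folio)
	{
		/* Adds folio_nr_pages(folio) to the node's NR_FILE_DIRTY. */
		node_stat_add_folio(folio, NR_FILE_DIRTY);
	}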
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 005/138] mm/debug: Add VM_BUG_ON_FOLIO() and VM_WARN_ON_ONCE_FOLIO() Date: Thu, 15 Jul 2021 04:34:51 +0100 Message-Id: <20210715033704.692967-6-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=JdJli0dj; spf=none (imf06.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: ahc914dy1mrgotahw7tn8dnzpwtx6o6e X-Rspamd-Queue-Id: 76593801CFB1 X-HE-Tag: 1626320486-20228 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These are the folio equivalents of VM_BUG_ON_PAGE and VM_WARN_ON_ONCE_PAGE. No change to generated code. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Zi Yan Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/mmdebug.h | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h index 1935d4c72d10..d7285f8148a3 100644 --- a/include/linux/mmdebug.h +++ b/include/linux/mmdebug.h @@ -22,6 +22,13 @@ void dump_mm(const struct mm_struct *mm); BUG(); \ } \ } while (0) +#define VM_BUG_ON_FOLIO(cond, folio) \ + do { \ + if (unlikely(cond)) { \ + dump_page(&folio->page, "VM_BUG_ON_FOLIO(" __stringify(cond)")");\ + BUG(); \ + } \ + } while (0) #define VM_BUG_ON_VMA(cond, vma) \ do { \ if (unlikely(cond)) { \ @@ -47,6 +54,17 @@ void dump_mm(const struct mm_struct *mm); } \ unlikely(__ret_warn_once); \ }) +#define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({ \ + static bool __section(".data.once") __warned; \ + int __ret_warn_once = !!(cond); \ + \ + if (unlikely(__ret_warn_once && !__warned)) { \ + dump_page(&folio->page, "VM_WARN_ON_ONCE_FOLIO(" __stringify(cond)")");\ + __warned = true; \ + WARN_ON(1); \ + } \ + unlikely(__ret_warn_once); \ +}) #define VM_WARN_ON(cond) (void)WARN_ON(cond) #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond) @@ -55,11 +73,13 @@ void dump_mm(const struct mm_struct *mm); #else #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond) #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond) +#define VM_BUG_ON_FOLIO(cond, folio) VM_BUG_ON(cond) #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond) #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond) #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond) #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond) #define VM_WARN_ON_ONCE_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond) +#define VM_WARN_ON_ONCE_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond) #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond) #define VM_WARN(cond, format...) 
BUILD_BUG_ON_INVALID(cond) #endif From patchwork Thu Jul 15 03:34:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378415 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19EEDC07E96 for ; Thu, 15 Jul 2021 03:42:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BD6BC61260 for ; Thu, 15 Jul 2021 03:42:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BD6BC61260 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 203108D0014; Wed, 14 Jul 2021 23:42:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1B36A8D000A; Wed, 14 Jul 2021 23:42:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 006398D0014; Wed, 14 Jul 2021 23:42:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0073.hostedemail.com [216.40.44.73]) by kanga.kvack.org (Postfix) with ESMTP id D3B2A8D000A for ; Wed, 14 Jul 2021 23:42:16 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id C6F06184B4B07 for ; Thu, 15 Jul 2021 03:42:15 +0000 (UTC) X-FDA: 78363424230.19.32896E4 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 4319150000A6 for ; Thu, 15 Jul 2021 03:42:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=8F6CUdBKfSy3DC7yh+T8CRikfk42ztoDDdMP+r7QcNg=; b=MSqcchLwJphuOw7hD4UqaTqokT xa5l+ANiTEuJsiQwdBxUpSLTJbbhDHU8K8X08TKIRcyxhjYpYCh9yy5CUVenpvBYu8v1cuH+DGjz+ IuMqmbwMo3yRPQPT1jLZQo2Z8OSReqyYOT8m8v5IhbyB9zUpUH3UPh1kL0b6zI7v8IjrN+V65XM0G clHjQv+Bpot1XD47ph6De4CB2DdIpoVNmg1SJa8fYy7aGbrEtDd/VrihaGSbZKNotlzAfKXxge+MR 3hl6j4E+Ec7A2PI72vFRegfbDnkWNNVaCQjt17vKNVobPzzf9m+YGA+0/bfxtnm+dBEX/njn1V4su pTBE65RA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sEl-002uUQ-0T; Thu, 15 Jul 2021 03:40:58 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . 
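A usage sketch (not from the patch; the function and the assertion
chosen are illustrative): the macros take the folio itself, so a failure
dumps folio state rather than an arbitrary page.

	/* Hypothetical debug check: a folio is, by definition, never a
	 * tail page, so this assertion should be unreachable. */
	static void folio_sanity_check(struct folio *folio)
	{
		VM_BUG_ON_FOLIO(PageTail(&folio->page), folio);
	}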
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 006/138] mm: Add folio reference count functions Date: Thu, 15 Jul 2021 04:34:52 +0100 Message-Id: <20210715033704.692967-7-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=MSqcchLw; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: rtxe9h688cf8xqzyrzcbhht81yybcx9g X-Rspamd-Queue-Id: 4319150000A6 X-HE-Tag: 1626320535-523338 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These functions mirror their page reference counterparts. Also add the kernel-doc to the mm-api and correct the return type of page_ref_add_unless() to bool. No change to generated code. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- Documentation/core-api/mm-api.rst | 1 + include/linux/page_ref.h | 88 ++++++++++++++++++++++++++++++- 2 files changed, 88 insertions(+), 1 deletion(-) diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst index 2a94e6164f80..5c459ee2acce 100644 --- a/Documentation/core-api/mm-api.rst +++ b/Documentation/core-api/mm-api.rst @@ -98,4 +98,5 @@ More Memory Management Functions .. kernel-doc:: include/linux/page-flags.h .. kernel-doc:: include/linux/mm.h :internal: +.. kernel-doc:: include/linux/page_ref.h .. kernel-doc:: include/linux/mmzone.h diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h index 3a799de8ad52..717d53c9ddf1 100644 --- a/include/linux/page_ref.h +++ b/include/linux/page_ref.h @@ -67,9 +67,31 @@ static inline int page_ref_count(const struct page *page) return atomic_read(&page->_refcount); } +/** + * folio_ref_count - The reference count on this folio. + * @folio: The folio. + * + * The refcount is usually incremented by calls to folio_get() and + * decremented by calls to folio_put(). Some typical users of the + * folio refcount: + * + * - Each reference from a page table + * - The page cache + * - Filesystem private data + * - The LRU list + * - Pipes + * - Direct IO which references this page in the process address space + * + * Return: The number of references to this folio. 
+ */ +static inline int folio_ref_count(const struct folio *folio) +{ + return page_ref_count(&folio->page); +} + static inline int page_count(const struct page *page) { - return atomic_read(&compound_head(page)->_refcount); + return folio_ref_count(page_folio(page)); } static inline void set_page_count(struct page *page, int v) @@ -79,6 +101,11 @@ static inline void set_page_count(struct page *page, int v) __page_ref_set(page, v); } +static inline void folio_set_count(struct folio *folio, int v) +{ + set_page_count(&folio->page, v); +} + /* * Setup the page count before being freed into the page allocator for * the first time (boot or memory hotplug) @@ -95,6 +122,11 @@ static inline void page_ref_add(struct page *page, int nr) __page_ref_mod(page, nr); } +static inline void folio_ref_add(struct folio *folio, int nr) +{ + page_ref_add(&folio->page, nr); +} + static inline void page_ref_sub(struct page *page, int nr) { atomic_sub(nr, &page->_refcount); @@ -102,6 +134,11 @@ static inline void page_ref_sub(struct page *page, int nr) __page_ref_mod(page, -nr); } +static inline void folio_ref_sub(struct folio *folio, int nr) +{ + page_ref_sub(&folio->page, nr); +} + static inline int page_ref_sub_return(struct page *page, int nr) { int ret = atomic_sub_return(nr, &page->_refcount); @@ -111,6 +148,11 @@ static inline int page_ref_sub_return(struct page *page, int nr) return ret; } +static inline int folio_ref_sub_return(struct folio *folio, int nr) +{ + return page_ref_sub_return(&folio->page, nr); +} + static inline void page_ref_inc(struct page *page) { atomic_inc(&page->_refcount); @@ -118,6 +160,11 @@ static inline void page_ref_inc(struct page *page) __page_ref_mod(page, 1); } +static inline void folio_ref_inc(struct folio *folio) +{ + page_ref_inc(&folio->page); +} + static inline void page_ref_dec(struct page *page) { atomic_dec(&page->_refcount); @@ -125,6 +172,11 @@ static inline void page_ref_dec(struct page *page) __page_ref_mod(page, -1); } +static inline void folio_ref_dec(struct folio *folio) +{ + page_ref_dec(&folio->page); +} + static inline int page_ref_sub_and_test(struct page *page, int nr) { int ret = atomic_sub_and_test(nr, &page->_refcount); @@ -134,6 +186,11 @@ static inline int page_ref_sub_and_test(struct page *page, int nr) return ret; } +static inline int folio_ref_sub_and_test(struct folio *folio, int nr) +{ + return page_ref_sub_and_test(&folio->page, nr); +} + static inline int page_ref_inc_return(struct page *page) { int ret = atomic_inc_return(&page->_refcount); @@ -143,6 +200,11 @@ static inline int page_ref_inc_return(struct page *page) return ret; } +static inline int folio_ref_inc_return(struct folio *folio) +{ + return page_ref_inc_return(&folio->page); +} + static inline int page_ref_dec_and_test(struct page *page) { int ret = atomic_dec_and_test(&page->_refcount); @@ -152,6 +214,11 @@ static inline int page_ref_dec_and_test(struct page *page) return ret; } +static inline int folio_ref_dec_and_test(struct folio *folio) +{ + return page_ref_dec_and_test(&folio->page); +} + static inline int page_ref_dec_return(struct page *page) { int ret = atomic_dec_return(&page->_refcount); @@ -161,6 +228,11 @@ static inline int page_ref_dec_return(struct page *page) return ret; } +static inline int folio_ref_dec_return(struct folio *folio) +{ + return page_ref_dec_return(&folio->page); +} + static inline bool page_ref_add_unless(struct page *page, int nr, int u) { bool ret = atomic_add_unless(&page->_refcount, nr, u); @@ -170,6 +242,11 @@ static inline bool 
page_ref_add_unless(struct page *page, int nr, int u) return ret; } +static inline bool folio_ref_add_unless(struct folio *folio, int nr, int u) +{ + return page_ref_add_unless(&folio->page, nr, u); +} + static inline int page_ref_freeze(struct page *page, int count) { int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count); @@ -179,6 +256,11 @@ static inline int page_ref_freeze(struct page *page, int count) return ret; } +static inline int folio_ref_freeze(struct folio *folio, int count) +{ + return page_ref_freeze(&folio->page, count); +} + static inline void page_ref_unfreeze(struct page *page, int count) { VM_BUG_ON_PAGE(page_count(page) != 0, page); @@ -189,4 +271,8 @@ static inline void page_ref_unfreeze(struct page *page, int count) __page_ref_unfreeze(page, count); } +static inline void folio_ref_unfreeze(struct folio *folio, int count) +{ + page_ref_unfreeze(&folio->page, count); +} #endif From patchwork Thu Jul 15 03:34:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378417 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3076FC07E96 for ; Thu, 15 Jul 2021 03:42:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CD5ED61260 for ; Thu, 15 Jul 2021 03:42:54 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CD5ED61260 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2781D8D0015; Wed, 14 Jul 2021 23:42:55 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 24E7F8D000A; Wed, 14 Jul 2021 23:42:55 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 13DE48D0015; Wed, 14 Jul 2021 23:42:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0196.hostedemail.com [216.40.44.196]) by kanga.kvack.org (Postfix) with ESMTP id DC9C78D000A for ; Wed, 14 Jul 2021 23:42:54 -0400 (EDT) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id D9A3D184B4B07 for ; Thu, 15 Jul 2021 03:42:53 +0000 (UTC) X-FDA: 78363425826.18.E0F3CBD Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf13.hostedemail.com (Postfix) with ESMTP id 9A27310072E1 for ; Thu, 15 Jul 2021 03:42:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ey6KSRO3qBsZEkH3tuZmHciH3JUacugV4XbLHnLKkOE=; b=mDPsWFMULBLy1Oe9cIGZaMCrZk aQdvekha8UMQPzqfF0UtXngByf454kK9HzbAOWRxknyGpGORjJXc/+dm+PcqjCk6TapN9vohOr6fL 
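The try-get idiom from patch 001, re-expressed on folios (a sketch, not
part of the patch; the helper name is hypothetical):

	/* Fails iff the refcount was zero, i.e. the folio is being freed.
	 * On success the caller owns one reference to the whole folio. */
	static bool folio_try_pin(struct folio *folio)
	{
		return folio_ref_add_unless(folio, 1, 0);
	}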
From patchwork Thu Jul 15 03:34:53 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378417
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Zi Yan, Christoph Hellwig, Jeff Layton,
 "Kirill A. Shutemov", Vlastimil Babka, William Kucharski, David Howells
Subject: [PATCH v14 007/138] mm: Add folio_put()
Date: Thu, 15 Jul 2021 04:34:53 +0100
Message-Id: <20210715033704.692967-8-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

If we know we have a folio, we can call folio_put() instead of
put_page() and save the overhead of calling compound_head().  Also
skips the devmap checks.

This commit looks like it should be a no-op, but actually saves 684
bytes of text with the distro-derived config that I'm testing.  Some
functions grow a little while others shrink.  I presume the compiler
is making different inlining decisions.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Zi Yan
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: David Howells
Acked-by: Mike Rapoport
---
 include/linux/mm.h | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0e14ac29a3e9..9eca5da04dec 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -749,6 +749,11 @@ static inline int put_page_testzero(struct page *page)
 	return page_ref_dec_and_test(page);
 }

+static inline int folio_put_testzero(struct folio *folio)
+{
+	return put_page_testzero(&folio->page);
+}
+
 /*
  * Try to grab a ref unless the page has a refcount of zero, return false if
  * that is the case.
@@ -1246,9 +1251,28 @@ static inline __must_check bool try_get_page(struct page *page)
 	return true;
 }

+/**
+ * folio_put - Decrement the reference count on a folio.
+ * @folio: The folio.
+ *
+ * If the folio's reference count reaches zero, the memory will be
+ * released back to the page allocator and may be used by another
+ * allocation immediately.  Do not access the memory or the struct folio
+ * after calling folio_put() unless you can be sure that it wasn't the
+ * last reference.
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void folio_put(struct folio *folio)
+{
+	if (folio_put_testzero(folio))
+		__put_page(&folio->page);
+}
+
 static inline void put_page(struct page *page)
 {
-	page = compound_head(page);
+	struct folio *folio = page_folio(page);

 	/*
 	 * For devmap managed pages we need to catch refcount transition from
@@ -1256,13 +1280,12 @@ static inline void put_page(struct page *page)
 	 * need to inform the device driver through callback.  See
 	 * include/linux/memremap.h and HMM for details.
 	 */
-	if (page_is_devmap_managed(page)) {
-		put_devmap_managed_page(page);
+	if (page_is_devmap_managed(&folio->page)) {
+		put_devmap_managed_page(&folio->page);
 		return;
 	}

-	if (put_page_testzero(page))
-		__put_page(page);
+	folio_put(folio);
 }

 /*
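A sketch of the conversion pattern this enables (not from the patch; the
function is hypothetical): convert once at entry, then every subsequent
reference-count operation on the folio skips compound_head().

	static void consume_page(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* one compound_head() */

		/* ... operate on the entire folio ... */

		/* May free the folio; don't touch it after this call. */
		folio_put(folio);
	}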
Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sGU-002ua7-1A; Thu, 15 Jul 2021 03:42:40 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Zi Yan , Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 008/138] mm: Add folio_get() Date: Thu, 15 Jul 2021 04:34:54 +0100 Message-Id: <20210715033704.692967-9-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: D298C60019BE X-Stat-Signature: qn1xq6zxiq7bt5jgsc3zrhp17u9dpati Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=R9D8WQ7o; dmarc=none; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626320627-91728 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If we know we have a folio, we can call folio_get() instead of get_page() and save the overhead of calling compound_head(). No change to generated code. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Zi Yan Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/mm.h | 26 +++++++++++++++++--------- 1 file changed, 17 insertions(+), 9 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 9eca5da04dec..788fbc4cde0c 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1223,18 +1223,26 @@ static inline bool is_pci_p2pdma_page(const struct page *page) } /* 127: arbitrary random number, small enough to assemble well */ -#define page_ref_zero_or_close_to_overflow(page) \ - ((unsigned int) page_ref_count(page) + 127u <= 127u) +#define folio_ref_zero_or_close_to_overflow(folio) \ + ((unsigned int) folio_ref_count(folio) + 127u <= 127u) + +/** + * folio_get - Increment the reference count on a folio. + * @folio: The folio. + * + * Context: May be called in any context, as long as you know that + * you have a refcount on the folio. If you do not already have one, + * folio_try_get() may be the right interface for you to use. + */ +static inline void folio_get(struct folio *folio) +{ + VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio); + folio_ref_inc(folio); +} static inline void get_page(struct page *page) { - page = compound_head(page); - /* - * Getting a normal page or the head of a compound page - * requires to already have an elevated page->_refcount. 
- */ - VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page); - page_ref_inc(page); + folio_get(page_folio(page)); } bool __must_check try_grab_page(struct page *page, unsigned int flags); From patchwork Thu Jul 15 03:34:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378433 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C0AAC47E48 for ; Thu, 15 Jul 2021 03:44:34 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1477560FF4 for ; Thu, 15 Jul 2021 03:44:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1477560FF4 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6461B8D0017; Wed, 14 Jul 2021 23:44:34 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5F6B28D000A; Wed, 14 Jul 2021 23:44:34 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 449F98D0017; Wed, 14 Jul 2021 23:44:34 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0164.hostedemail.com [216.40.44.164]) by kanga.kvack.org (Postfix) with ESMTP id 1D20E8D000A for ; Wed, 14 Jul 2021 23:44:34 -0400 (EDT) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 04FEA231AA for ; Thu, 15 Jul 2021 03:44:33 +0000 (UTC) X-FDA: 78363430026.13.868562C Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf12.hostedemail.com (Postfix) with ESMTP id 2934D10000AA for ; Thu, 15 Jul 2021 03:44:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=EmaH2tWGsnizSFCoaSWM7Fb22TiQ8PE1VmE4Fxj49Wg=; b=XO8woaGGrq8lLyNOv6hjTO70SJ HZQO+XFrg8twMuPxYvvsI+6Nrnbj2W44Z4lNWxAQJG2qk83PIb9l2gkcpOqHCNtPH8Mv8OJKBcm+7 rE/wpzZiWCapPORNWyV98NhTdDcNaXJmoo0g4fhK+hfFKZhtvGZ0L3UHzlkDtxin1zlPIDEFyHaxV uMH99Ve1pAt8aKKoH9XRiWCNSlMoPOqeB/x1n3lSdNcVWaKb6m7VOAySMjJX6dxk+1mtrVmSOqMWB hWgpayZF3XhQPlnqk51KuIPbKbY2Xi3DJPBav/ip9o56HtU2sfvpF+i60tqBKBRW+cLjPEG5Wb1wa xu7YimHQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sHJ-002ucw-1a; Thu, 15 Jul 2021 03:43:28 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Vlastimil Babka , William Kucharski , Christoph Hellwig , "Kirill A . 
Shutemov" Subject: [PATCH v14 009/138] mm: Add folio_try_get_rcu() Date: Thu, 15 Jul 2021 04:34:55 +0100 Message-Id: <20210715033704.692967-10-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=XO8woaGG; spf=none (imf12.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: w7zqayd8s7pgwm7ijk7kdktns8jrofhh X-Rspamd-Queue-Id: 2934D10000AA X-Rspamd-Server: rspam01 X-HE-Tag: 1626320672-679753 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is the equivalent of page_cache_get_speculative(). Also add folio_ref_try_add_rcu (the equivalent of page_cache_add_speculative) and folio_get_unless_zero() (the equivalent of get_page_unless_zero()). The new kernel-doc attempts to explain from the user's point of view when to use folio_try_get_rcu() and when to use folio_get_unless_zero(), because there seems to be some confusion currently between the users of page_cache_get_speculative() and get_page_unless_zero(). Reimplement page_cache_add_speculative() and page_cache_get_speculative() as wrappers around the folio equivalents, but leave get_page_unless_zero() alone for now. This commit reduces text size by 3 bytes due to slightly different register allocation & instruction selections. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: Christoph Hellwig Acked-by: Kirill A. Shutemov Acked-by: Mike Rapoport --- include/linux/page_ref.h | 66 +++++++++++++++++++++++++++++++ include/linux/pagemap.h | 84 ++-------------------------------------- mm/filemap.c | 20 ++++++++++ 3 files changed, 90 insertions(+), 80 deletions(-) diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h index 717d53c9ddf1..2e677e6ad09f 100644 --- a/include/linux/page_ref.h +++ b/include/linux/page_ref.h @@ -247,6 +247,72 @@ static inline bool folio_ref_add_unless(struct folio *folio, int nr, int u) return page_ref_add_unless(&folio->page, nr, u); } +/** + * folio_try_get - Attempt to increase the refcount on a folio. + * @folio: The folio. + * + * If you do not already have a reference to a folio, you can attempt to + * get one using this function. It may fail if, for example, the folio + * has been freed since you found a pointer to it, or it is frozen for + * the purposes of splitting or migration. + * + * Return: True if the reference count was successfully incremented. + */ +static inline bool folio_try_get(struct folio *folio) +{ + return folio_ref_add_unless(folio, 1, 0); +} + +static inline bool folio_ref_try_add_rcu(struct folio *folio, int count) +{ +#ifdef CONFIG_TINY_RCU + /* + * The caller guarantees the folio will not be freed from interrupt + * context, so (on !SMP) we only need preemption to be disabled + * and TINY_RCU does that for us. + */ +# ifdef CONFIG_PREEMPT_COUNT + VM_BUG_ON(!in_atomic() && !irqs_disabled()); +# endif + VM_BUG_ON_FOLIO(folio_ref_count(folio) == 0, folio); + folio_ref_add(folio, count); +#else + if (unlikely(!folio_ref_add_unless(folio, count, 0))) { + /* Either the folio has been freed, or will be freed. 
*/ + return false; + } +#endif + return true; +} + +/** + * folio_try_get_rcu - Attempt to increase the refcount on a folio. + * @folio: The folio. + * + * This is a version of folio_try_get() optimised for non-SMP kernels. + * If you are still holding the rcu_read_lock() after looking up the + * page and know that the page cannot have its refcount decreased to + * zero in interrupt context, you can use this instead of folio_try_get(). + * + * Example users include get_user_pages_fast() (as pages are not unmapped + * from interrupt context) and the page cache lookups (as pages are not + * truncated from interrupt context). We also know that pages are not + * frozen in interrupt context for the purposes of splitting or migration. + * + * You can also use this function if you're holding a lock that prevents + * pages being frozen & removed; eg the i_pages lock for the page cache + * or the mmap_sem or page table lock for page tables. In this case, + * it will always succeed, and you could have used a plain folio_get(), + * but it's sometimes more convenient to have a common function called + * from both locked and RCU-protected contexts. + * + * Return: True if the reference count was successfully incremented. + */ +static inline bool folio_try_get_rcu(struct folio *folio) +{ + return folio_ref_try_add_rcu(folio, 1); +} + static inline int page_ref_freeze(struct page *page, int count) { int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count); diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index ed02aa522263..db1726b1bc1c 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -172,91 +172,15 @@ static inline struct address_space *page_mapping_file(struct page *page) return page_mapping(page); } -/* - * speculatively take a reference to a page. - * If the page is free (_refcount == 0), then _refcount is untouched, and 0 - * is returned. Otherwise, _refcount is incremented by 1 and 1 is returned. - * - * This function must be called inside the same rcu_read_lock() section as has - * been used to lookup the page in the pagecache radix-tree (or page table): - * this allows allocators to use a synchronize_rcu() to stabilize _refcount. - * - * Unless an RCU grace period has passed, the count of all pages coming out - * of the allocator must be considered unstable. page_count may return higher - * than expected, and put_page must be able to do the right thing when the - * page has been finished with, no matter what it is subsequently allocated - * for (because put_page is what is used here to drop an invalid speculative - * reference). - * - * This is the interesting part of the lockless pagecache (and lockless - * get_user_pages) locking protocol, where the lookup-side (eg. find_get_page) - * has the following pattern: - * 1. find page in radix tree - * 2. conditionally increment refcount - * 3. check the page is still in pagecache (if no, goto 1) - * - * Remove-side that cares about stability of _refcount (eg. reclaim) has the - * following (with the i_pages lock held): - * A. atomically check refcount is correct and set it to 0 (atomic_cmpxchg) - * B. remove page from pagecache - * C. free the page - * - * There are 2 critical interleavings that matter: - * - 2 runs before A: in this case, A sees elevated refcount and bails out - * - A runs before 2: in this case, 2 sees zero refcount and retries; - * subsequently, B will complete and 1 will find no page, causing the - * lookup to return NULL. 
- * - * It is possible that between 1 and 2, the page is removed then the exact same - * page is inserted into the same position in pagecache. That's OK: the - * old find_get_page using a lock could equally have run before or after - * such a re-insertion, depending on order that locks are granted. - * - * Lookups racing against pagecache insertion isn't a big problem: either 1 - * will find the page or it will not. Likewise, the old find_get_page could run - * either before the insertion or afterwards, depending on timing. - */ -static inline int __page_cache_add_speculative(struct page *page, int count) +static inline bool page_cache_add_speculative(struct page *page, int count) { -#ifdef CONFIG_TINY_RCU -# ifdef CONFIG_PREEMPT_COUNT - VM_BUG_ON(!in_atomic() && !irqs_disabled()); -# endif - /* - * Preempt must be disabled here - we rely on rcu_read_lock doing - * this for us. - * - * Pagecache won't be truncated from interrupt context, so if we have - * found a page in the radix tree here, we have pinned its refcount by - * disabling preempt, and hence no need for the "speculative get" that - * SMP requires. - */ - VM_BUG_ON_PAGE(page_count(page) == 0, page); - page_ref_add(page, count); - -#else - if (unlikely(!page_ref_add_unless(page, count, 0))) { - /* - * Either the page has been freed, or will be freed. - * In either case, retry here and the caller should - * do the right thing (see comments above). - */ - return 0; - } -#endif VM_BUG_ON_PAGE(PageTail(page), page); - - return 1; -} - -static inline int page_cache_get_speculative(struct page *page) -{ - return __page_cache_add_speculative(page, 1); + return folio_ref_try_add_rcu((struct folio *)page, count); } -static inline int page_cache_add_speculative(struct page *page, int count) +static inline bool page_cache_get_speculative(struct page *page) { - return __page_cache_add_speculative(page, count); + return page_cache_add_speculative(page, 1); } /** diff --git a/mm/filemap.c b/mm/filemap.c index d1458ecf2f51..634adeacc4c1 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1746,6 +1746,26 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping, } EXPORT_SYMBOL(page_cache_prev_miss); +/* + * Lockless page cache protocol: + * On the lookup side: + * 1. Load the folio from i_pages + * 2. Increment the refcount if it's not zero + * 3. If the folio is not found by xas_reload(), put the refcount and retry + * + * On the removal side: + * A. Freeze the page (by zeroing the refcount if nobody else has a reference) + * B. Remove the page from i_pages + * C. Return the page to the page allocator + * + * This means that any page may have its reference count temporarily + * increased by a speculative page cache (or fast GUP) lookup as it can + * be allocated by another user before the RCU grace period expires. + * Because the refcount temporarily acquired here may end up being the + * last refcount on the page, any page allocation must be freeable by + * put_folio(). + */ + /* * mapping_get_entry - Get a page cache entry. 
* @mapping: the address_space to search From patchwork Thu Jul 15 03:34:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378435 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 579A7C07E96 for ; Thu, 15 Jul 2021 03:45:35 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D936361166 for ; Thu, 15 Jul 2021 03:45:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D936361166 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2C2558D0018; Wed, 14 Jul 2021 23:45:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 299948D000A; Wed, 14 Jul 2021 23:45:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 13A2B8D0018; Wed, 14 Jul 2021 23:45:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0045.hostedemail.com [216.40.44.45]) by kanga.kvack.org (Postfix) with ESMTP id DFE2F8D000A for ; Wed, 14 Jul 2021 23:45:34 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id D181A23E48 for ; Thu, 15 Jul 2021 03:45:33 +0000 (UTC) X-FDA: 78363432546.02.83640EE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf20.hostedemail.com (Postfix) with ESMTP id 0BD79D0000B1 for ; Thu, 15 Jul 2021 03:45:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=95WEAcq4+KHxCDU8NrQdAoNPiC3mFXf2WQoVtLZScH0=; b=nWeGgfgoqXjnGR439wFazelADy pGa98N7slLatTuxX2icKXyoLWNxi9tmdfzPO4Jfi94GGv8nvBr1bmY6peGZcuXp/tHf3h8P4GrTkY YeN+Jtr1HfU3mOH8vY/N4Hr2LDm64ljyZrXv8icKTpfVwGeqjWia2M6qzGA1hS+QBPQC/bU2Km3aq GXT8p/dACtvgcNiIlbDdCCIqbjXubPzhCTt2pW4hTBbsH6BofVHicoJ/WsvyNEm0EwNnTptKuAoxi xKdQN6xLX3kL6A6jizvObb1fdjzG6qDtGl+zuHMMtXMp3ogeM3GgfvjzsHtrLEWzuaXD8hQeLf6LR EedZknmA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sHq-002uet-Nz; Thu, 15 Jul 2021 03:43:53 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . 
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 010/138] mm: Add folio flag manipulation functions Date: Thu, 15 Jul 2021 04:34:56 +0100 Message-Id: <20210715033704.692967-11-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=nWeGgfgo; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: sx3exb8k6dqnij1ed5zpgwyxy8h8rot5 X-Rspamd-Queue-Id: 0BD79D0000B1 X-Rspamd-Server: rspam01 X-HE-Tag: 1626320732-411959 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These new functions are the folio analogues of the various PageFlags functions. If CONFIG_DEBUG_VM_PGFLAGS is enabled, we check the folio is not a tail page at every invocation. This will also catch the PagePoisoned case as a poisoned page has every bit set, which would include PageTail. This saves 1684 bytes of text with the distro-derived config that I'm testing due to removing a double call to compound_head() in PageSwapCache(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/page-flags.h | 219 ++++++++++++++++++++++++++----------- 1 file changed, 156 insertions(+), 63 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 70ede8345538..ddb660688086 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -143,6 +143,8 @@ enum pageflags { #endif __NR_PAGEFLAGS, + PG_readahead = PG_reclaim, + /* Filesystems */ PG_checked = PG_owner_priv_1, @@ -243,6 +245,15 @@ static inline void page_init_poison(struct page *page, size_t size) } #endif +static unsigned long *folio_flags(struct folio *folio, unsigned n) +{ + struct page *page = &folio->page; + + VM_BUG_ON_PGFLAGS(PageTail(page), page); + VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page); + return &page[n].flags; +} + /* * Page flags policies wrt compound pages * @@ -287,36 +298,64 @@ static inline void page_init_poison(struct page *page, size_t size) VM_BUG_ON_PGFLAGS(!PageHead(page), page); \ PF_POISONED_CHECK(&page[1]); }) +/* Which page is the flag stored in */ +#define FOLIO_PF_ANY 0 +#define FOLIO_PF_HEAD 0 +#define FOLIO_PF_ONLY_HEAD 0 +#define FOLIO_PF_NO_TAIL 0 +#define FOLIO_PF_NO_COMPOUND 0 +#define FOLIO_PF_SECOND 1 + /* * Macros to create function definitions for page flags */ #define TESTPAGEFLAG(uname, lname, policy) \ +static __always_inline bool folio_test_##lname(struct folio *folio) \ +{ return test_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline int Page##uname(struct page *page) \ - { return test_bit(PG_##lname, &policy(page, 0)->flags); } +{ return test_bit(PG_##lname, &policy(page, 0)->flags); } #define SETPAGEFLAG(uname, lname, policy) \ +static __always_inline \ +void folio_set_##lname(struct folio *folio) \ +{ set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline void SetPage##uname(struct page *page) \ - { set_bit(PG_##lname, 
&policy(page, 1)->flags); } +{ set_bit(PG_##lname, &policy(page, 1)->flags); } #define CLEARPAGEFLAG(uname, lname, policy) \ +static __always_inline \ +void folio_clear_##lname(struct folio *folio) \ +{ clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline void ClearPage##uname(struct page *page) \ - { clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ clear_bit(PG_##lname, &policy(page, 1)->flags); } #define __SETPAGEFLAG(uname, lname, policy) \ +static __always_inline \ +void __folio_set_##lname(struct folio *folio) \ +{ __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline void __SetPage##uname(struct page *page) \ - { __set_bit(PG_##lname, &policy(page, 1)->flags); } +{ __set_bit(PG_##lname, &policy(page, 1)->flags); } #define __CLEARPAGEFLAG(uname, lname, policy) \ +static __always_inline \ +void __folio_clear_##lname(struct folio *folio) \ +{ __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline void __ClearPage##uname(struct page *page) \ - { __clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ __clear_bit(PG_##lname, &policy(page, 1)->flags); } #define TESTSETFLAG(uname, lname, policy) \ +static __always_inline \ +bool folio_test_set_##lname(struct folio *folio) \ +{ return test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline int TestSetPage##uname(struct page *page) \ - { return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); } +{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); } #define TESTCLEARFLAG(uname, lname, policy) \ +static __always_inline \ +bool folio_test_clear_##lname(struct folio *folio) \ +{ return test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ static __always_inline int TestClearPage##uname(struct page *page) \ - { return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); } #define PAGEFLAG(uname, lname, policy) \ TESTPAGEFLAG(uname, lname, policy) \ @@ -332,29 +371,37 @@ static __always_inline int TestClearPage##uname(struct page *page) \ TESTSETFLAG(uname, lname, policy) \ TESTCLEARFLAG(uname, lname, policy) -#define TESTPAGEFLAG_FALSE(uname) \ +#define TESTPAGEFLAG_FALSE(uname, lname) \ +static inline bool folio_test_##lname(const struct folio *folio) { return 0; } \ static inline int Page##uname(const struct page *page) { return 0; } -#define SETPAGEFLAG_NOOP(uname) \ +#define SETPAGEFLAG_NOOP(uname, lname) \ +static inline void folio_set_##lname(struct folio *folio) { } \ static inline void SetPage##uname(struct page *page) { } -#define CLEARPAGEFLAG_NOOP(uname) \ +#define CLEARPAGEFLAG_NOOP(uname, lname) \ +static inline void folio_clear_##lname(struct folio *folio) { } \ static inline void ClearPage##uname(struct page *page) { } -#define __CLEARPAGEFLAG_NOOP(uname) \ +#define __CLEARPAGEFLAG_NOOP(uname, lname) \ +static inline void __folio_clear_##lname(struct folio *folio) { } \ static inline void __ClearPage##uname(struct page *page) { } -#define TESTSETFLAG_FALSE(uname) \ +#define TESTSETFLAG_FALSE(uname, lname) \ +static inline bool folio_test_set_##lname(struct folio *folio) \ +{ return 0; } \ static inline int TestSetPage##uname(struct page *page) { return 0; } -#define TESTCLEARFLAG_FALSE(uname) \ +#define TESTCLEARFLAG_FALSE(uname, lname) \ +static inline bool folio_test_clear_##lname(struct folio *folio) \ +{ return 0; } \ static inline int TestClearPage##uname(struct page *page) { return 0; 
} -#define PAGEFLAG_FALSE(uname) TESTPAGEFLAG_FALSE(uname) \ - SETPAGEFLAG_NOOP(uname) CLEARPAGEFLAG_NOOP(uname) +#define PAGEFLAG_FALSE(uname, lname) TESTPAGEFLAG_FALSE(uname, lname) \ + SETPAGEFLAG_NOOP(uname, lname) CLEARPAGEFLAG_NOOP(uname, lname) -#define TESTSCFLAG_FALSE(uname) \ - TESTSETFLAG_FALSE(uname) TESTCLEARFLAG_FALSE(uname) +#define TESTSCFLAG_FALSE(uname, lname) \ + TESTSETFLAG_FALSE(uname, lname) TESTCLEARFLAG_FALSE(uname, lname) __PAGEFLAG(Locked, locked, PF_NO_TAIL) PAGEFLAG(Waiters, waiters, PF_ONLY_HEAD) __CLEARPAGEFLAG(Waiters, waiters, PF_ONLY_HEAD) @@ -410,8 +457,8 @@ PAGEFLAG(MappedToDisk, mappedtodisk, PF_NO_TAIL) /* PG_readahead is only used for reads; PG_reclaim is only for writes */ PAGEFLAG(Reclaim, reclaim, PF_NO_TAIL) TESTCLEARFLAG(Reclaim, reclaim, PF_NO_TAIL) -PAGEFLAG(Readahead, reclaim, PF_NO_COMPOUND) - TESTCLEARFLAG(Readahead, reclaim, PF_NO_COMPOUND) +PAGEFLAG(Readahead, readahead, PF_NO_COMPOUND) + TESTCLEARFLAG(Readahead, readahead, PF_NO_COMPOUND) #ifdef CONFIG_HIGHMEM /* @@ -420,22 +467,25 @@ PAGEFLAG(Readahead, reclaim, PF_NO_COMPOUND) */ #define PageHighMem(__p) is_highmem_idx(page_zonenum(__p)) #else -PAGEFLAG_FALSE(HighMem) +PAGEFLAG_FALSE(HighMem, highmem) #endif #ifdef CONFIG_SWAP -static __always_inline int PageSwapCache(struct page *page) +static __always_inline bool folio_test_swapcache(struct folio *folio) { -#ifdef CONFIG_THP_SWAP - page = compound_head(page); -#endif - return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags); + return folio_test_swapbacked(folio) && + test_bit(PG_swapcache, folio_flags(folio, 0)); +} +static __always_inline bool PageSwapCache(struct page *page) +{ + return folio_test_swapcache(page_folio(page)); } + SETPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL) CLEARPAGEFLAG(SwapCache, swapcache, PF_NO_TAIL) #else -PAGEFLAG_FALSE(SwapCache) +PAGEFLAG_FALSE(SwapCache, swapcache) #endif PAGEFLAG(Unevictable, unevictable, PF_HEAD) @@ -447,14 +497,14 @@ PAGEFLAG(Mlocked, mlocked, PF_NO_TAIL) __CLEARPAGEFLAG(Mlocked, mlocked, PF_NO_TAIL) TESTSCFLAG(Mlocked, mlocked, PF_NO_TAIL) #else -PAGEFLAG_FALSE(Mlocked) __CLEARPAGEFLAG_NOOP(Mlocked) - TESTSCFLAG_FALSE(Mlocked) +PAGEFLAG_FALSE(Mlocked, mlocked) __CLEARPAGEFLAG_NOOP(Mlocked, mlocked) + TESTSCFLAG_FALSE(Mlocked, mlocked) #endif #ifdef CONFIG_ARCH_USES_PG_UNCACHED PAGEFLAG(Uncached, uncached, PF_NO_COMPOUND) #else -PAGEFLAG_FALSE(Uncached) +PAGEFLAG_FALSE(Uncached, uncached) #endif #ifdef CONFIG_MEMORY_FAILURE @@ -463,7 +513,7 @@ TESTSCFLAG(HWPoison, hwpoison, PF_ANY) #define __PG_HWPOISON (1UL << PG_hwpoison) extern bool take_page_off_buddy(struct page *page); #else -PAGEFLAG_FALSE(HWPoison) +PAGEFLAG_FALSE(HWPoison, hwpoison) #define __PG_HWPOISON 0 #endif @@ -477,7 +527,7 @@ PAGEFLAG(Idle, idle, PF_ANY) #ifdef CONFIG_KASAN_HW_TAGS PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD) #else -PAGEFLAG_FALSE(SkipKASanPoison) +PAGEFLAG_FALSE(SkipKASanPoison, skip_kasan_poison) #endif /* @@ -515,10 +565,14 @@ static __always_inline int PageMappingFlags(struct page *page) return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0; } -static __always_inline int PageAnon(struct page *page) +static __always_inline bool folio_test_anon(struct folio *folio) +{ + return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0; +} + +static __always_inline bool PageAnon(struct page *page) { - page = compound_head(page); - return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0; + return folio_test_anon(page_folio(page)); } static __always_inline int 
__PageMovable(struct page *page) @@ -534,30 +588,32 @@ static __always_inline int __PageMovable(struct page *page) * is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any * anon_vma, but to that page's node of the stable tree. */ -static __always_inline int PageKsm(struct page *page) +static __always_inline bool folio_test_ksm(struct folio *folio) { - page = compound_head(page); - return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) == + return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_KSM; } + +static __always_inline bool PageKsm(struct page *page) +{ + return folio_test_ksm(page_folio(page)); +} #else -TESTPAGEFLAG_FALSE(Ksm) +TESTPAGEFLAG_FALSE(Ksm, ksm) #endif u64 stable_page_flags(struct page *page); -static inline int PageUptodate(struct page *page) +static inline bool folio_test_uptodate(struct folio *folio) { - int ret; - page = compound_head(page); - ret = test_bit(PG_uptodate, &(page)->flags); + bool ret = test_bit(PG_uptodate, folio_flags(folio, 0)); /* - * Must ensure that the data we read out of the page is loaded - * _after_ we've loaded page->flags to check for PageUptodate. - * We can skip the barrier if the page is not uptodate, because + * Must ensure that the data we read out of the folio is loaded + * _after_ we've loaded folio->flags to check the uptodate bit. + * We can skip the barrier if the folio is not uptodate, because * we wouldn't be reading anything from it. * - * See SetPageUptodate() for the other side of the story. + * See folio_mark_uptodate() for the other side of the story. */ if (ret) smp_rmb(); @@ -565,23 +621,36 @@ static inline int PageUptodate(struct page *page) return ret; } -static __always_inline void __SetPageUptodate(struct page *page) +static inline int PageUptodate(struct page *page) +{ + return folio_test_uptodate(page_folio(page)); +} + +static __always_inline void __folio_mark_uptodate(struct folio *folio) { - VM_BUG_ON_PAGE(PageTail(page), page); smp_wmb(); - __set_bit(PG_uptodate, &page->flags); + __set_bit(PG_uptodate, folio_flags(folio, 0)); } -static __always_inline void SetPageUptodate(struct page *page) +static __always_inline void folio_mark_uptodate(struct folio *folio) { - VM_BUG_ON_PAGE(PageTail(page), page); /* * Memory barrier must be issued before setting the PG_uptodate bit, - * so that all previous stores issued in order to bring the page - * uptodate are actually visible before PageUptodate becomes true. + * so that all previous stores issued in order to bring the folio + * uptodate are actually visible before folio_test_uptodate becomes true. 
*/ smp_wmb(); - set_bit(PG_uptodate, &page->flags); + set_bit(PG_uptodate, folio_flags(folio, 0)); +} + +static __always_inline void __SetPageUptodate(struct page *page) +{ + __folio_mark_uptodate((struct folio *)page); +} + +static __always_inline void SetPageUptodate(struct page *page) +{ + folio_mark_uptodate((struct folio *)page); } CLEARPAGEFLAG(Uptodate, uptodate, PF_NO_TAIL) @@ -606,6 +675,17 @@ static inline void set_page_writeback_keepwrite(struct page *page) __PAGEFLAG(Head, head, PF_ANY) CLEARPAGEFLAG(Head, head, PF_ANY) +/* Whether there are one or multiple pages in a folio */ +static inline bool folio_single(struct folio *folio) +{ + return !folio_test_head(folio); +} + +static inline bool folio_multi(struct folio *folio) +{ + return folio_test_head(folio); +} + static __always_inline void set_compound_head(struct page *page, struct page *head) { WRITE_ONCE(page->compound_head, (unsigned long)head + 1); @@ -629,12 +709,15 @@ static inline void ClearPageCompound(struct page *page) #ifdef CONFIG_HUGETLB_PAGE int PageHuge(struct page *page); int PageHeadHuge(struct page *page); +static inline bool folio_test_hugetlb(struct folio *folio) +{ + return PageHeadHuge(&folio->page); +} #else -TESTPAGEFLAG_FALSE(Huge) -TESTPAGEFLAG_FALSE(HeadHuge) +TESTPAGEFLAG_FALSE(Huge, hugetlb) +TESTPAGEFLAG_FALSE(HeadHuge, headhuge) #endif - #ifdef CONFIG_TRANSPARENT_HUGEPAGE /* * PageHuge() only returns true for hugetlbfs pages, but not for @@ -650,6 +733,11 @@ static inline int PageTransHuge(struct page *page) return PageHead(page); } +static inline bool folio_test_transhuge(struct folio *folio) +{ + return folio_test_head(folio); +} + /* * PageTransCompound returns true for both transparent huge pages * and hugetlbfs pages, so it should only be called when it's known @@ -723,12 +811,12 @@ static inline int PageTransTail(struct page *page) PAGEFLAG(DoubleMap, double_map, PF_SECOND) TESTSCFLAG(DoubleMap, double_map, PF_SECOND) #else -TESTPAGEFLAG_FALSE(TransHuge) -TESTPAGEFLAG_FALSE(TransCompound) -TESTPAGEFLAG_FALSE(TransCompoundMap) -TESTPAGEFLAG_FALSE(TransTail) -PAGEFLAG_FALSE(DoubleMap) - TESTSCFLAG_FALSE(DoubleMap) +TESTPAGEFLAG_FALSE(TransHuge, transhuge) +TESTPAGEFLAG_FALSE(TransCompound, transcompound) +TESTPAGEFLAG_FALSE(TransCompoundMap, transcompoundmap) +TESTPAGEFLAG_FALSE(TransTail, transtail) +PAGEFLAG_FALSE(DoubleMap, double_map) + TESTSCFLAG_FALSE(DoubleMap, double_map) #endif /* @@ -903,6 +991,11 @@ static inline int page_has_private(struct page *page) return !!(page->flags & PAGE_FLAGS_PRIVATE); } +static inline bool folio_has_private(struct folio *folio) +{ + return page_has_private(&folio->page); +} + #undef PF_ANY #undef PF_HEAD #undef PF_ONLY_HEAD From patchwork Thu Jul 15 03:34:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378437 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB154C07E96 for ; Thu, 15 Jul 2021 03:46:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) 
with ESMTP id 717BC6115A for ; Thu, 15 Jul 2021 03:46:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 717BC6115A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C36E48D0019; Wed, 14 Jul 2021 23:46:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BE7068D000A; Wed, 14 Jul 2021 23:46:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A87EC8D0019; Wed, 14 Jul 2021 23:46:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0227.hostedemail.com [216.40.44.227]) by kanga.kvack.org (Postfix) with ESMTP id 84B728D000A for ; Wed, 14 Jul 2021 23:46:30 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 7160122C05 for ; Thu, 15 Jul 2021 03:46:29 +0000 (UTC) X-FDA: 78363434898.14.A1278E0 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf30.hostedemail.com (Postfix) with ESMTP id 13C3EE0016B0 for ; Thu, 15 Jul 2021 03:46:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=YOwUI2qAcdszbkKuR5ckiu9qObMI6+1n7AV5Fnqf7Ss=; b=ajidZu6ue4C4aWSfTwccXKaMm6 EYd43L44kymkkrz0PQugRKDWMcs8VM9/AuIRUooy2HaM7n1XDztksFWNrIsJA8qnPtIRQTSYmeEI/ ftiTujTzvrFwBmtUONIjrk4ASJDitji7QbglQoEnbS6uh9wf5tLh6Wpzs7LB7xu+FUeCs752jlXUe 5jurBZ4PPdryy9rP8HewfS6i6WRaCfaeukZPx6YHoYKWatq2qjDk9XPuqneG0lqUZZa+N1FYPNGXb E5V11lVrhLRHWJkFCjl0SUR6ur+glMLM1v0hoFtUN3iPw2CwshLZGkuDDhYt8b2AfvKV+doV6Q4La X4xxoGQA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sIg-002uiH-Ox; Thu, 15 Jul 2021 03:45:04 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Yu Zhao , Christoph Hellwig , David Howells , "Kirill A . Shutemov" Subject: [PATCH v14 011/138] mm/lru: Add folio LRU functions Date: Thu, 15 Jul 2021 04:34:57 +0100 Message-Id: <20210715033704.692967-12-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 13C3EE0016B0 X-Stat-Signature: 9xe65c9nxkk6xojz9n1y96gis5b3c4gi Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=ajidZu6u; dmarc=none; spf=none (imf30.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626320788-393117 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Handle arbitrary-order folios being added to the LRU. By definition, all pages being added to the LRU were already head or base pages, but call page_folio() on them anyway to get the type right and avoid the buried calls to compound_head(). Saves 783 bytes of kernel text; no functions grow. 
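As an illustration (a sketch, not part of the patch: lru_rotate_folio()
is a hypothetical caller that already holds the lruvec lock, just as the
page-based helpers required), the new functions compose like this:

	static void lru_rotate_folio(struct lruvec *lruvec, struct folio *folio)
	{
		VM_BUG_ON_FOLIO(!folio_test_lru(folio), folio);

		/* lruvec_del_folio() subtracts folio_nr_pages() from the LRU size */
		lruvec_del_folio(lruvec, folio);
		/* lruvec_add_folio_tail() re-derives the list via folio_lru_list() */
		lruvec_add_folio_tail(lruvec, folio);
	}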
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Yu Zhao Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Kirill A. Shutemov Acked-by: Mike Rapoport Signed-off-by: Mike Rapoport Acked-by: Vlastimil Babka --- include/linux/mm_inline.h | 98 ++++++++++++++++++++++------------ include/trace/events/pagemap.h | 2 +- 2 files changed, 65 insertions(+), 35 deletions(-) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index 355ea1ee32bd..ee155d19885e 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -6,22 +6,27 @@ #include /** - * page_is_file_lru - should the page be on a file LRU or anon LRU? - * @page: the page to test + * folio_is_file_lru - should the folio be on a file LRU or anon LRU? + * @folio: the folio to test * - * Returns 1 if @page is a regular filesystem backed page cache page or a lazily - * freed anonymous page (e.g. via MADV_FREE). Returns 0 if @page is a normal - * anonymous page, a tmpfs page or otherwise ram or swap backed page. Used by - * functions that manipulate the LRU lists, to sort a page onto the right LRU - * list. + * Returns 1 if @folio is a regular filesystem backed page cache folio + * or a lazily freed anonymous folio (e.g. via MADV_FREE). Returns 0 if + * @folio is a normal anonymous folio, a tmpfs folio or otherwise ram or + * swap backed folio. Used by functions that manipulate the LRU lists, + * to sort a folio onto the right LRU list. * * We would like to get this info without a page flag, but the state - * needs to survive until the page is last deleted from the LRU, which + * needs to survive until the folio is last deleted from the LRU, which * could be as far down as __page_cache_release. */ +static inline int folio_is_file_lru(struct folio *folio) +{ + return !folio_test_swapbacked(folio); +} + static inline int page_is_file_lru(struct page *page) { - return !PageSwapBacked(page); + return folio_is_file_lru(page_folio(page)); } static __always_inline void update_lru_size(struct lruvec *lruvec, @@ -39,69 +44,94 @@ static __always_inline void update_lru_size(struct lruvec *lruvec, } /** - * __clear_page_lru_flags - clear page lru flags before releasing a page - * @page: the page that was on lru and now has a zero reference + * __folio_clear_lru_flags - clear page lru flags before releasing a page + * @folio: The folio that was on lru and now has a zero reference */ -static __always_inline void __clear_page_lru_flags(struct page *page) +static __always_inline void __folio_clear_lru_flags(struct folio *folio) { - VM_BUG_ON_PAGE(!PageLRU(page), page); + VM_BUG_ON_FOLIO(!folio_test_lru(folio), folio); - __ClearPageLRU(page); + __folio_clear_lru(folio); /* this shouldn't happen, so leave the flags to bad_page() */ - if (PageActive(page) && PageUnevictable(page)) + if (folio_test_active(folio) && folio_test_unevictable(folio)) return; - __ClearPageActive(page); - __ClearPageUnevictable(page); + __folio_clear_active(folio); + __folio_clear_unevictable(folio); +} + +static __always_inline void __clear_page_lru_flags(struct page *page) +{ + __folio_clear_lru_flags(page_folio(page)); } /** - * page_lru - which LRU list should a page be on? - * @page: the page to test + * folio_lru_list - which LRU list should a folio be on? + * @folio: the folio to test * - * Returns the LRU list a page should be on, as an index + * Returns the LRU list a folio should be on, as an index * into the array of LRU lists. 
*/ -static __always_inline enum lru_list page_lru(struct page *page) +static __always_inline enum lru_list folio_lru_list(struct folio *folio) { enum lru_list lru; - VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page); + VM_BUG_ON_FOLIO(folio_test_active(folio) && folio_test_unevictable(folio), folio); - if (PageUnevictable(page)) + if (folio_test_unevictable(folio)) return LRU_UNEVICTABLE; - lru = page_is_file_lru(page) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON; - if (PageActive(page)) + lru = folio_is_file_lru(folio) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON; + if (folio_test_active(folio)) lru += LRU_ACTIVE; return lru; } +static __always_inline +void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio) +{ + enum lru_list lru = folio_lru_list(folio); + + update_lru_size(lruvec, lru, folio_zonenum(folio), + folio_nr_pages(folio)); + list_add(&folio->lru, &lruvec->lists[lru]); +} + static __always_inline void add_page_to_lru_list(struct page *page, struct lruvec *lruvec) { - enum lru_list lru = page_lru(page); + lruvec_add_folio(lruvec, page_folio(page)); +} + +static __always_inline +void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio) +{ + enum lru_list lru = folio_lru_list(folio); - update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page)); - list_add(&page->lru, &lruvec->lists[lru]); + update_lru_size(lruvec, lru, folio_zonenum(folio), + folio_nr_pages(folio)); + list_add_tail(&folio->lru, &lruvec->lists[lru]); } static __always_inline void add_page_to_lru_list_tail(struct page *page, struct lruvec *lruvec) { - enum lru_list lru = page_lru(page); + lruvec_add_folio_tail(lruvec, page_folio(page)); +} - update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page)); - list_add_tail(&page->lru, &lruvec->lists[lru]); +static __always_inline +void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio) +{ + list_del(&folio->lru); + update_lru_size(lruvec, folio_lru_list(folio), folio_zonenum(folio), + -folio_nr_pages(folio)); } static __always_inline void del_page_from_lru_list(struct page *page, struct lruvec *lruvec) { - list_del(&page->lru); - update_lru_size(lruvec, page_lru(page), page_zonenum(page), - -thp_nr_pages(page)); + lruvec_del_folio(lruvec, page_folio(page)); } #endif diff --git a/include/trace/events/pagemap.h b/include/trace/events/pagemap.h index 1d28431e85bd..92ad176210ff 100644 --- a/include/trace/events/pagemap.h +++ b/include/trace/events/pagemap.h @@ -41,7 +41,7 @@ TRACE_EVENT(mm_lru_insertion, TP_fast_assign( __entry->page = page; __entry->pfn = page_to_pfn(page); - __entry->lru = page_lru(page); + __entry->lru = folio_lru_list(page_folio(page)); __entry->flags = trace_pagemap_flags(page); ), From patchwork Thu Jul 15 03:34:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378439 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70C5FC07E96 for ; Thu, 15 Jul 2021 03:46:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org 
(Postfix) with ESMTP id 1B0FF6115A for ; Thu, 15 Jul 2021 03:46:46 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1B0FF6115A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7073A8D001A; Wed, 14 Jul 2021 23:46:46 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 68F408D000A; Wed, 14 Jul 2021 23:46:46 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5311F8D001A; Wed, 14 Jul 2021 23:46:46 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0224.hostedemail.com [216.40.44.224]) by kanga.kvack.org (Postfix) with ESMTP id 302DA8D000A for ; Wed, 14 Jul 2021 23:46:46 -0400 (EDT) Received: from smtpin40.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 27FFC8248047 for ; Thu, 15 Jul 2021 03:46:45 +0000 (UTC) X-FDA: 78363435570.40.03EF0CE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf10.hostedemail.com (Postfix) with ESMTP id BF7CD60019B9 for ; Thu, 15 Jul 2021 03:46:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=RlVk2C5vivb5uiOtN9+ZTOqv1LuuWnSFLIpXMjTgz7k=; b=H1Mt7U0RaconMyxIEtQt0ZO25I joUGDkgkCgmdQdKMcHMoYZhIV26rssBPUxW72w5s9fyoC8TJY2z3TfHu3Wyc5f2aDNeI4g86qsVf8 tpuY2zX2M7bf3p2Lwf3LHusmIVQkAaa4ApyP+SNHzk6ti1XnlilEo9zYTH8LL2vo2o3OjohBr/pgI iz0G1+TmVVX330mKylhrHsmnXGFbEOfCIPDgKPZoj1jnj2v8+l3GnWAybIxu072DXp/LpEnFqXUiG s9sOtHISeVjOTI6iOjI4G87z7XryTDB71Foe2ISvKM2Wb/RQsA6MR41c468rJEilKBICGaw9iSINS LEVEXI/g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sJX-002ul3-Jx; Thu, 15 Jul 2021 03:45:48 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 012/138] mm: Handle per-folio private data Date: Thu, 15 Jul 2021 04:34:58 +0100 Message-Id: <20210715033704.692967-13-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=H1Mt7U0R; spf=none (imf10.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: BF7CD60019B9 X-Stat-Signature: kzius1dghdoghmmkhybsgzf35ynwqtss X-HE-Tag: 1626320804-988410 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add folio_get_private() which mirrors page_private() -- ie folio private data is the same as page private data. The only difference is that these return a void * instead of an unsigned long, which matches the majority of users. 
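For example (a sketch, not part of the patch; folio_buffers_sketch() is a
hypothetical helper, with buffer heads standing in for whatever a
filesystem attaches), the void * return type removes the usual cast:

	static struct buffer_head *folio_buffers_sketch(struct folio *folio)
	{
		/* was: (struct buffer_head *)page_private(page) */
		return folio_get_private(folio);
	}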
Turn attach_page_private() into folio_attach_private() and reimplement attach_page_private() as a wrapper. No filesystem which uses page private data currently supports compound pages, so we're free to define the rules. attach_page_private() may only be called on a head page; if you want to add private data to a tail page, you can call set_page_private() directly (and shouldn't increment the page refcount! That should be done when adding private data to the head page / folio). This saves 813 bytes of text with the distro-derived config that I'm testing due to removing the calls to compound_head() in get_page() & put_page(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/mm_types.h | 11 +++++++++ include/linux/pagemap.h | 48 ++++++++++++++++++++++++---------------- 2 files changed, 40 insertions(+), 19 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index f023eaa866fe..c4dd41bb1019 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -309,6 +309,12 @@ static inline atomic_t *compound_pincount_ptr(struct page *page) #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) +/* + * page_private can be used on tail pages. However, PagePrivate is only + * checked by the VM on the head page. So page_private on the tail pages + * should be used for data that's ancillary to the head page (eg attaching + * buffer heads to tail pages after attaching buffer heads to the head page) + */ #define page_private(page) ((page)->private) static inline void set_page_private(struct page *page, unsigned long private) @@ -316,6 +322,11 @@ static inline void set_page_private(struct page *page, unsigned long private) page->private = private; } +static inline void *folio_get_private(struct folio *folio) +{ + return folio->private; +} + struct page_frag_cache { void * va; #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index db1726b1bc1c..3279c731ee04 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -184,42 +184,52 @@ static inline bool page_cache_get_speculative(struct page *page) } /** - * attach_page_private - Attach private data to a page. - * @page: Page to attach data to. - * @data: Data to attach to page. + * folio_attach_private - Attach private data to a folio. + * @folio: Folio to attach data to. + * @data: Data to attach to folio. * - * Attaching private data to a page increments the page's reference count. - * The data must be detached before the page will be freed. + * Attaching private data to a folio increments the page's reference count. + * The data must be detached before the folio will be freed. */ -static inline void attach_page_private(struct page *page, void *data) +static inline void folio_attach_private(struct folio *folio, void *data) { - get_page(page); - set_page_private(page, (unsigned long)data); - SetPagePrivate(page); + folio_get(folio); + folio->private = data; + folio_set_private(folio); } /** - * detach_page_private - Detach private data from a page. - * @page: Page to detach data from. + * folio_detach_private - Detach private data from a folio. + * @folio: Folio to detach data from. 
* - * Removes the data that was previously attached to the page and decrements + * Removes the data that was previously attached to the folio and decrements * the refcount on the page. * - * Return: Data that was attached to the page. + * Return: Data that was attached to the folio. */ -static inline void *detach_page_private(struct page *page) +static inline void *folio_detach_private(struct folio *folio) { - void *data = (void *)page_private(page); + void *data = folio_get_private(folio); - if (!PagePrivate(page)) + if (!folio_test_private(folio)) return NULL; - ClearPagePrivate(page); - set_page_private(page, 0); - put_page(page); + folio_clear_private(folio); + folio->private = NULL; + folio_put(folio); return data; } +static inline void attach_page_private(struct page *page, void *data) +{ + folio_attach_private(page_folio(page), data); +} + +static inline void *detach_page_private(struct page *page) +{ + return folio_detach_private(page_folio(page)); +} + #ifdef CONFIG_NUMA extern struct page *__page_cache_alloc(gfp_t gfp); #else From patchwork Thu Jul 15 03:34:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378441 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1FFA9C47E48 for ; Thu, 15 Jul 2021 03:47:42 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B9C6260FEB for ; Thu, 15 Jul 2021 03:47:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B9C6260FEB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 187A68D001B; Wed, 14 Jul 2021 23:47:42 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 137D28D000A; Wed, 14 Jul 2021 23:47:42 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F1A8A8D001B; Wed, 14 Jul 2021 23:47:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0049.hostedemail.com [216.40.44.49]) by kanga.kvack.org (Postfix) with ESMTP id CE17F8D000A for ; Wed, 14 Jul 2021 23:47:41 -0400 (EDT) Received: from smtpin35.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id C103C23E48 for ; Thu, 15 Jul 2021 03:47:40 +0000 (UTC) X-FDA: 78363437880.35.2ECDBFB Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf06.hostedemail.com (Postfix) with ESMTP id 7872D801F25D for ; Thu, 15 Jul 2021 03:47:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=bGkCOHXv7RarMhiG1xggwMwUQkMjVX8vOxD9dW3om4g=; b=R9sWIamfE7zJxunZZ/ydeABDxa 
4yQbv0pue98BvXaZy1eFfy0+XF/976cXOp5E8J3JbMHq82r+9bdf7Oisv8190jm8lcsH6bPzSLIFk uxVequUVpgZFtlYNPtuR6z0dYEEERUC9BUYZ2jRtDrefgXx+7N8AZGqDOi3fUzsni8Ze0ycUzSF0f Gq5jch0AoPwDQm7X+VpFV/rE08XI1fgnFoUVKl6Pc3cOJS0dUxkceJGNWBGYy7hKKG6Pfshuce6zH hdrwPmCFa1X33eJ2m+IpOvPV7ULHf55vBT2oTkyqdx9E5078JoKORXctSnxwZ1rN6Z5T09JG67Sow ViqBD/lg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sKN-002unf-AO; Thu, 15 Jul 2021 03:46:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 013/138] mm/filemap: Add folio_index(), folio_file_page() and folio_contains() Date: Thu, 15 Jul 2021 04:34:59 +0100 Message-Id: <20210715033704.692967-14-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 7872D801F25D X-Stat-Signature: tghtekurbjocp699q6rksz3yp4rduwsn Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=R9sWIamf; dmarc=none; spf=none (imf06.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626320860-690524 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: folio_index() is the equivalent of page_index() for folios. folio_file_page() is the equivalent of find_subpage(). folio_contains() is the equivalent of thp_contains(). No changes to generated code. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 56 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 56 insertions(+) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 3279c731ee04..f7c165b5991f 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -386,6 +386,62 @@ static inline bool thp_contains(struct page *head, pgoff_t index) return page_index(head) == (index & ~(thp_nr_pages(head) - 1UL)); } +#define swapcache_index(folio) __page_file_index(&(folio)->page) + +/** + * folio_index - File index of a folio. + * @folio: The folio. + * + * For a folio which is either in the page cache or the swap cache, + * return its index within the address_space it belongs to. If you know + * the page is definitely in the page cache, you can look at the folio's + * index directly. + * + * Return: The index (offset in units of pages) of a folio in its file. + */ +static inline pgoff_t folio_index(struct folio *folio) +{ + if (unlikely(folio_test_swapcache(folio))) + return swapcache_index(folio); + return folio->index; +} + +/** + * folio_file_page - The page for a particular index. + * @folio: The folio which contains this index. + * @index: The index we want to look up. + * + * Sometimes after looking up a folio in the page cache, we need to + * obtain the specific page for an index (eg a page fault). + * + * Return: The page containing the file data for this index. 
+ */ +static inline struct page *folio_file_page(struct folio *folio, pgoff_t index) +{ + /* HugeTLBfs indexes the page cache in units of hpage_size */ + if (folio_test_hugetlb(folio)) + return &folio->page; + return folio_page(folio, index & (folio_nr_pages(folio) - 1)); +} + +/** + * folio_contains - Does this folio contain this index? + * @folio: The folio. + * @index: The page index within the file. + * + * Context: The caller should have the page locked in order to prevent + * (eg) shmem from moving the page between the page cache and swap cache + * and changing its index in the middle of the operation. + * Return: true or false. + */ +static inline bool folio_contains(struct folio *folio, pgoff_t index) +{ + /* HugeTLBfs indexes the page cache in units of hpage_size */ + if (folio_test_hugetlb(folio)) + return folio->index == index; + return index - folio_index(folio) < folio_nr_pages(folio); +} + /* * Given the page we found in the page cache, return the page corresponding * to this index in the file From patchwork Thu Jul 15 03:35:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378451 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B795CC47E48 for ; Thu, 15 Jul 2021 03:48:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 495B660FF3 for ; Thu, 15 Jul 2021 03:48:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 495B660FF3 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 975508D001C; Wed, 14 Jul 2021 23:48:26 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9253A8D000A; Wed, 14 Jul 2021 23:48:26 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7C75B8D001C; Wed, 14 Jul 2021 23:48:26 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0254.hostedemail.com [216.40.44.254]) by kanga.kvack.org (Postfix) with ESMTP id 5AEC98D000A for ; Wed, 14 Jul 2021 23:48:26 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 4AFAB8248047 for ; Thu, 15 Jul 2021 03:48:25 +0000 (UTC) X-FDA: 78363439770.07.AB4339B Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf18.hostedemail.com (Postfix) with ESMTP id 0D9DF400208F for ; Thu, 15 Jul 2021 03:48:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=vawLqeT0vEJiZnrPwSOeCWDUYdE4eNiMl7PF9VneWmI=; b=lFD6/ZEMiG55WeBetP3ildreex 
LGp8SFEBHJvf4vE/sj5ouHiYz/dUbPjMZDW5DEd7XEUdeqb+7uvQYu4EiYA9yfUs+kgjK8jikOUG6 0Pe/jY5T6io6BuqUJD0DZdTsMtkrZsYHlvWQCqHwFLRqZJQnH5zAiajP5s+nN6roM0FvNg4Ad1DQh q+0yAcYzxMIR/Qy3nQZTyijlF8hVFUeEVbFXzRLID6mInBFz6dpjhAlYmJ8m0KL5X5NgTQz4P6q4a DeoS7v1w86CgXvQoQTV7zneGL3Iq4wL5MlRBs5wlfnu0Mpc+FyA3pVZXHm1KYD+RrTia2XGLpXqT/ ql7HAUWw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sKv-002uqF-Mr; Thu, 15 Jul 2021 03:47:23 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 014/138] mm/filemap: Add folio_next_index() Date: Thu, 15 Jul 2021 04:35:00 +0100 Message-Id: <20210715033704.692967-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 0D9DF400208F X-Stat-Signature: zyp539hs43k6943rmg9fdarmgwhk8ywf Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="lFD6/ZEM"; dmarc=none; spf=none (imf18.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626320904-897539 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This helper returns the page index of the next folio in the file (ie the end of this folio, plus one). No changes to generated code. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index f7c165b5991f..bd0e7e91bfd4 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -406,6 +406,17 @@ static inline pgoff_t folio_index(struct folio *folio) return folio->index; } +/** + * folio_next_index - Get the index of the next folio. + * @folio: The current folio. + * + * Return: The index of the folio which follows this folio in the file. + */ +static inline pgoff_t folio_next_index(struct folio *folio) +{ + return folio->index + folio_nr_pages(folio); +} + /** * folio_file_page - The page for a particular index. * @folio: The folio which contains this index. 
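A quick sketch of how the index helpers from the last two patches compose. This is illustrative only, not part of the series: pages_present() is a made-up example function, and filemap_get_folio() stands in for whatever page-cache lookup the caller already uses.

	/*
	 * Sketch: count how many pages of [index, end) are in the page cache.
	 * Assumes a lookup helper that returns a refcounted folio covering
	 * @index, or NULL if nothing is cached there.
	 */
	static unsigned long pages_present(struct address_space *mapping,
					   pgoff_t index, pgoff_t end)
	{
		unsigned long present = 0;

		while (index < end) {
			struct folio *folio = filemap_get_folio(mapping, index);

			if (!folio) {
				index++;	/* hole: nothing cached here */
				continue;
			}
			/* The lookup must hand back a folio covering @index */
			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
			/* A caller needing the exact page for @index (eg a
			 * fault handler) would use folio_file_page(). */
			present += min_t(unsigned long, folio_nr_pages(folio),
					 end - index);
			index = folio_next_index(folio);
			folio_put(folio);
		}
		return present;
	}

The point of folio_next_index() is the loop advance: callers step past however many pages the folio spans instead of open-coding index++, so the same loop is correct for order-0 pages and large folios alike.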
From patchwork Thu Jul 15 03:35:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378453 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BC6BC07E96 for ; Thu, 15 Jul 2021 03:49:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3AEE2611F1 for ; Thu, 15 Jul 2021 03:49:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3AEE2611F1 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 889AE8D001D; Wed, 14 Jul 2021 23:49:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 860618D000A; Wed, 14 Jul 2021 23:49:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 728908D001D; Wed, 14 Jul 2021 23:49:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0098.hostedemail.com [216.40.44.98]) by kanga.kvack.org (Postfix) with ESMTP id 503728D000A for ; Wed, 14 Jul 2021 23:49:41 -0400 (EDT) Received: from smtpin03.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 373CF183956EB for ; Thu, 15 Jul 2021 03:49:40 +0000 (UTC) X-FDA: 78363442920.03.76319F7 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf25.hostedemail.com (Postfix) with ESMTP id E7B7DB000299 for ; Thu, 15 Jul 2021 03:49:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=jhieiK0e9kwbSC0V6q3aiSMkZCoboL+CZsQ228vwdU4=; b=LxZuI/QR9IuYa+LwfxHpkY4YcQ widkRsmj9NmBwuwF6rFJIR3GxkmlCzXeREh/EQhw0TStgk2RLKYSvl8/957+HRkrKzm6f1Uwz4DLO OQ4DiLwKuhTx3+4gq6vKHssVbm0yQRlflgZQXaH7wuJU2oqc37I9nWk6cZ+T67anJO6YFjfF1S2zY hjeykIoDsbGJMgrbRUWIsNbd3R8+2DKpoHnX9X/+d0fgTASrAbxlxj/4sOO8PgqczVGZZP6SUUZhq p8SJfmpQp8B6HAKDfrq9Rg+PAW/imeo6i8PsWWTilQC4MIj1JXNT7JWakD0+DVsofQZjhajOf4SLA PcK618FQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sLi-002uvu-5t; Thu, 15 Jul 2021 03:48:00 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . 
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 015/138] mm/filemap: Add folio_pos() and folio_file_pos() Date: Thu, 15 Jul 2021 04:35:01 +0100 Message-Id: <20210715033704.692967-16-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: E7B7DB000299 X-Stat-Signature: bjkcyrmgexwgkzzcrn4m19d5xsetzrqg Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="LxZuI/QR"; dmarc=none; spf=none (imf25.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626320979-528100 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These are just wrappers around page_offset() and page_file_offset() respectively. No change to generated code. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells --- include/linux/pagemap.h | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index bd0e7e91bfd4..aa71fa82d6be 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -562,6 +562,27 @@ static inline loff_t page_file_offset(struct page *page) return ((loff_t)page_index(page)) << PAGE_SHIFT; } +/** + * folio_pos - Returns the byte position of this folio in its file. + * @folio: The folio. + */ +static inline loff_t folio_pos(struct folio *folio) +{ + return page_offset(&folio->page); +} + +/** + * folio_file_pos - Returns the byte position of this folio in its file. + * @folio: The folio. + * + * This differs from folio_pos() for folios which belong to a swap file. + * NFS is the only filesystem today which needs to use folio_file_pos(). 
+ */ +static inline loff_t folio_file_pos(struct folio *folio) +{ + return page_file_offset(&folio->page); +} + extern pgoff_t linear_hugepage_index(struct vm_area_struct *vma, unsigned long address); From patchwork Thu Jul 15 03:35:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378455 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7A23DC07E96 for ; Thu, 15 Jul 2021 03:50:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2017261260 for ; Thu, 15 Jul 2021 03:50:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2017261260 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 728D88D001E; Wed, 14 Jul 2021 23:50:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6D8908D000A; Wed, 14 Jul 2021 23:50:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 552828D001E; Wed, 14 Jul 2021 23:50:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 2C7358D000A for ; Wed, 14 Jul 2021 23:50:38 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 15D8E8248047 for ; Thu, 15 Jul 2021 03:50:37 +0000 (UTC) X-FDA: 78363445314.21.223C31F Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf03.hostedemail.com (Postfix) with ESMTP id B29A03000185 for ; Thu, 15 Jul 2021 03:50:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=BFw0WPKenncuZcjP2zbo1C1YnTG5ZSQTWKRn0z6J/O8=; b=VVZjwhdTKE43Q+N4iYbnT+b4gF Mx1OOfwEQyE1hKhsIZHvCTlO9Wu2Ceaxr7fTkkWApW/IG5kQCI1YgPjY858bRDZBZ2cZLQTDzCWuw Y4wzrECtbl/KVXmPLW0sMuTbCogjcx55LRjHTJMFKrXPgRvdWKksqGNLv+aEkO7A0ySxlmpiO73Zs oADYBZHrAbWuTSwYSYRnDC99cRzMJDgGbnz6H73CFEV12j7JEtwf5rdEQ5hiGXADRrFVeKEEUORx6 Tmm2/PTbpRzOUWgEkFoXZJIGh2II04JZ2NjkWReN0JSEgFoqCiiMgI0jBsJf74nbIYiS8W+jXrGSD sQn8KnYA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sMO-002uyS-HQ; Thu, 15 Jul 2021 03:48:45 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . 
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 016/138] mm/util: Add folio_mapping() and folio_file_mapping() Date: Thu, 15 Jul 2021 04:35:02 +0100 Message-Id: <20210715033704.692967-17-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: B29A03000185 X-Stat-Signature: gi9er3ofzx4xsip4g4cg9oht51qk3q1n Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=VVZjwhdT; spf=none (imf03.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626321036-820513 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These are the folio equivalent of page_mapping() and page_file_mapping(). Add an out-of-line page_mapping() wrapper around folio_mapping() in order to prevent the page_folio() call from bloating every caller of page_mapping(). Adjust page_file_mapping() and page_mapping_file() to use folios internally. Rename __page_file_mapping() to swapcache_mapping() and change it to take a folio. This ends up saving 122 bytes of text overall. folio_mapping() is 45 bytes shorter than page_mapping() was, but the new page_mapping() wrapper is 30 bytes. The major reduction is a few bytes less in dozens of nfs functions (which call page_file_mapping()). Most of these appear to be a slight change in gcc's register allocation decisions, which allow: 48 8b 56 08 mov 0x8(%rsi),%rdx 48 8d 42 ff lea -0x1(%rdx),%rax 83 e2 01 and $0x1,%edx 48 0f 44 c6 cmove %rsi,%rax to become: 48 8b 46 08 mov 0x8(%rsi),%rax 48 8d 78 ff lea -0x1(%rax),%rdi a8 01 test $0x1,%al 48 0f 44 fe cmove %rsi,%rdi for a reduction of a single byte. Once the NFS client is converted to use folios, this entire sequence will disappear. Also add folio_mapping() documentation. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells --- Documentation/core-api/mm-api.rst | 2 ++ include/linux/mm.h | 14 ------------- include/linux/pagemap.h | 35 +++++++++++++++++++++++++++++-- include/linux/swap.h | 6 ++++++ mm/Makefile | 2 +- mm/folio-compat.c | 13 ++++++++++++ mm/swapfile.c | 8 +++---- mm/util.c | 30 +++++++++++++++----------- 8 files changed, 77 insertions(+), 33 deletions(-) create mode 100644 mm/folio-compat.c diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst index 5c459ee2acce..dcce6605947a 100644 --- a/Documentation/core-api/mm-api.rst +++ b/Documentation/core-api/mm-api.rst @@ -100,3 +100,5 @@ More Memory Management Functions :internal: .. kernel-doc:: include/linux/page_ref.h .. kernel-doc:: include/linux/mmzone.h +.. 
kernel-doc:: mm/util.c + :functions: folio_mapping diff --git a/include/linux/mm.h b/include/linux/mm.h index 788fbc4cde0c..9d28f5b2e983 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1753,19 +1753,6 @@ void page_address_init(void); extern void *page_rmapping(struct page *page); extern struct anon_vma *page_anon_vma(struct page *page); -extern struct address_space *page_mapping(struct page *page); - -extern struct address_space *__page_file_mapping(struct page *); - -static inline -struct address_space *page_file_mapping(struct page *page) -{ - if (unlikely(PageSwapCache(page))) - return __page_file_mapping(page); - - return page->mapping; -} - extern pgoff_t __page_file_index(struct page *page); /* @@ -1780,7 +1767,6 @@ static inline pgoff_t page_index(struct page *page) } bool page_mapped(struct page *page); -struct address_space *page_mapping(struct page *page); /* * Return true only if the page has been allocated with diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index aa71fa82d6be..a0925a89ba11 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -162,14 +162,45 @@ static inline void filemap_nr_thps_dec(struct address_space *mapping) void release_pages(struct page **pages, int nr); +struct address_space *page_mapping(struct page *); +struct address_space *folio_mapping(struct folio *); +struct address_space *swapcache_mapping(struct folio *); + +/** + * folio_file_mapping - Find the mapping this folio belongs to. + * @folio: The folio. + * + * For folios which are in the page cache, return the mapping that this + * page belongs to. Folios in the swap cache return the mapping of the + * swap file or swap device where the data is stored. This is different + * from the mapping returned by folio_mapping(). The only reason to + * use it is if, like NFS, you return 0 from ->swap_activate. + * + * Do not call this for folios which aren't in the page cache or swap cache.
+ */ +static inline struct address_space *folio_file_mapping(struct folio *folio) +{ + if (unlikely(folio_test_swapcache(folio))) + return swapcache_mapping(folio); + + return folio->mapping; +} + +static inline struct address_space *page_file_mapping(struct page *page) +{ + return folio_file_mapping(page_folio(page)); +} + /* * For file cache pages, return the address_space, otherwise return NULL */ static inline struct address_space *page_mapping_file(struct page *page) { - if (unlikely(PageSwapCache(page))) + struct folio *folio = page_folio(page); + + if (unlikely(folio_test_swapcache(folio))) return NULL; - return page_mapping(page); + return folio_mapping(folio); } static inline bool page_cache_add_speculative(struct page *page, int count) diff --git a/include/linux/swap.h b/include/linux/swap.h index 6f5a43251593..3d3d85354026 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -320,6 +320,12 @@ struct vma_swap_readahead { #endif }; +static inline swp_entry_t folio_swap_entry(struct folio *folio) +{ + swp_entry_t entry = { .val = page_private(&folio->page) }; + return entry; +} + /* linux/mm/workingset.c */ void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages); void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg); diff --git a/mm/Makefile b/mm/Makefile index e3436741d539..d7488bcbbb2b 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -46,7 +46,7 @@ mmu-$(CONFIG_MMU) += process_vm_access.o endif obj-y := filemap.o mempool.o oom_kill.o fadvise.o \ - maccess.o page-writeback.o \ + maccess.o page-writeback.o folio-compat.o \ readahead.o swap.o truncate.o vmscan.o shmem.o \ util.o mmzone.o vmstat.o backing-dev.o \ mm_init.o percpu.o slab_common.o \ diff --git a/mm/folio-compat.c b/mm/folio-compat.c new file mode 100644 index 000000000000..5e107aa30a62 --- /dev/null +++ b/mm/folio-compat.c @@ -0,0 +1,13 @@ +/* + * Compatibility functions which bloat the callers too much to make inline. + * All of the callers of these functions should be converted to use folios + * eventually. + */ + +#include + +struct address_space *page_mapping(struct page *page) +{ + return folio_mapping(page_folio(page)); +} +EXPORT_SYMBOL(page_mapping); diff --git a/mm/swapfile.c b/mm/swapfile.c index 1e07d1c776f2..3a6c094310da 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -3528,13 +3528,13 @@ struct swap_info_struct *page_swap_info(struct page *page) } /* - * out-of-line __page_file_ methods to avoid include hell. + * out-of-line methods to avoid include hell. */ -struct address_space *__page_file_mapping(struct page *page) +struct address_space *swapcache_mapping(struct folio *folio) { - return page_swap_info(page)->swap_file->f_mapping; + return page_swap_info(&folio->page)->swap_file->f_mapping; } -EXPORT_SYMBOL_GPL(__page_file_mapping); +EXPORT_SYMBOL_GPL(swapcache_mapping); pgoff_t __page_file_index(struct page *page) { diff --git a/mm/util.c b/mm/util.c index 9043d03750a7..1cde6218d6d1 100644 --- a/mm/util.c +++ b/mm/util.c @@ -686,30 +686,36 @@ struct anon_vma *page_anon_vma(struct page *page) return __page_rmapping(page); } -struct address_space *page_mapping(struct page *page) +/** + * folio_mapping - Find the mapping where this folio is stored. + * @folio: The folio. + * + * For folios which are in the page cache, return the mapping that this + * page belongs to. Folios in the swap cache return the swap mapping + * this page is stored in (which is different from the mapping for the + * swap file or swap device where the data is stored). 
+ * + * You can call this for folios which aren't in the swap cache or page + * cache and it will return NULL. + */ +struct address_space *folio_mapping(struct folio *folio) { struct address_space *mapping; - page = compound_head(page); - /* This happens if someone calls flush_dcache_page on slab page */ - if (unlikely(PageSlab(page))) + if (unlikely(folio_test_slab(folio))) return NULL; - if (unlikely(PageSwapCache(page))) { - swp_entry_t entry; - - entry.val = page_private(page); - return swap_address_space(entry); - } + if (unlikely(folio_test_swapcache(folio))) + return swap_address_space(folio_swap_entry(folio)); - mapping = page->mapping; + mapping = folio->mapping; if ((unsigned long)mapping & PAGE_MAPPING_ANON) return NULL; return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS); } -EXPORT_SYMBOL(page_mapping); +EXPORT_SYMBOL(folio_mapping); /* Slow path of page_mapcount() for compound pages */ int __page_mapcount(struct page *page) From patchwork Thu Jul 15 03:35:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378457 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4FDE4C07E96 for ; Thu, 15 Jul 2021 03:52:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 02F4C61360 for ; Thu, 15 Jul 2021 03:52:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 02F4C61360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 551708D001F; Wed, 14 Jul 2021 23:52:12 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 528418D000A; Wed, 14 Jul 2021 23:52:12 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3C9B48D001F; Wed, 14 Jul 2021 23:52:12 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0164.hostedemail.com [216.40.44.164]) by kanga.kvack.org (Postfix) with ESMTP id 102578D000A for ; Wed, 14 Jul 2021 23:52:12 -0400 (EDT) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id F0F3423E4C for ; Thu, 15 Jul 2021 03:52:10 +0000 (UTC) X-FDA: 78363449220.12.2208CBB Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf29.hostedemail.com (Postfix) with ESMTP id A9F2390001BD for ; Thu, 15 Jul 2021 03:52:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=yJjcL8JeT4ttM0WfFdjzDq9LTGoepY5vhGwfSSvIaTY=; b=DiLMs0A5FFEsfF4hgDykRoBmm7 taepqtiDtpuKFLEXllIuWLXq//zU4sr4XDmW8Ax+/UxRfjCTzySv65UmzausBOCeKGwGbouZS8n/z 
kWZupXDUO+UIOujntqsGAhDVnFtPeK7+l/jqPJHIvsQ889j4bseBCNmhhCu3IcWQf/nVhuJHXxSfJ 3isEj0IEx4+I5l7yVNfsucMFPC/2gIrVbtC5A/h7tDpXs6YmuhO2I0FfoNFViDWHAVRcO8R2X38If qJaoMXTTvDIJblIuLEvt3AuzXzrbOzJfw8rxJlmSLaEiAvmIESkPRdfsAvnVZYxFWcnT55Yha2Ppd iXBVchdA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sNW-002v1u-Hs; Thu, 15 Jul 2021 03:49:56 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , William Kucharski , David Howells Subject: [PATCH v14 017/138] mm/filemap: Add folio_unlock() Date: Thu, 15 Jul 2021 04:35:03 +0100 Message-Id: <20210715033704.692967-18-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DiLMs0A5; spf=none (imf29.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: A9F2390001BD X-Stat-Signature: eoimt3euebw8y5743xrzis8qhtbjgnk6 X-HE-Tag: 1626321130-382662 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert unlock_page() to call folio_unlock(). By using a folio we avoid a call to compound_head(). This shortens the function from 39 bytes to 25 and removes 4 instructions on x86-64. Because we still have unlock_page(), it's a net increase of 16 bytes of text for the kernel as a whole, but any path that uses folio_unlock() will execute 4 fewer instructions. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport Acked-by: Vlastimil Babka --- include/linux/pagemap.h | 3 ++- mm/filemap.c | 29 ++++++++++++----------------- mm/folio-compat.c | 6 ++++++ 3 files changed, 20 insertions(+), 18 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a0925a89ba11..a13edc7a2916 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -658,7 +658,8 @@ extern int __lock_page_killable(struct page *page); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags); -extern void unlock_page(struct page *page); +void unlock_page(struct page *page); +void folio_unlock(struct folio *folio); /* * Return true if the page was successfully locked diff --git a/mm/filemap.c b/mm/filemap.c index 634adeacc4c1..1af67ef94e4c 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1435,29 +1435,24 @@ static inline bool clear_bit_unlock_is_negative_byte(long nr, volatile void *mem #endif /** - * unlock_page - unlock a locked page - * @page: the page + * folio_unlock - Unlock a locked folio. + * @folio: The folio. * - * Unlocks the page and wakes up sleepers in wait_on_page_locked(). - * Also wakes sleepers in wait_on_page_writeback() because the wakeup - * mechanism between PageLocked pages and PageWriteback pages is shared. - * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep. 
+ * Unlocks the folio and wakes up any thread sleeping on the page lock. * - * Note that this depends on PG_waiters being the sign bit in the byte - * that contains PG_locked - thus the BUILD_BUG_ON(). That allows us to - * clear the PG_locked bit and test PG_waiters at the same time fairly - * portably (architectures that do LL/SC can test any bit, while x86 can - * test the sign bit). + * Context: May be called from interrupt or process context. May not be + * called from NMI context. */ -void unlock_page(struct page *page) +void folio_unlock(struct folio *folio) { + /* Bit 7 allows x86 to check the byte's sign bit */ BUILD_BUG_ON(PG_waiters != 7); - page = compound_head(page); - VM_BUG_ON_PAGE(!PageLocked(page), page); - if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags)) - wake_up_page_bit(page, PG_locked); + BUILD_BUG_ON(PG_locked > 7); + VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0))) + wake_up_page_bit(&folio->page, PG_locked); } -EXPORT_SYMBOL(unlock_page); +EXPORT_SYMBOL(folio_unlock); /** * end_page_private_2 - Clear PG_private_2 and release any waiters diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 5e107aa30a62..91b3d00a92f7 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -11,3 +11,9 @@ struct address_space *page_mapping(struct page *page) return folio_mapping(page_folio(page)); } EXPORT_SYMBOL(page_mapping); + +void unlock_page(struct page *page) +{ + return folio_unlock(page_folio(page)); +} +EXPORT_SYMBOL(unlock_page); From patchwork Thu Jul 15 03:35:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378471 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 545E4C47E4B for ; Thu, 15 Jul 2021 03:53:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F103C6102A for ; Thu, 15 Jul 2021 03:53:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F103C6102A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 43EF88D0020; Wed, 14 Jul 2021 23:53:25 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4159E8D000A; Wed, 14 Jul 2021 23:53:25 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2B7DF8D0020; Wed, 14 Jul 2021 23:53:25 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0033.hostedemail.com [216.40.44.33]) by kanga.kvack.org (Postfix) with ESMTP id 086F48D000A for ; Wed, 14 Jul 2021 23:53:25 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id E045B8248047 for ; Thu, 15 Jul 2021 03:53:23 +0000 (UTC) X-FDA: 78363452286.14.4BDE514 Received: from 
casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id 9DA3E90000A5 for ; Thu, 15 Jul 2021 03:53:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=jaOH2K1i0UCcCHDJat4DXALtWZjmQx4kJWboCw70cWs=; b=lt4vCVlw0S7uuQ78g/lAxdVJaf Wov74cv3q+YJqfUI+kcnldSH8RSeXI5Ybh035+BMDSixxX8ffPuM9nvjtOA7bBPQ02ZbxS6FUnW9G tO/QlWszNc+JDETEJlwv3egWWe9qI86P3EKuIwhoc0iNLm+9LdL3GmmFAsTg/2Rm4C7a5kCq3udlY LAsoF9l3xaXZyFABRVdTPQ4yf3OaewLoZuLwRL2lbxwLBkoWGn5FuSoF+81i2DS0QIoAYLHe8quk2 wjwXmTgjhp05h+gWP7hMagN7fr6uY/9eibqm0mh/Uc9WUXY3QoCnfzC4dOOBXA+2kkiz2wDd8+w8q vDdaGwIg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sOn-002v7V-If; Thu, 15 Jul 2021 03:51:36 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 018/138] mm/filemap: Add folio_lock() Date: Thu, 15 Jul 2021 04:35:04 +0100 Message-Id: <20210715033704.692967-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=lt4vCVlw; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 9DA3E90000A5 X-Stat-Signature: any71fz6by6tgospudpgwfopzn9gqek4 X-HE-Tag: 1626321203-902349 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is like lock_page() but for use by callers who know they have a folio. Convert __lock_page() to be __folio_lock(). This saves one call to compound_head() per contended call to lock_page(). Saves 455 bytes of text; mostly from improved register allocation and inlining decisions. __folio_lock is 59 bytes while __lock_page was 79. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. 
Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 24 +++++++++++++++++++----- mm/filemap.c | 29 +++++++++++++++-------------- 2 files changed, 34 insertions(+), 19 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a13edc7a2916..c3673c55125b 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -653,7 +653,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, return true; } -extern void __lock_page(struct page *page); +void __folio_lock(struct folio *folio); extern int __lock_page_killable(struct page *page); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, @@ -661,13 +661,24 @@ extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, void unlock_page(struct page *page); void folio_unlock(struct folio *folio); +static inline bool folio_trylock(struct folio *folio) +{ + return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0))); +} + /* * Return true if the page was successfully locked */ static inline int trylock_page(struct page *page) { - page = compound_head(page); - return (likely(!test_and_set_bit_lock(PG_locked, &page->flags))); + return folio_trylock(page_folio(page)); +} + +static inline void folio_lock(struct folio *folio) +{ + might_sleep(); + if (!folio_trylock(folio)) + __folio_lock(folio); } /* @@ -675,9 +686,12 @@ static inline int trylock_page(struct page *page) */ static inline void lock_page(struct page *page) { + struct folio *folio; might_sleep(); - if (!trylock_page(page)) - __lock_page(page); + + folio = page_folio(page); + if (!folio_trylock(folio)) + __folio_lock(folio); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 1af67ef94e4c..95f89656f126 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1187,7 +1187,7 @@ static void wake_up_page(struct page *page, int bit) */ enum behavior { EXCLUSIVE, /* Hold ref to page and take the bit when woken, like - * __lock_page() waiting on then setting PG_locked. + * __folio_lock() waiting on then setting PG_locked. */ SHARED, /* Hold ref to page and check the bit when woken, like * wait_on_page_writeback() waiting on PG_writeback. @@ -1578,17 +1578,16 @@ void page_endio(struct page *page, bool is_write, int err) EXPORT_SYMBOL_GPL(page_endio); /** - * __lock_page - get a lock on the page, assuming we need to sleep to get it - * @__page: the page to lock + * __folio_lock - Get a lock on the folio, assuming we need to sleep to get it. 
+ * @folio: The folio to lock */ -void __lock_page(struct page *__page) +void __folio_lock(struct folio *folio) { - struct page *page = compound_head(__page); - wait_queue_head_t *q = page_waitqueue(page); - wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, + wait_queue_head_t *q = page_waitqueue(&folio->page); + wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE, EXCLUSIVE); } -EXPORT_SYMBOL(__lock_page); +EXPORT_SYMBOL(__folio_lock); int __lock_page_killable(struct page *__page) { @@ -1663,10 +1662,10 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm, return 0; } } else { - __lock_page(page); + __folio_lock(page_folio(page)); } - return 1; + return 1; } /** @@ -2837,7 +2836,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start, static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, struct file **fpin) { - if (trylock_page(page)) + struct folio *folio = page_folio(page); + + if (folio_trylock(folio)) return 1; /* @@ -2850,7 +2851,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, *fpin = maybe_unlock_mmap_for_io(vmf, *fpin); if (vmf->flags & FAULT_FLAG_KILLABLE) { - if (__lock_page_killable(page)) { + if (__lock_page_killable(&folio->page)) { /* * We didn't have the right flags to drop the mmap_lock, * but all fault_handlers only check for fatal signals @@ -2862,11 +2863,11 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, return 0; } } else - __lock_page(page); + __folio_lock(folio); + return 1; } - /* * Synchronous readahead happens when we don't even find a page in the page * cache at all. We don't want to perform IO under the mmap sem, so if we have From patchwork Thu Jul 15 03:35:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378473 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D114C07E96 for ; Thu, 15 Jul 2021 03:54:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 19BEA601FC for ; Thu, 15 Jul 2021 03:54:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 19BEA601FC Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6D5338D0021; Wed, 14 Jul 2021 23:54:37 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 684988D000A; Wed, 14 Jul 2021 23:54:37 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5266D8D0021; Wed, 14 Jul 2021 23:54:37 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0018.hostedemail.com [216.40.44.18]) by kanga.kvack.org (Postfix) with ESMTP id 2F4368D000A for ; Wed, 14 Jul 2021 23:54:37 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com 
[10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 1741E8248047 for ; Thu, 15 Jul 2021 03:54:36 +0000 (UTC) X-FDA: 78363455352.30.BF86D26 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id C165D6001996 for ; Thu, 15 Jul 2021 03:54:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MWYSNZiHYQiAgtd69VARn5iqiAkOypWCG967HpIKVR4=; b=S8KgRfqGyeVpjVF4yvaIjQoRE0 dijKJD0cg1xdbOqGcR55M9zdjQLM7MwIekNMbJczvuxp368AWQ9kcsQflwz6eEeS2riH3W7AE2qyK v/xWeFUsNEPHtewINXUpnhckB4U58sobap8cVnvNpk8AAnjjrmlCjz+wrSDvN4zEv5VjpiMc7ApJL 18x3N9PxykVrgLVlHnajKWUlr/Kj+bCOXzAOJw9KTz0JTZzuvcUPloAOPJpTXH342XgL/Vd/roFim pYZL4VHYVkLK6UpDG4cjJJb1hruJjCIiVTUOWJuTW0JQPf19ZHqGGP3hK7pkgv+tEUzfa5Q69hVyk dmJBXKvQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sQC-002vBC-O6; Thu, 15 Jul 2021 03:52:56 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski Subject: [PATCH v14 019/138] mm/filemap: Add folio_lock_killable() Date: Thu, 15 Jul 2021 04:35:05 +0100 Message-Id: <20210715033704.692967-20-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: C165D6001996 X-Stat-Signature: qyaoznebowrgcgaij94rkirq11s8hxf9 Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=S8KgRfqG; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626321275-972136 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is like lock_page_killable() but for use by callers who know they have a folio. Convert __lock_page_killable() to be __folio_lock_killable(). This saves one call to compound_head() per contended call to lock_page_killable(). __folio_lock_killable() is 19 bytes smaller than __lock_page_killable() was. filemap_fault() shrinks by 74 bytes and __lock_page_or_retry() shrinks by 71 bytes. That's a total of 164 bytes of text saved. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. 
Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Acked-by: Mike Rapoport Reviewed-by: David Howells --- include/linux/pagemap.h | 15 ++++++++++----- mm/filemap.c | 17 +++++++++-------- 2 files changed, 19 insertions(+), 13 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index c3673c55125b..88727c74e059 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -654,7 +654,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, } void __folio_lock(struct folio *folio); -extern int __lock_page_killable(struct page *page); +int __folio_lock_killable(struct folio *folio); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags); @@ -694,6 +694,14 @@ static inline void lock_page(struct page *page) __folio_lock(folio); } +static inline int folio_lock_killable(struct folio *folio) +{ + might_sleep(); + if (!folio_trylock(folio)) + return __folio_lock_killable(folio); + return 0; +} + /* * lock_page_killable is like lock_page but can be interrupted by fatal * signals. It returns 0 if it locked the page and -EINTR if it was @@ -701,10 +709,7 @@ static inline void lock_page(struct page *page) */ static inline int lock_page_killable(struct page *page) { - might_sleep(); - if (!trylock_page(page)) - return __lock_page_killable(page); - return 0; + return folio_lock_killable(page_folio(page)); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 95f89656f126..962db5c38cd7 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1589,14 +1589,13 @@ void __folio_lock(struct folio *folio) } EXPORT_SYMBOL(__folio_lock); -int __lock_page_killable(struct page *__page) +int __folio_lock_killable(struct folio *folio) { - struct page *page = compound_head(__page); - wait_queue_head_t *q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, + wait_queue_head_t *q = page_waitqueue(&folio->page); + return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE, EXCLUSIVE); } -EXPORT_SYMBOL_GPL(__lock_page_killable); +EXPORT_SYMBOL_GPL(__folio_lock_killable); int __lock_page_async(struct page *page, struct wait_page_queue *wait) { @@ -1638,6 +1637,8 @@ int __lock_page_async(struct page *page, struct wait_page_queue *wait) int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags) { + struct folio *folio = page_folio(page); + if (fault_flag_allow_retry_first(flags)) { /* * CAUTION! 
In this case, mmap_lock is not released @@ -1656,13 +1657,13 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm, if (flags & FAULT_FLAG_KILLABLE) { int ret; - ret = __lock_page_killable(page); + ret = __folio_lock_killable(folio); if (ret) { mmap_read_unlock(mm); return 0; } } else { - __folio_lock(page_folio(page)); + __folio_lock(folio); } return 1; @@ -2851,7 +2852,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, *fpin = maybe_unlock_mmap_for_io(vmf, *fpin); if (vmf->flags & FAULT_FLAG_KILLABLE) { - if (__lock_page_killable(&folio->page)) { + if (__folio_lock_killable(folio)) { /* * We didn't have the right flags to drop the mmap_lock, * but all fault_handlers only check for fatal signals From patchwork Thu Jul 15 03:35:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378475 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2028C47E48 for ; Thu, 15 Jul 2021 03:54:58 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8FBE461289 for ; Thu, 15 Jul 2021 03:54:58 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8FBE461289 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E282B8D0022; Wed, 14 Jul 2021 23:54:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DD69B8D000A; Wed, 14 Jul 2021 23:54:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C784A8D0022; Wed, 14 Jul 2021 23:54:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0169.hostedemail.com [216.40.44.169]) by kanga.kvack.org (Postfix) with ESMTP id A521B8D000A for ; Wed, 14 Jul 2021 23:54:58 -0400 (EDT) Received: from smtpin35.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 8DF218248047 for ; Thu, 15 Jul 2021 03:54:57 +0000 (UTC) X-FDA: 78363456234.35.0ECCF4C Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf06.hostedemail.com (Postfix) with ESMTP id 4CD55801F25A for ; Thu, 15 Jul 2021 03:54:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Me7+Wbns9PvazlfDbJDzSquWzNvF9/VtGMA22JOaiGk=; b=f1kA3HLAo3ZVGSEy5b+eOCzJ/t bbaBXhpiyCF0jDX4N4QUZ6lpVFpFvK2Rav19eHY0C8ybWm/DW1IKvfnPc133qr2NaJfOPYmuSrw3U 5mr3A64Pq456+f7a//E63qtLRzI04DCS7zq0Lgq//isqxQ8IIU7N8J9Z3ypHpokX54alEZv0aSLk2 Av6Bt+yQn/3+RrIRIli2poezYA2KRRwAEgOZowLDNhGxotYv5BToliRuHBhqEd07ZF9BrbDksq2B/ WwGMhaQO7abAOlnAdf16trFOFReIeTm/nvJdBBbBfhlFnNwkeoJX0mdyLREcp2U8y1xPUg6OCplyW 
c/qiUMYA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sRY-002vEv-Km; Thu, 15 Jul 2021 03:54:13 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 020/138] mm/filemap: Add __folio_lock_async() Date: Thu, 15 Jul 2021 04:35:06 +0100 Message-Id: <20210715033704.692967-21-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=f1kA3HLA; spf=none (imf06.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 4CD55801F25A X-Stat-Signature: 6dgy4t1miyttshxfn9pkhc4kknfbyqyy X-HE-Tag: 1626321297-207391 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: There aren't any actual callers of lock_page_async(), so remove it. Convert filemap_update_page() to call __folio_lock_async(). __folio_lock_async() is 21 bytes smaller than __lock_page_async(), but the real savings come from using a folio in filemap_update_page(), shrinking it from 515 bytes to 404 bytes, saving 111 bytes. The text shrinks by 132 bytes in total. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- fs/io_uring.c | 2 +- include/linux/pagemap.h | 17 ----------------- mm/filemap.c | 31 ++++++++++++++++--------------- 3 files changed, 17 insertions(+), 33 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index d94fb5835a20..7e30c7c361e6 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -3149,7 +3149,7 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) } /* - * This is our waitqueue callback handler, registered through lock_page_async() + * This is our waitqueue callback handler, registered through __folio_lock_async() * when we initially tried to do the IO with the iocb armed our waitqueue. * This gets called when the page is unlocked, and we generally expect that to * happen when the page IO is completed and the page is now uptodate. This will diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 88727c74e059..6f631a3e42dc 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -655,7 +655,6 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, void __folio_lock(struct folio *folio); int __folio_lock_killable(struct folio *folio); -extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags); void unlock_page(struct page *page); @@ -712,22 +711,6 @@ static inline int lock_page_killable(struct page *page) return folio_lock_killable(page_folio(page)); } -/* - * lock_page_async - Lock the page, unless this would block. If the page - * is already locked, then queue a callback when the page becomes unlocked.
- * This callback can then retry the operation. - * - * Returns 0 if the page is locked successfully, or -EIOCBQUEUED if the page - * was already locked and the callback defined in 'wait' was queued. - */ -static inline int lock_page_async(struct page *page, - struct wait_page_queue *wait) -{ - if (!trylock_page(page)) - return __lock_page_async(page, wait); - return 0; -} - /* * lock_page_or_retry - Lock the page, unless this would block and the * caller indicated that it can handle a retry. diff --git a/mm/filemap.c b/mm/filemap.c index 962db5c38cd7..c97b804811fc 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1597,18 +1597,18 @@ int __folio_lock_killable(struct folio *folio) } EXPORT_SYMBOL_GPL(__folio_lock_killable); -int __lock_page_async(struct page *page, struct wait_page_queue *wait) +static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait) { - struct wait_queue_head *q = page_waitqueue(page); + struct wait_queue_head *q = page_waitqueue(&folio->page); int ret = 0; - wait->page = page; + wait->page = &folio->page; wait->bit_nr = PG_locked; spin_lock_irq(&q->lock); __add_wait_queue_entry_tail(q, &wait->wait); - SetPageWaiters(page); - ret = !trylock_page(page); + folio_set_waiters(folio); + ret = !folio_trylock(folio); /* * If we were successful now, we know we're still on the * waitqueue as we're still under the lock. This means it's @@ -2381,41 +2381,42 @@ static int filemap_update_page(struct kiocb *iocb, struct address_space *mapping, struct iov_iter *iter, struct page *page) { + struct folio *folio = page_folio(page); int error; - if (!trylock_page(page)) { + if (!folio_trylock(folio)) { if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO)) return -EAGAIN; if (!(iocb->ki_flags & IOCB_WAITQ)) { - put_and_wait_on_page_locked(page, TASK_KILLABLE); + put_and_wait_on_page_locked(&folio->page, TASK_KILLABLE); return AOP_TRUNCATED_PAGE; } - error = __lock_page_async(page, iocb->ki_waitq); + error = __folio_lock_async(folio, iocb->ki_waitq); if (error) return error; } - if (!page->mapping) + if (!folio->mapping) goto truncated; error = 0; - if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, page)) + if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, &folio->page)) goto unlock; error = -EAGAIN; if (iocb->ki_flags & (IOCB_NOIO | IOCB_NOWAIT | IOCB_WAITQ)) goto unlock; - error = filemap_read_page(iocb->ki_filp, mapping, page); + error = filemap_read_page(iocb->ki_filp, mapping, &folio->page); if (error == AOP_TRUNCATED_PAGE) - put_page(page); + folio_put(folio); return error; truncated: - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); return AOP_TRUNCATED_PAGE; unlock: - unlock_page(page); + folio_unlock(folio); return error; } From patchwork Thu Jul 15 03:35:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378477 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2DF6BC07E96 for ; Thu, 15 Jul 2021 03:56:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) 
by mail.kernel.org (Postfix) with ESMTP id C93E661279 for ; Thu, 15 Jul 2021 03:56:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C93E661279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 17B588D0023; Wed, 14 Jul 2021 23:56:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 12B6C8D000A; Wed, 14 Jul 2021 23:56:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F35C98D0023; Wed, 14 Jul 2021 23:56:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0225.hostedemail.com [216.40.44.225]) by kanga.kvack.org (Postfix) with ESMTP id D0E858D000A for ; Wed, 14 Jul 2021 23:56:16 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id BBA6F16910 for ; Thu, 15 Jul 2021 03:56:15 +0000 (UTC) X-FDA: 78363459510.25.B2DCBC7 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf10.hostedemail.com (Postfix) with ESMTP id 71C5360019B5 for ; Thu, 15 Jul 2021 03:56:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=hXjTMkAJddA0oyzU0dvuCZcSD0ThJT1ENMn6/tz7G2w=; b=Wlvaij9nztLyBvJHijaPz57eVG cIfUjlvVteyGjxWhPRiRRrvNQs+G9jaEIQn+pYzmZyYQ2Q6nm0aaMO6qPqyZ852uFYdB0o5vDeGup h3h0kP8ww3rZJZbbEJR7GqAPArcuYL6o9MfHtypPyy+9sY0Rmv+Y49FZMZFPKDaTG4uCjpTXhYc4K +JWLL1H6A+cFQl3sdW1ke7kQDOXOFEZXAtXVKOH4Q+vY8jo0n4mPr9b9DQ4+c+nkhSXNTKwfUqveE W61enjBR9VT1Kx6VTlJE4szDbu/bNRC2EK5FJsGKBQHswI7OKAgRTiyMgerNgvpwXONXJct8VweUW pQi7y3hw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sSI-002vHy-Hk; Thu, 15 Jul 2021 03:54:41 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 021/138] mm/filemap: Add folio_wait_locked() Date: Thu, 15 Jul 2021 04:35:07 +0100 Message-Id: <20210715033704.692967-22-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Wlvaij9n; spf=none (imf10.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: 74dsm3qxom9bu47ftqihnwytjgnztqkm X-Rspamd-Queue-Id: 71C5360019B5 X-HE-Tag: 1626321375-426781 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Also add folio_wait_locked_killable(). Turn wait_on_page_locked() and wait_on_page_locked_killable() into wrappers. This eliminates a call to compound_head() from each call-site, reducing text size by 193 bytes for me. 
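As a quick sketch of the calling convention this adds (a hypothetical caller, not code from this patch; example_wait_for_folio() is an invented name, while page_folio() and folio_wait_locked_killable() are the interfaces shown in the diff below):

/*
 * Hypothetical caller of the new folio-based waiters; invented for
 * illustration only.
 */
static int example_wait_for_folio(struct page *page)
{
	/* Resolve the head page once, rather than inside each helper. */
	struct folio *folio = page_folio(page);

	/* Returns 0 once the folio is unlocked, or -EINTR on a fatal signal. */
	return folio_wait_locked_killable(folio);
}

Unconverted callers are unaffected: wait_on_page_locked() and wait_on_page_locked_killable() remain as wrappers which perform the page_folio() lookup themselves.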
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 26 ++++++++++++++++++-------- mm/filemap.c | 4 ++-- 2 files changed, 20 insertions(+), 10 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 6f631a3e42dc..03fea8bbfd8e 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -733,23 +733,33 @@ extern void wait_on_page_bit(struct page *page, int bit_nr); extern int wait_on_page_bit_killable(struct page *page, int bit_nr); /* - * Wait for a page to be unlocked. + * Wait for a folio to be unlocked. * - * This must be called with the caller "holding" the page, - * ie with increased "page->count" so that the page won't + * This must be called with the caller "holding" the folio, + * ie with increased "page->count" so that the folio won't * go away during the wait.. */ +static inline void folio_wait_locked(struct folio *folio) +{ + if (folio_test_locked(folio)) + wait_on_page_bit(&folio->page, PG_locked); +} + +static inline int folio_wait_locked_killable(struct folio *folio) +{ + if (!folio_test_locked(folio)) + return 0; + return wait_on_page_bit_killable(&folio->page, PG_locked); +} + static inline void wait_on_page_locked(struct page *page) { - if (PageLocked(page)) - wait_on_page_bit(compound_head(page), PG_locked); + folio_wait_locked(page_folio(page)); } static inline int wait_on_page_locked_killable(struct page *page) { - if (!PageLocked(page)) - return 0; - return wait_on_page_bit_killable(compound_head(page), PG_locked); + return folio_wait_locked_killable(page_folio(page)); } int put_and_wait_on_page_locked(struct page *page, int state); diff --git a/mm/filemap.c b/mm/filemap.c index c97b804811fc..04fb4a84cf0d 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1649,9 +1649,9 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm, mmap_read_unlock(mm); if (flags & FAULT_FLAG_KILLABLE) - wait_on_page_locked_killable(page); + folio_wait_locked_killable(folio); else - wait_on_page_locked(page); + folio_wait_locked(folio); return 0; } if (flags & FAULT_FLAG_KILLABLE) { From patchwork Thu Jul 15 03:35:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378479 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49BCDC47E48 for ; Thu, 15 Jul 2021 03:56:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F317661360 for ; Thu, 15 Jul 2021 03:56:58 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F317661360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4D6438D0024; Wed, 14 Jul 2021 23:56:59 -0400 (EDT) Received: by kanga.kvack.org 
(Postfix, from userid 40) id 486068D000A; Wed, 14 Jul 2021 23:56:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 327948D0024; Wed, 14 Jul 2021 23:56:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0038.hostedemail.com [216.40.44.38]) by kanga.kvack.org (Postfix) with ESMTP id 109D88D000A for ; Wed, 14 Jul 2021 23:56:59 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id F356E22ADD for ; Thu, 15 Jul 2021 03:56:57 +0000 (UTC) X-FDA: 78363461274.06.D8783BE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf05.hostedemail.com (Postfix) with ESMTP id A87995012D4F for ; Thu, 15 Jul 2021 03:56:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=VFl3zrR+bFgfKhJ6ajFoVdpV67mhNLeF8WnzvdQ2QpU=; b=BkZUJad2D6CLWbPugbVe9ZDjCu En99eHdadccQdbtGUzClnrSXaZcsVkp5Bl9710HYI261Jh9cN2QCWbbqSH7nPmt2U+jZHPXx+qwst Ji7JI3IJjNKwQIG0ZBxW0f3dFF6Z21H/uSDLT6goPydbMoMJbbk6O2Z7zslAAAmxH4GRGi33CqCKP 3zUlqOtgNlYDJR1nDSOzZeWD/XstF8kJS60ZD85QHCHgDZ/dSDUNsIF+gNzt3D8AQ5yymZPyUz+G2 mfzfcvZ2kJes3Rk9cSb1mbaU/zeEbG+yEFumDOv3deGZ/IxmKVJqJcblC9sq3FT+PJFqkChoZJUgW 5CFsIVJQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sSz-002vKP-43; Thu, 15 Jul 2021 03:55:43 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , William Kucharski Subject: [PATCH v14 022/138] mm/filemap: Add __folio_lock_or_retry() Date: Thu, 15 Jul 2021 04:35:08 +0100 Message-Id: <20210715033704.692967-23-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=BkZUJad2; spf=none (imf05.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: 1asuhqsmfkxb6zf7eyyqybyqyqpzipdf X-Rspamd-Queue-Id: A87995012D4F X-HE-Tag: 1626321417-698710 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert __lock_page_or_retry() to __folio_lock_or_retry(). This actually saves 4 bytes in the only caller of lock_page_or_retry() (due to better register allocation) and saves the 14 byte cost of calling page_folio() in __folio_lock_or_retry() for a total saving of 18 bytes. Also use a bool for the return type. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. 
Shutemov Reviewed-by: William Kucharski Acked-by: Mike Rapoport Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/pagemap.h | 11 +++++++---- mm/filemap.c | 20 +++++++++----------- mm/memory.c | 8 ++++---- 3 files changed, 20 insertions(+), 19 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 03fea8bbfd8e..626dbccbfb90 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -655,7 +655,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, void __folio_lock(struct folio *folio); int __folio_lock_killable(struct folio *folio); -extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, +bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm, unsigned int flags); void unlock_page(struct page *page); void folio_unlock(struct folio *folio); @@ -716,13 +716,16 @@ static inline int lock_page_killable(struct page *page) * caller indicated that it can handle a retry. * * Return value and mmap_lock implications depend on flags; see - * __lock_page_or_retry(). + * __folio_lock_or_retry(). */ -static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm, +static inline bool lock_page_or_retry(struct page *page, struct mm_struct *mm, unsigned int flags) { + struct folio *folio; might_sleep(); - return trylock_page(page) || __lock_page_or_retry(page, mm, flags); + + folio = page_folio(page); + return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 04fb4a84cf0d..fb6398a532e5 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1625,48 +1625,46 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait) /* * Return values: - * 1 - page is locked; mmap_lock is still held. - * 0 - page is not locked. + * true - folio is locked; mmap_lock is still held. + * false - folio is not locked. * mmap_lock has been released (mmap_read_unlock(), unless flags had both * FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in * which case mmap_lock is still held. * * If neither ALLOW_RETRY nor KILLABLE are set, will always return 1 - * with the page locked and the mmap_lock unperturbed. + * with the folio locked and the mmap_lock unperturbed. */ -int __lock_page_or_retry(struct page *page, struct mm_struct *mm, +bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm, unsigned int flags) { - struct folio *folio = page_folio(page); - if (fault_flag_allow_retry_first(flags)) { /* * CAUTION! In this case, mmap_lock is not released * even though return 0. */ if (flags & FAULT_FLAG_RETRY_NOWAIT) - return 0; + return false; mmap_read_unlock(mm); if (flags & FAULT_FLAG_KILLABLE) folio_wait_locked_killable(folio); else folio_wait_locked(folio); - return 0; + return false; } if (flags & FAULT_FLAG_KILLABLE) { - int ret; + bool ret; ret = __folio_lock_killable(folio); if (ret) { mmap_read_unlock(mm); - return 0; + return false; } } else { __folio_lock(folio); } - return 1; + return true; } /** diff --git a/mm/memory.c b/mm/memory.c index 747a01d495f2..2f111f9b3dbc 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4248,7 +4248,7 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf) * We enter with non-exclusive mmap_lock (to exclude vma changes, * but allow concurrent faults). * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __folio_lock_or_retry(). 
* If mmap_lock is released, vma may become invalid (for example * by other thread calling munmap()). */ @@ -4489,7 +4489,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) * concurrent faults). * * The mmap_lock may have been released depending on flags and our return value. - * See filemap_fault() and __lock_page_or_retry(). + * See filemap_fault() and __folio_lock_or_retry(). */ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) { @@ -4593,7 +4593,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) * By the time we get here, we already hold the mm semaphore * * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __folio_lock_or_retry(). */ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags) @@ -4749,7 +4749,7 @@ static inline void mm_account_fault(struct pt_regs *regs, * By the time we get here, we already hold the mm semaphore * * The mmap_lock may have been released depending on flags and our - * return value. See filemap_fault() and __lock_page_or_retry(). + * return value. See filemap_fault() and __folio_lock_or_retry(). */ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct pt_regs *regs) From patchwork Thu Jul 15 03:35:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378481 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFF8CC1B08C for ; Thu, 15 Jul 2021 03:57:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7C3F561289 for ; Thu, 15 Jul 2021 03:57:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7C3F561289 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D61048D0025; Wed, 14 Jul 2021 23:57:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D37268D000A; Wed, 14 Jul 2021 23:57:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BFFF08D0025; Wed, 14 Jul 2021 23:57:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0164.hostedemail.com [216.40.44.164]) by kanga.kvack.org (Postfix) with ESMTP id 9F8748D000A for ; Wed, 14 Jul 2021 23:57:40 -0400 (EDT) Received: from smtpin36.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 917C1184B83CB for ; Thu, 15 Jul 2021 03:57:39 +0000 (UTC) X-FDA: 78363463038.36.269A9D0 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id 3AC9720019DC for ; Thu, 15 Jul 2021 03:57:39 +0000 (UTC) DKIM-Signature: v=1; 
a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=P5Zy9I2T86gFM6NThiSEouYBNinlhkbnNiXWyApBcYE=; b=f7dzk7w5iQl1xTSQNOtJShlFzV sb2YPs0tfp5x2PEAZtQ07ZnhUihDVuW7hNg5R4VY0llzRdHzBgwfKjRPyY1USGxEqvAqcPU2SGz44 q/Ykf66Yt5U629E9VhhAvtkRVWw6HGgeVivYEaRwQUHnWi1I/Z9dFs66rJWQpnTIeuifA0tMNL82Q zHGr3b7bFeTKsNOA60ef9qC084AxH8aheA+JkGkDfeodXDNUv0u8Ax0f5Ug49UGEBWUD2Jc1B+yKc y7YmfOoMxZihki0TxZDtN+QQXblqH1YhNG9VzJA+Y4gcNOjblEsVx3IEmxfN1cu21P4gnH96tjbs+ amxE8C3Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sU5-002vNn-Jg; Thu, 15 Jul 2021 03:56:38 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Vlastimil Babka , William Kucharski , Christoph Hellwig , "Kirill A . Shutemov" Subject: [PATCH v14 023/138] mm/swap: Add folio_rotate_reclaimable() Date: Thu, 15 Jul 2021 04:35:09 +0100 Message-Id: <20210715033704.692967-24-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=f7dzk7w5; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 3AC9720019DC X-Stat-Signature: kgxsfgotxbhsphj13bbixc1m36mi5cfs X-HE-Tag: 1626321459-890481 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert rotate_reclaimable_page() to folio_rotate_reclaimable(). This eliminates all five of the calls to compound_head() in this function, saving 75 bytes at the cost of adding 15 bytes to its one caller, end_page_writeback(). We also save 36 bytes from pagevec_move_tail_fn() due to using folios there. Net 96 bytes savings. Also move its declaration to mm/internal.h as it's only used by filemap.c. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: Christoph Hellwig Acked-by: Kirill A. Shutemov Acked-by: Mike Rapoport Reviewed-by: David Howells --- include/linux/swap.h | 1 - mm/filemap.c | 3 ++- mm/internal.h | 1 + mm/page_io.c | 4 ++-- mm/swap.c | 30 ++++++++++++++++-------------- 5 files changed, 21 insertions(+), 18 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 3d3d85354026..8394716a002b 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -371,7 +371,6 @@ extern void lru_add_drain(void); extern void lru_add_drain_cpu(int cpu); extern void lru_add_drain_cpu_zone(struct zone *zone); extern void lru_add_drain_all(void); -extern void rotate_reclaimable_page(struct page *page); extern void deactivate_file_page(struct page *page); extern void deactivate_page(struct page *page); extern void mark_page_lazyfree(struct page *page); diff --git a/mm/filemap.c b/mm/filemap.c index fb6398a532e5..4ce2b22b64f8 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1529,8 +1529,9 @@ void end_page_writeback(struct page *page) * ever page writeback. 
*/ if (PageReclaim(page)) { + struct folio *folio = page_folio(page); ClearPageReclaim(page); - rotate_reclaimable_page(page); + folio_rotate_reclaimable(folio); } /* diff --git a/mm/internal.h b/mm/internal.h index 31ff935b2547..1a8851b73031 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -35,6 +35,7 @@ void page_writeback_init(void); vm_fault_t do_swap_page(struct vm_fault *vmf); +void folio_rotate_reclaimable(struct folio *folio); void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma, unsigned long floor, unsigned long ceiling); diff --git a/mm/page_io.c b/mm/page_io.c index c493ce9ebcf5..d597bc6e6e45 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -38,7 +38,7 @@ void end_swap_bio_write(struct bio *bio) * Also print a dire warning that things will go BAD (tm) * very quickly. * - * Also clear PG_reclaim to avoid rotate_reclaimable_page() + * Also clear PG_reclaim to avoid folio_rotate_reclaimable() */ set_page_dirty(page); pr_alert_ratelimited("Write-error on swap-device (%u:%u:%llu)\n", @@ -317,7 +317,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc, * temporary failure if the system has limited * memory for allocating transmit buffers. * Mark the page dirty and avoid - * rotate_reclaimable_page but rate-limit the + * folio_rotate_reclaimable but rate-limit the * messages but do not flag PageError like * the normal direct-to-bio case as it could * be temporary. diff --git a/mm/swap.c b/mm/swap.c index 19600430e536..095a5ec6f986 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -228,11 +228,13 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec) { - if (!PageUnevictable(page)) { - del_page_from_lru_list(page, lruvec); - ClearPageActive(page); - add_page_to_lru_list_tail(page, lruvec); - __count_vm_events(PGROTATED, thp_nr_pages(page)); + struct folio *folio = page_folio(page); + + if (!folio_test_unevictable(folio)) { + lruvec_del_folio(lruvec, folio); + folio_clear_active(folio); + lruvec_add_folio_tail(lruvec, folio); + __count_vm_events(PGROTATED, folio_nr_pages(folio)); } } @@ -249,23 +251,23 @@ static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page) } /* - * Writeback is about to end against a page which has been marked for immediate - * reclaim. If it still appears to be reclaimable, move it to the tail of the - * inactive list. + * Writeback is about to end against a folio which has been marked for + * immediate reclaim. If it still appears to be reclaimable, move it + * to the tail of the inactive list. * - * rotate_reclaimable_page() must disable IRQs, to prevent nasty races. + * folio_rotate_reclaimable() must disable IRQs, to prevent nasty races. 
*/ -void rotate_reclaimable_page(struct page *page) +void folio_rotate_reclaimable(struct folio *folio) { - if (!PageLocked(page) && !PageDirty(page) && - !PageUnevictable(page) && PageLRU(page)) { + if (!folio_test_locked(folio) && !folio_test_dirty(folio) && + !folio_test_unevictable(folio) && folio_test_lru(folio)) { struct pagevec *pvec; unsigned long flags; - get_page(page); + folio_get(folio); local_lock_irqsave(&lru_rotate.lock, flags); pvec = this_cpu_ptr(&lru_rotate.pvec); - if (pagevec_add_and_need_flush(pvec, page)) + if (pagevec_add_and_need_flush(pvec, &folio->page)) pagevec_lru_move_fn(pvec, pagevec_move_tail_fn); local_unlock_irqrestore(&lru_rotate.lock, flags); } From patchwork Thu Jul 15 03:35:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378497 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30E64C07E96 for ; Thu, 15 Jul 2021 03:58:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CE28C61279 for ; Thu, 15 Jul 2021 03:58:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CE28C61279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 28A686B00F1; Wed, 14 Jul 2021 23:58:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2395C6B00F2; Wed, 14 Jul 2021 23:58:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0DAFE6B00F3; Wed, 14 Jul 2021 23:58:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0031.hostedemail.com [216.40.44.31]) by kanga.kvack.org (Postfix) with ESMTP id D7DF46B00F1 for ; Wed, 14 Jul 2021 23:58:44 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B72FD23E5E for ; Thu, 15 Jul 2021 03:58:43 +0000 (UTC) X-FDA: 78363465726.26.FBA5B35 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf09.hostedemail.com (Postfix) with ESMTP id 668DC3000101 for ; Thu, 15 Jul 2021 03:58:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=+QyNTsHA0SVbGG1VH5A6VoiB8kow5agwPtQU0LPKsu8=; b=Lk2FnWyRhFAuS4AEqvUD4gpBrU eYm5QtfCpSFHS09hR13esWgp7lTs/nJJpO4oa0UGY3CM5uKbcgQJ9INcRtaxjrIyh+cdy5exlbxVg Pu5166Y+0sAUFQvSPGo5Y4Hf1339ayi3RlZARG2QJhK8vsaMv2H2JNhjaGtdzvrkODJ5+8OiBwT0H JJY+aGCFhBNBJcyH2xdWTiom5kc3W5cFRivuQLHVjsZc0YSvI2PEk0dHMIoSCa3goE4ltAtY89PO0 +/16g6TbIgNrO73EZErcHg5Xr15jyJ9rGZ1UdnIyD+pZg8paXmrMF9Ug48UFbwwuixS1a7D5At/sq kkr4v9aA==; Received: from willy by casper.infradead.org with local (Exim 
4.94.2 #2 (Red Hat Linux)) id 1m3sUo-002vQx-Is; Thu, 15 Jul 2021 03:57:24 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 024/138] mm/filemap: Add folio_end_writeback() Date: Thu, 15 Jul 2021 04:35:10 +0100 Message-Id: <20210715033704.692967-25-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Lk2FnWyR; spf=none (imf09.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: da1jby4ok49pxo915jbx8et8tfxc7wxu X-Rspamd-Queue-Id: 668DC3000101 X-HE-Tag: 1626321523-184527 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add an end_page_writeback() wrapper function for users that are not yet converted to folios. folio_end_writeback() is less than half the size of end_page_writeback() at just 105 bytes compared to 228 bytes, due to removing all the compound_head() calls. The 30 byte wrapper function makes this a net saving of 93 bytes. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 3 ++- mm/filemap.c | 43 ++++++++++++++++++++--------------------- mm/folio-compat.c | 6 ++++++ 3 files changed, 29 insertions(+), 23 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 626dbccbfb90..66a019178550 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -768,7 +768,8 @@ static inline int wait_on_page_locked_killable(struct page *page) int put_and_wait_on_page_locked(struct page *page, int state); void wait_on_page_writeback(struct page *page); int wait_on_page_writeback_killable(struct page *page); -extern void end_page_writeback(struct page *page); +void end_page_writeback(struct page *page); +void folio_end_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); void __set_page_dirty(struct page *, struct address_space *, int warn); diff --git a/mm/filemap.c b/mm/filemap.c index 4ce2b22b64f8..b5a0d546e436 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1175,11 +1175,11 @@ static void wake_up_page_bit(struct page *page, int bit_nr) spin_unlock_irqrestore(&q->lock, flags); } -static void wake_up_page(struct page *page, int bit) +static void folio_wake(struct folio *folio, int bit) { - if (!PageWaiters(page)) + if (!folio_test_waiters(folio)) return; - wake_up_page_bit(page, bit); + wake_up_page_bit(&folio->page, bit); } /* @@ -1516,39 +1516,38 @@ int wait_on_page_private_2_killable(struct page *page) EXPORT_SYMBOL(wait_on_page_private_2_killable); /** - * end_page_writeback - end writeback against a page - * @page: the page + * folio_end_writeback - End writeback against a folio. + * @folio: The folio. 
*/ -void end_page_writeback(struct page *page) +void folio_end_writeback(struct folio *folio) { /* - * TestClearPageReclaim could be used here but it is an atomic - * operation and overkill in this particular case. Failing to - * shuffle a page marked for immediate reclaim is too mild to - * justify taking an atomic operation penalty at the end of - * ever page writeback. + * folio_test_clear_reclaim() could be used here but it is an + * atomic operation and overkill in this particular case. Failing + * to shuffle a folio marked for immediate reclaim is too mild + * a gain to justify taking an atomic operation penalty at the + * end of every folio writeback. */ - if (PageReclaim(page)) { - struct folio *folio = page_folio(page); - ClearPageReclaim(page); + if (folio_test_reclaim(folio)) { + folio_clear_reclaim(folio); folio_rotate_reclaimable(folio); } /* - * Writeback does not hold a page reference of its own, relying + * Writeback does not hold a folio reference of its own, relying * on truncation to wait for the clearing of PG_writeback. - * But here we must make sure that the page is not freed and - * reused before the wake_up_page(). + * But here we must make sure that the folio is not freed and + * reused before the folio_wake(). */ - get_page(page); - if (!test_clear_page_writeback(page)) + folio_get(folio); + if (!test_clear_page_writeback(&folio->page)) BUG(); smp_mb__after_atomic(); - wake_up_page(page, PG_writeback); - put_page(page); + folio_wake(folio, PG_writeback); + folio_put(folio); } -EXPORT_SYMBOL(end_page_writeback); +EXPORT_SYMBOL(folio_end_writeback); /* * After completing I/O on a page, call this routine to update the page diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 91b3d00a92f7..526843d03d58 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -17,3 +17,9 @@ void unlock_page(struct page *page) return folio_unlock(page_folio(page)); } EXPORT_SYMBOL(unlock_page); + +void end_page_writeback(struct page *page) +{ + return folio_end_writeback(page_folio(page)); +} +EXPORT_SYMBOL(end_page_writeback); From patchwork Thu Jul 15 03:35:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378499 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB1BEC07E96 for ; Thu, 15 Jul 2021 03:58:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8A0C861289 for ; Thu, 15 Jul 2021 03:58:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8A0C861289 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E237B8D000A; Wed, 14 Jul 2021 23:58:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DD1D86B00F4; Wed, 14 Jul 2021 23:58:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C72408D000A; 
Wed, 14 Jul 2021 23:58:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0085.hostedemail.com [216.40.44.85]) by kanga.kvack.org (Postfix) with ESMTP id 9CECD6B00F3 for ; Wed, 14 Jul 2021 23:58:52 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 8D7098248047 for ; Thu, 15 Jul 2021 03:58:51 +0000 (UTC) X-FDA: 78363466062.14.E46F614 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf24.hostedemail.com (Postfix) with ESMTP id 4EE12B0000A7 for ; Thu, 15 Jul 2021 03:58:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=IZCiBNqStsWdStEcl45/s7ejWTOvRS1aVZyHPqRW2vE=; b=A6E1QrSqJug+IGJ8zS4Qr/dE9u 8TpYch9KfuQYWux4BAEjp9ZFA+GzyDcwIGtIjWc8ZO7YrHh0Tr1+NKT+ElhuPbtmWzv1LYx2Jcjhg E2ZqPzXrfAIRI2Te/IRPI/duTKewuotyBzFVDD3ioUDra+YKWkqHIY+5msEO6PPwz+smCdCKQLv4r dqYYbc2tGvU8ggpks155F6o+a6fLq/GfSYE+Ee1fHmxM8woCUDsQqw7SMDTMF6Q7x3e+3zgFIHeYm 4wIYblp7OAeqBNoLeeM4wlP7ZLWag3g1W3VGOEkMNyqerrD3gRevDWNVtBQe3t5ezTxe1LF2L2PWi NM1kMSIg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sVS-002vW3-LT; Thu, 15 Jul 2021 03:58:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski Subject: [PATCH v14 025/138] mm/writeback: Add folio_wait_writeback() Date: Thu, 15 Jul 2021 04:35:11 +0100 Message-Id: <20210715033704.692967-26-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=A6E1QrSq; spf=none (imf24.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 4EE12B0000A7 X-Stat-Signature: bwp9tdw7o6gwo7x8umukhfydobq1spzr X-HE-Tag: 1626321531-247131 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: wait_on_page_writeback_killable() only has one caller, so convert it to call folio_wait_writeback_killable(). For the wait_on_page_writeback() callers, add a compatibility wrapper around folio_wait_writeback(). Turning PageWriteback() into folio_test_writeback() eliminates a call to compound_head() which saves 8 bytes and 15 bytes in the two functions. Unfortunately, that is more than offset by adding the wait_on_page_writeback compatibility wrapper for a net increase in text of 7 bytes. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. 
Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Acked-by: Mike Rapoport Reviewed-by: David Howells --- fs/afs/write.c | 9 ++++---- include/linux/pagemap.h | 3 ++- mm/folio-compat.c | 6 ++++++ mm/page-writeback.c | 48 ++++++++++++++++++++++++++++------------- 4 files changed, 46 insertions(+), 20 deletions(-) diff --git a/fs/afs/write.c b/fs/afs/write.c index 3104b62c2082..fb7d5c1cabde 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -839,7 +839,8 @@ int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync) */ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) { - struct page *page = thp_head(vmf->page); + struct folio *folio = page_folio(vmf->page); + struct page *page = &folio->page; struct file *file = vmf->vma->vm_file; struct inode *inode = file_inode(file); struct afs_vnode *vnode = AFS_FS_I(inode); @@ -859,7 +860,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) goto out; #endif - if (wait_on_page_writeback_killable(page)) + if (folio_wait_writeback_killable(folio)) goto out; if (lock_page_killable(page) < 0) @@ -869,8 +870,8 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) * details the portion of the page we need to write back and we might * need to redirty the page if there's a problem. */ - if (wait_on_page_writeback_killable(page) < 0) { - unlock_page(page); + if (folio_wait_writeback_killable(folio) < 0) { + folio_unlock(folio); goto out; } diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 66a019178550..0c5f53368fe9 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -767,7 +767,8 @@ static inline int wait_on_page_locked_killable(struct page *page) int put_and_wait_on_page_locked(struct page *page, int state); void wait_on_page_writeback(struct page *page); -int wait_on_page_writeback_killable(struct page *page); +void folio_wait_writeback(struct folio *folio); +int folio_wait_writeback_killable(struct folio *folio); void end_page_writeback(struct page *page); void folio_end_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 526843d03d58..41275dac7a92 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -23,3 +23,9 @@ void end_page_writeback(struct page *page) return folio_end_writeback(page_folio(page)); } EXPORT_SYMBOL(end_page_writeback); + +void wait_on_page_writeback(struct page *page) +{ + return folio_wait_writeback(page_folio(page)); +} +EXPORT_SYMBOL_GPL(wait_on_page_writeback); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 9f63548f247c..c2c00e1533ad 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2830,33 +2830,51 @@ int __test_set_page_writeback(struct page *page, bool keep_write) } EXPORT_SYMBOL(__test_set_page_writeback); -/* - * Wait for a page to complete writeback +/** + * folio_wait_writeback - Wait for a folio to finish writeback. + * @folio: The folio to wait for. + * + * If the folio is currently being written back to storage, wait for the + * I/O to complete. + * + * Context: Sleeps. Must be called in process context and with + * no spinlocks held. Caller should hold a reference on the folio. + * If the folio is not locked, writeback may start again after writeback + * has finished. 
*/ -void wait_on_page_writeback(struct page *page) +void folio_wait_writeback(struct folio *folio) { - while (PageWriteback(page)) { - trace_wait_on_page_writeback(page, page_mapping(page)); - wait_on_page_bit(page, PG_writeback); + while (folio_test_writeback(folio)) { + trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); + wait_on_page_bit(&folio->page, PG_writeback); } } -EXPORT_SYMBOL_GPL(wait_on_page_writeback); +EXPORT_SYMBOL_GPL(folio_wait_writeback); -/* - * Wait for a page to complete writeback. Returns -EINTR if we get a - * fatal signal while waiting. +/** + * folio_wait_writeback_killable - Wait for a folio to finish writeback. + * @folio: The folio to wait for. + * + * If the folio is currently being written back to storage, wait for the + * I/O to complete or a fatal signal to arrive. + * + * Context: Sleeps. Must be called in process context and with + * no spinlocks held. Caller should hold a reference on the folio. + * If the folio is not locked, writeback may start again after writeback + * has finished. + * Return: 0 on success, -EINTR if we get a fatal signal while waiting. */ -int wait_on_page_writeback_killable(struct page *page) +int folio_wait_writeback_killable(struct folio *folio) { - while (PageWriteback(page)) { - trace_wait_on_page_writeback(page, page_mapping(page)); - if (wait_on_page_bit_killable(page, PG_writeback)) + while (folio_test_writeback(folio)) { + trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); + if (wait_on_page_bit_killable(&folio->page, PG_writeback)) return -EINTR; } return 0; } -EXPORT_SYMBOL_GPL(wait_on_page_writeback_killable); +EXPORT_SYMBOL_GPL(folio_wait_writeback_killable); /** * wait_for_stable_page() - wait for writeback to finish, if necessary. From patchwork Thu Jul 15 03:35:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378501 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F613C07E96 for ; Thu, 15 Jul 2021 03:59:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1CDDF610C7 for ; Thu, 15 Jul 2021 03:59:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1CDDF610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6F5718D0026; Wed, 14 Jul 2021 23:59:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6CC306B00F6; Wed, 14 Jul 2021 23:59:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 594DF8D0026; Wed, 14 Jul 2021 23:59:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0020.hostedemail.com [216.40.44.20]) by kanga.kvack.org (Postfix) with ESMTP id 318EC6B00F5 for ; Wed, 14 Jul 2021 23:59:51 -0400 (EDT) Received: from smtpin25.hostedemail.com 
(10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 1806018411DA1 for ; Thu, 15 Jul 2021 03:59:50 +0000 (UTC) X-FDA: 78363468540.25.50ADBD2 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf10.hostedemail.com (Postfix) with ESMTP id C5DF66001990 for ; Thu, 15 Jul 2021 03:59:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=N4ZTZot+vt+8KjslbnqTsaCIrlOXYH7ezn6cHh84c28=; b=I2NAAnFRjFgCXwxN/BTAkcsRzD g1vSHcTfsgjcdaW8siam6p7pCtU+KzFbu74U7ig1wGrUmsHV16/2T3CMWeAtODAtCqijqd/PcNtor uqM7gr2SubWx0LK3Cl4LH5fqF/oA+C+4kttJZOdsb4evb2zJjJABFZfVQejsXF14kpA/4GID07G3U QnTiAkHQKGyEpxbDrYJukIqRPajHYxIDmyCVCGVmDLYSqCeaPmd33y/+vn7LlrHdU795tdr4QA0El WRU/u4iiv5zR94jpiSEE4cv/0XDaqvr7Z129cf0gaFAQCwmJMPEj4Upj8it/yqvkaxcQAA8esLi3q kw752PxQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sWB-002vYF-Ty; Thu, 15 Jul 2021 03:58:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 026/138] mm/writeback: Add folio_wait_stable() Date: Thu, 15 Jul 2021 04:35:12 +0100 Message-Id: <20210715033704.692967-27-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=I2NAAnFR; spf=none (imf10.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: C5DF66001990 X-Stat-Signature: o6cx8dehofzxdenypyo7kmejkiytx6dn X-HE-Tag: 1626321589-147246 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Move wait_for_stable_page() into the folio compatibility file. folio_wait_stable() avoids a call to compound_head() and is 14 bytes smaller than wait_for_stable_page() was. The net text size grows by 16 bytes as a result of this patch. We can also remove thp_head() as this was the last user. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells --- include/linux/huge_mm.h | 15 --------------- include/linux/pagemap.h | 1 + mm/folio-compat.c | 6 ++++++ mm/page-writeback.c | 24 ++++++++++++++---------- 4 files changed, 21 insertions(+), 25 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index f123e15d966e..f280f33ff223 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -250,15 +250,6 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud, return NULL; } -/** - * thp_head - Head page of a transparent huge page. - * @page: Any page (tail, head or regular) found in the page cache. 
- */ -static inline struct page *thp_head(struct page *page) -{ - return compound_head(page); -} - /** * thp_order - Order of a transparent huge page. * @page: Head page of a transparent huge page. @@ -336,12 +327,6 @@ static inline struct list_head *page_deferred_list(struct page *page) #define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; }) #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; }) -static inline struct page *thp_head(struct page *page) -{ - VM_BUG_ON_PGFLAGS(PageTail(page), page); - return page; -} - static inline unsigned int thp_order(struct page *page) { VM_BUG_ON_PGFLAGS(PageTail(page), page); diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 0c5f53368fe9..96b62a2331fb 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -772,6 +772,7 @@ int folio_wait_writeback_killable(struct folio *folio); void end_page_writeback(struct page *page); void folio_end_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); +void folio_wait_stable(struct folio *folio); void __set_page_dirty(struct page *, struct address_space *, int warn); int __set_page_dirty_nobuffers(struct page *page); diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 41275dac7a92..3c83f03b80d7 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -29,3 +29,9 @@ void wait_on_page_writeback(struct page *page) return folio_wait_writeback(page_folio(page)); } EXPORT_SYMBOL_GPL(wait_on_page_writeback); + +void wait_for_stable_page(struct page *page) +{ + return folio_wait_stable(page_folio(page)); +} +EXPORT_SYMBOL_GPL(wait_for_stable_page); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index c2c00e1533ad..a078e9786cc4 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2877,17 +2877,21 @@ int folio_wait_writeback_killable(struct folio *folio) EXPORT_SYMBOL_GPL(folio_wait_writeback_killable); /** - * wait_for_stable_page() - wait for writeback to finish, if necessary. - * @page: The page to wait on. + * folio_wait_stable() - wait for writeback to finish, if necessary. + * @folio: The folio to wait on. * - * This function determines if the given page is related to a backing device - * that requires page contents to be held stable during writeback. If so, then - * it will wait for any pending writeback to complete. + * This function determines if the given folio is related to a backing + * device that requires folio contents to be held stable during writeback. + * If so, then it will wait for any pending writeback to complete. + * + * Context: Sleeps. Must be called in process context and with + * no spinlocks held. Caller should hold a reference on the folio. + * If the folio is not locked, writeback may start again after writeback + * has finished. 
*/ -void wait_for_stable_page(struct page *page) +void folio_wait_stable(struct folio *folio) { - page = thp_head(page); - if (page->mapping->host->i_sb->s_iflags & SB_I_STABLE_WRITES) - wait_on_page_writeback(page); + if (folio->mapping->host->i_sb->s_iflags & SB_I_STABLE_WRITES) + folio_wait_writeback(folio); } -EXPORT_SYMBOL_GPL(wait_for_stable_page); +EXPORT_SYMBOL_GPL(folio_wait_stable); From patchwork Thu Jul 15 03:35:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378503 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF555C07E96 for ; Thu, 15 Jul 2021 04:00:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 50F2161278 for ; Thu, 15 Jul 2021 04:00:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 50F2161278 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A28E76B00F6; Thu, 15 Jul 2021 00:00:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A002D8D0028; Thu, 15 Jul 2021 00:00:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8A07E8D0027; Thu, 15 Jul 2021 00:00:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0209.hostedemail.com [216.40.44.209]) by kanga.kvack.org (Postfix) with ESMTP id 5F5876B00F6 for ; Thu, 15 Jul 2021 00:00:17 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 4CB6D8248047 for ; Thu, 15 Jul 2021 04:00:16 +0000 (UTC) X-FDA: 78363469632.10.FE6B24B Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id E673AD006E35 for ; Thu, 15 Jul 2021 04:00:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=32CNIpiU8NZS3vYPiHzP/+47yNy0tCS/bqcbwVuEEAk=; b=XVS9N2Nk8FhRkoWmFmtin+srdx FxYAJpnjwQdhGTX67K3kyVJKBxQH8UtuW8PcdT+oxzLMy3PHzvs7an/kLf/D5ouHLp9HElY0O9WpU qUEuOSgrZe4yTgZ7gdv9XaEzei9NrDx1zSlurTCv6PKAQgqRPhWVvaRfCPKdTcBJxDcLU9oGszULP BLcZmRN8can73pxQbtGdUNR9UTjjDDoSZnN9ttbaULzLfbhDXSDNjEWi52htWJwfjP9Ceg8mrm1Nn mK8kE0SmPkHtBPOH7Cl0cK6VmpWvOi4TPEvpSaIFvInmb/YJP1XWS2NgMO2gs5q/WctXGcnpZgAgh OMw1RqeQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sWY-002vZw-4E; Thu, 15 Jul 2021 03:59:11 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . 
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 027/138] mm/filemap: Add folio_wait_bit() Date: Thu, 15 Jul 2021 04:35:13 +0100 Message-Id: <20210715033704.692967-28-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: E673AD006E35 X-Stat-Signature: beq7b9touz8oai9preqmbhbhjd6p6bd1 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=XVS9N2Nk; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626321615-221798 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Rename wait_on_page_bit() to folio_wait_bit(). We must always wait on the folio, otherwise we won't be woken up due to the tail page hashing to a different bucket from the head page. This commit shrinks the kernel by 770 bytes, mostly due to moving the page waitqueue lookup into folio_wait_bit_common(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 10 +++--- mm/filemap.c | 77 +++++++++++++++++++---------------------- mm/page-writeback.c | 4 +-- 3 files changed, 43 insertions(+), 48 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 96b62a2331fb..7eb02baf6f9f 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -729,11 +729,11 @@ static inline bool lock_page_or_retry(struct page *page, struct mm_struct *mm, } /* - * This is exported only for wait_on_page_locked/wait_on_page_writeback, etc., + * This is exported only for folio_wait_locked/folio_wait_writeback, etc., * and should not be used directly. */ -extern void wait_on_page_bit(struct page *page, int bit_nr); -extern int wait_on_page_bit_killable(struct page *page, int bit_nr); +void folio_wait_bit(struct folio *folio, int bit_nr); +int folio_wait_bit_killable(struct folio *folio, int bit_nr); /* * Wait for a folio to be unlocked. @@ -745,14 +745,14 @@ extern int wait_on_page_bit_killable(struct page *page, int bit_nr); static inline void folio_wait_locked(struct folio *folio) { if (folio_test_locked(folio)) - wait_on_page_bit(&folio->page, PG_locked); + folio_wait_bit(folio, PG_locked); } static inline int folio_wait_locked_killable(struct folio *folio) { if (!folio_test_locked(folio)) return 0; - return wait_on_page_bit_killable(&folio->page, PG_locked); + return folio_wait_bit_killable(folio, PG_locked); } static inline void wait_on_page_locked(struct page *page) diff --git a/mm/filemap.c b/mm/filemap.c index b5a0d546e436..b55c89d7997f 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1102,7 +1102,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, * * So update the flags atomically, and wake up the waiter * afterwards to avoid any races. This store-release pairs - * with the load-acquire in wait_on_page_bit_common(). + * with the load-acquire in folio_wait_bit_common(). 
*/ smp_store_release(&wait->flags, flags | WQ_FLAG_WOKEN); wake_up_state(wait->private, mode); @@ -1183,7 +1183,7 @@ static void folio_wake(struct folio *folio, int bit) } /* - * A choice of three behaviors for wait_on_page_bit_common(): + * A choice of three behaviors for folio_wait_bit_common(): */ enum behavior { EXCLUSIVE, /* Hold ref to page and take the bit when woken, like @@ -1198,16 +1198,16 @@ enum behavior { }; /* - * Attempt to check (or get) the page bit, and mark us done + * Attempt to check (or get) the folio flag, and mark us done * if successful. */ -static inline bool trylock_page_bit_common(struct page *page, int bit_nr, +static inline bool folio_trylock_flag(struct folio *folio, int bit_nr, struct wait_queue_entry *wait) { if (wait->flags & WQ_FLAG_EXCLUSIVE) { - if (test_and_set_bit(bit_nr, &page->flags)) + if (test_and_set_bit(bit_nr, &folio->flags)) return false; - } else if (test_bit(bit_nr, &page->flags)) + } else if (test_bit(bit_nr, &folio->flags)) return false; wait->flags |= WQ_FLAG_WOKEN | WQ_FLAG_DONE; @@ -1217,9 +1217,10 @@ static inline bool trylock_page_bit_common(struct page *page, int bit_nr, /* How many times do we accept lock stealing from under a waiter? */ int sysctl_page_lock_unfairness = 5; -static inline int wait_on_page_bit_common(wait_queue_head_t *q, - struct page *page, int bit_nr, int state, enum behavior behavior) +static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, + int state, enum behavior behavior) { + wait_queue_head_t *q = page_waitqueue(&folio->page); int unfairness = sysctl_page_lock_unfairness; struct wait_page_queue wait_page; wait_queue_entry_t *wait = &wait_page.wait; @@ -1228,8 +1229,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, unsigned long pflags; if (bit_nr == PG_locked && - !PageUptodate(page) && PageWorkingset(page)) { - if (!PageSwapBacked(page)) { + !folio_test_uptodate(folio) && folio_test_workingset(folio)) { + if (!folio_test_swapbacked(folio)) { delayacct_thrashing_start(); delayacct = true; } @@ -1239,7 +1240,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, init_wait(wait); wait->func = wake_page_function; - wait_page.page = page; + wait_page.page = &folio->page; wait_page.bit_nr = bit_nr; repeat: @@ -1254,7 +1255,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * Do one last check whether we can get the * page bit synchronously. * - * Do the SetPageWaiters() marking before that + * Do the folio_set_waiters() marking before that * to let any waker we _just_ missed know they * need to wake us up (otherwise they'll never * even go to the slow case that looks at the @@ -1265,8 +1266,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * lock to avoid races. */ spin_lock_irq(&q->lock); - SetPageWaiters(page); - if (!trylock_page_bit_common(page, bit_nr, wait)) + folio_set_waiters(folio); + if (!folio_trylock_flag(folio, bit_nr, wait)) __add_wait_queue_entry_tail(q, wait); spin_unlock_irq(&q->lock); @@ -1276,10 +1277,10 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * see whether the page bit testing has already * been done by the wake function. * - * We can drop our reference to the page. + * We can drop our reference to the folio. */ if (behavior == DROP) - put_page(page); + folio_put(folio); /* * Note that until the "finish_wait()", or until @@ -1316,7 +1317,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, * * And if that fails, we'll have to retry this all. 
*/ - if (unlikely(test_and_set_bit(bit_nr, &page->flags))) + if (unlikely(test_and_set_bit(bit_nr, folio_flags(folio, 0)))) goto repeat; wait->flags |= WQ_FLAG_DONE; @@ -1325,7 +1326,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, /* * If a signal happened, this 'finish_wait()' may remove the last - * waiter from the wait-queues, but the PageWaiters bit will remain + * waiter from the wait-queues, but the folio waiters bit will remain * set. That's ok. The next wakeup will take care of it, and trying * to do it here would be difficult and prone to races. */ @@ -1356,19 +1357,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, return wait->flags & WQ_FLAG_WOKEN ? 0 : -EINTR; } -void wait_on_page_bit(struct page *page, int bit_nr) +void folio_wait_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); - wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); + folio_wait_bit_common(folio, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); } -EXPORT_SYMBOL(wait_on_page_bit); +EXPORT_SYMBOL(folio_wait_bit); -int wait_on_page_bit_killable(struct page *page, int bit_nr) +int folio_wait_bit_killable(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED); + return folio_wait_bit_common(folio, bit_nr, TASK_KILLABLE, SHARED); } -EXPORT_SYMBOL(wait_on_page_bit_killable); +EXPORT_SYMBOL(folio_wait_bit_killable); /** * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked @@ -1385,11 +1384,8 @@ EXPORT_SYMBOL(wait_on_page_bit_killable); */ int put_and_wait_on_page_locked(struct page *page, int state) { - wait_queue_head_t *q; - - page = compound_head(page); - q = page_waitqueue(page); - return wait_on_page_bit_common(q, page, PG_locked, state, DROP); + return folio_wait_bit_common(page_folio(page), PG_locked, state, + DROP); } /** @@ -1483,9 +1479,10 @@ EXPORT_SYMBOL(end_page_private_2); */ void wait_on_page_private_2(struct page *page) { - page = compound_head(page); - while (PagePrivate2(page)) - wait_on_page_bit(page, PG_private_2); + struct folio *folio = page_folio(page); + + while (folio_test_private_2(folio)) + folio_wait_bit(folio, PG_private_2); } EXPORT_SYMBOL(wait_on_page_private_2); @@ -1502,11 +1499,11 @@ EXPORT_SYMBOL(wait_on_page_private_2); */ int wait_on_page_private_2_killable(struct page *page) { + struct folio *folio = page_folio(page); int ret = 0; - page = compound_head(page); - while (PagePrivate2(page)) { - ret = wait_on_page_bit_killable(page, PG_private_2); + while (folio_test_private_2(folio)) { + ret = folio_wait_bit_killable(folio, PG_private_2); if (ret < 0) break; } @@ -1583,16 +1580,14 @@ EXPORT_SYMBOL_GPL(page_endio); */ void __folio_lock(struct folio *folio) { - wait_queue_head_t *q = page_waitqueue(&folio->page); - wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE, + folio_wait_bit_common(folio, PG_locked, TASK_UNINTERRUPTIBLE, EXCLUSIVE); } EXPORT_SYMBOL(__folio_lock); int __folio_lock_killable(struct folio *folio) { - wait_queue_head_t *q = page_waitqueue(&folio->page); - return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE, + return folio_wait_bit_common(folio, PG_locked, TASK_KILLABLE, EXCLUSIVE); } EXPORT_SYMBOL_GPL(__folio_lock_killable); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index a078e9786cc4..b34278d05395 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2846,7 +2846,7 @@ void 
folio_wait_writeback(struct folio *folio) { while (folio_test_writeback(folio)) { trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); - wait_on_page_bit(&folio->page, PG_writeback); + folio_wait_bit(folio, PG_writeback); } } EXPORT_SYMBOL_GPL(folio_wait_writeback); @@ -2868,7 +2868,7 @@ int folio_wait_writeback_killable(struct folio *folio) { while (folio_test_writeback(folio)) { trace_wait_on_page_writeback(&folio->page, folio_mapping(folio)); - if (wait_on_page_bit_killable(&folio->page, PG_writeback)) + if (folio_wait_bit_killable(folio, PG_writeback)) return -EINTR; }
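The failure mode the commit message describes can be made concrete with a sketch (not part of the patch): the page waitqueue bucket is chosen by hashing the struct page pointer, so a waiter that hashed a tail page and a waker that hashed the head page would touch different queues and the wakeup would be lost. SKETCH_WAIT_TABLE_BITS mirrors the in-tree PAGE_WAIT_TABLE_BITS.

#include <linux/hash.h>

#define SKETCH_WAIT_TABLE_BITS 8	/* mirrors PAGE_WAIT_TABLE_BITS */

static bool sketch_same_bucket(struct page *head, struct page *tail)
{
	/* hash_ptr() of two different pointers rarely collides, so a
	 * sleeper keyed on the tail page would miss a wake keyed on
	 * the head page */
	return hash_ptr(head, SKETCH_WAIT_TABLE_BITS) ==
	       hash_ptr(tail, SKETCH_WAIT_TABLE_BITS);
}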
Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jeff Layton , "Kirill A . Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 028/138] mm/filemap: Add folio_wake_bit() Date: Thu, 15 Jul 2021 04:35:14 +0100 Message-Id: <20210715033704.692967-29-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=sSanZsUq; spf=none (imf07.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: ojygp543rbuyr5yyttcag8wmzpfqts1f X-Rspamd-Queue-Id: 3A4091004E62 X-HE-Tag: 1626321690-426696 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert wake_up_page_bit() to folio_wake_bit(). All callers have a folio, so use it directly. Saves 66 bytes of text in end_page_private_2(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- mm/filemap.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index b55c89d7997f..a3ef9abcbcde 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1121,14 +1121,14 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, return (flags & WQ_FLAG_EXCLUSIVE) != 0; } -static void wake_up_page_bit(struct page *page, int bit_nr) +static void folio_wake_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(page); + wait_queue_head_t *q = page_waitqueue(&folio->page); struct wait_page_key key; unsigned long flags; wait_queue_entry_t bookmark; - key.page = page; + key.page = &folio->page; key.bit_nr = bit_nr; key.page_match = 0; @@ -1163,7 +1163,7 @@ static void wake_up_page_bit(struct page *page, int bit_nr) * page waiters. 
*/ if (!waitqueue_active(q) || !key.page_match) { - ClearPageWaiters(page); + folio_clear_waiters(folio); /* * It's possible to miss clearing Waiters here, when we woke * our page waiters, but the hashed waitqueue has waiters for @@ -1179,7 +1179,7 @@ static void folio_wake(struct folio *folio, int bit) { if (!folio_test_waiters(folio)) return; - wake_up_page_bit(&folio->page, bit); + folio_wake_bit(folio, bit); } /* @@ -1446,7 +1446,7 @@ void folio_unlock(struct folio *folio) BUILD_BUG_ON(PG_locked > 7); VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0))) - wake_up_page_bit(&folio->page, PG_locked); + folio_wake_bit(folio, PG_locked); } EXPORT_SYMBOL(folio_unlock); @@ -1463,11 +1463,12 @@ EXPORT_SYMBOL(folio_unlock); */ void end_page_private_2(struct page *page) { - page = compound_head(page); - VM_BUG_ON_PAGE(!PagePrivate2(page), page); - clear_bit_unlock(PG_private_2, &page->flags); - wake_up_page_bit(page, PG_private_2); - put_page(page); + struct folio *folio = page_folio(page); + + VM_BUG_ON_FOLIO(!folio_test_private_2(folio), folio); + clear_bit_unlock(PG_private_2, folio_flags(folio, 0)); + folio_wake_bit(folio, PG_private_2); + folio_put(folio); } EXPORT_SYMBOL(end_page_private_2);
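A sketch of the clear-then-wake pattern that end_page_private_2() and folio_unlock() rely on, not part of the patch and written as if inside mm/filemap.c (folio_wake_bit() is static there): the flag is dropped with release semantics first, and the hashed queue is only walked when the waiters flag says someone may actually be asleep.

static void sketch_clear_and_wake(struct folio *folio, int bit_nr)
{
	/* release semantics: a woken waiter must observe the bit clear */
	clear_bit_unlock(bit_nr, folio_flags(folio, 0));
	/* fast path: skip the waitqueue walk when nobody ever slept */
	if (folio_test_waiters(folio))
		folio_wake_bit(folio, bit_nr);
}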
From patchwork Thu Jul 15 03:35:15 2021
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v14 029/138] mm/filemap: Convert page wait queues to be folios
Date: Thu, 15 Jul 2021 04:35:15 +0100
Message-Id: <20210715033704.692967-30-willy@infradead.org>

Reinforce that page flags are actually in the head page by changing the type from page to folio. Increases the size of cachefiles by two bytes, but the kernel core is unchanged in size.

Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells
--- fs/cachefiles/rdwr.c | 16 ++++++++-------- include/linux/pagemap.h | 8 ++++---- mm/filemap.c | 38 +++++++++++++++++++------------------- 3 files changed, 31 insertions(+), 31 deletions(-)
diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c index 8ffc40e84a59..fcf4f3b72923 100644 --- a/fs/cachefiles/rdwr.c +++ b/fs/cachefiles/rdwr.c @@ -25,20 +25,20 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode, struct cachefiles_object *object; struct fscache_retrieval *op = monitor->op; struct wait_page_key *key = _key; - struct page *page = wait->private; + struct folio *folio = wait->private; ASSERT(key); _enter("{%lu},%u,%d,{%p,%u}", monitor->netfs_page->index, mode, sync, - key->page, key->bit_nr); + key->folio, key->bit_nr); - if (key->page != page || key->bit_nr != PG_locked) + if (key->folio != folio || key->bit_nr != PG_locked) return 0; - _debug("--- monitor %p %lx ---", page, page->flags); + _debug("--- monitor %p %lx ---", folio, folio->flags); - if (!PageUptodate(page) && !PageError(page)) { + if (!folio_test_uptodate(folio) && !folio_test_error(folio)) { /* unlocked, not uptodate and not erronous?
*/ _debug("page probably truncated"); } @@ -107,7 +107,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object, put_page(backpage2); INIT_LIST_HEAD(&monitor->op_link); - add_page_wait_queue(backpage, &monitor->monitor); + folio_add_wait_queue(page_folio(backpage), &monitor->monitor); if (trylock_page(backpage)) { ret = -EIO; @@ -294,7 +294,7 @@ static int cachefiles_read_backing_file_one(struct cachefiles_object *object, get_page(backpage); monitor->back_page = backpage; monitor->monitor.private = backpage; - add_page_wait_queue(backpage, &monitor->monitor); + folio_add_wait_queue(page_folio(backpage), &monitor->monitor); monitor = NULL; /* but the page may have been read before the monitor was installed, so @@ -548,7 +548,7 @@ static int cachefiles_read_backing_file(struct cachefiles_object *object, get_page(backpage); monitor->back_page = backpage; monitor->monitor.private = backpage; - add_page_wait_queue(backpage, &monitor->monitor); + folio_add_wait_queue(page_folio(backpage), &monitor->monitor); monitor = NULL; /* but the page may have been read before the monitor was diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 7eb02baf6f9f..c8e74d67b01f 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -629,13 +629,13 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma, } struct wait_page_key { - struct page *page; + struct folio *folio; int bit_nr; int page_match; }; struct wait_page_queue { - struct page *page; + struct folio *folio; int bit_nr; wait_queue_entry_t wait; }; @@ -643,7 +643,7 @@ struct wait_page_queue { static inline bool wake_page_match(struct wait_page_queue *wait_page, struct wait_page_key *key) { - if (wait_page->page != key->page) + if (wait_page->folio != key->folio) return false; key->page_match = 1; @@ -803,7 +803,7 @@ int wait_on_page_private_2_killable(struct page *page); /* * Add an arbitrary waiter to a page's wait queue */ -extern void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter); +void folio_add_wait_queue(struct folio *folio, wait_queue_entry_t *waiter); /* * Fault everything in given userspace address range in. 
diff --git a/mm/filemap.c b/mm/filemap.c index a3ef9abcbcde..1ecaece68019 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1019,11 +1019,11 @@ EXPORT_SYMBOL(__page_cache_alloc); */ #define PAGE_WAIT_TABLE_BITS 8 #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS) -static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned; +static wait_queue_head_t folio_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned; -static wait_queue_head_t *page_waitqueue(struct page *page) +static wait_queue_head_t *folio_waitqueue(struct folio *folio) { - return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)]; + return &folio_wait_table[hash_ptr(folio, PAGE_WAIT_TABLE_BITS)]; } void __init pagecache_init(void) @@ -1031,7 +1031,7 @@ void __init pagecache_init(void) int i; for (i = 0; i < PAGE_WAIT_TABLE_SIZE; i++) - init_waitqueue_head(&page_wait_table[i]); + init_waitqueue_head(&folio_wait_table[i]); page_writeback_init(); } @@ -1086,10 +1086,10 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, */ flags = wait->flags; if (flags & WQ_FLAG_EXCLUSIVE) { - if (test_bit(key->bit_nr, &key->page->flags)) + if (test_bit(key->bit_nr, &key->folio->flags)) return -1; if (flags & WQ_FLAG_CUSTOM) { - if (test_and_set_bit(key->bit_nr, &key->page->flags)) + if (test_and_set_bit(key->bit_nr, &key->folio->flags)) return -1; flags |= WQ_FLAG_DONE; } @@ -1123,12 +1123,12 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, static void folio_wake_bit(struct folio *folio, int bit_nr) { - wait_queue_head_t *q = page_waitqueue(&folio->page); + wait_queue_head_t *q = folio_waitqueue(folio); struct wait_page_key key; unsigned long flags; wait_queue_entry_t bookmark; - key.page = &folio->page; + key.folio = folio; key.bit_nr = bit_nr; key.page_match = 0; @@ -1220,7 +1220,7 @@ int sysctl_page_lock_unfairness = 5; static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, int state, enum behavior behavior) { - wait_queue_head_t *q = page_waitqueue(&folio->page); + wait_queue_head_t *q = folio_waitqueue(folio); int unfairness = sysctl_page_lock_unfairness; struct wait_page_queue wait_page; wait_queue_entry_t *wait = &wait_page.wait; @@ -1240,7 +1240,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, init_wait(wait); wait->func = wake_page_function; - wait_page.page = &folio->page; + wait_page.folio = folio; wait_page.bit_nr = bit_nr; repeat: @@ -1389,23 +1389,23 @@ int put_and_wait_on_page_locked(struct page *page, int state) } /** - * add_page_wait_queue - Add an arbitrary waiter to a page's wait queue - * @page: Page defining the wait queue of interest + * folio_add_wait_queue - Add an arbitrary waiter to a folio's wait queue + * @folio: Folio defining the wait queue of interest * @waiter: Waiter to add to the queue * - * Add an arbitrary @waiter to the wait queue for the nominated @page. + * Add an arbitrary @waiter to the wait queue for the nominated @folio. 
*/ -void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter) +void folio_add_wait_queue(struct folio *folio, wait_queue_entry_t *waiter) { - wait_queue_head_t *q = page_waitqueue(page); + wait_queue_head_t *q = folio_waitqueue(folio); unsigned long flags; spin_lock_irqsave(&q->lock, flags); __add_wait_queue_entry_tail(q, waiter); - SetPageWaiters(page); + folio_set_waiters(folio); spin_unlock_irqrestore(&q->lock, flags); } -EXPORT_SYMBOL_GPL(add_page_wait_queue); +EXPORT_SYMBOL_GPL(folio_add_wait_queue); #ifndef clear_bit_unlock_is_negative_byte @@ -1595,10 +1595,10 @@ EXPORT_SYMBOL_GPL(__folio_lock_killable); static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait) { - struct wait_queue_head *q = page_waitqueue(&folio->page); + struct wait_queue_head *q = folio_waitqueue(folio); int ret = 0; - wait->page = &folio->page; + wait->folio = folio; wait->bit_nr = PG_locked; spin_lock_irq(&q->lock);
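For code modelled on cachefiles_read_waiter() above, a custom wake function now compares folios rather than pages. A minimal sketch, not part of the patch; sketch_wake_function() is a hypothetical waiter installed with folio_add_wait_queue():

static int sketch_wake_function(wait_queue_entry_t *wait, unsigned mode,
				int sync, void *_key)
{
	struct wait_page_key *key = _key;
	struct wait_page_queue *wp =
		container_of(wait, struct wait_page_queue, wait);

	if (!wake_page_match(wp, key))	/* folio and bit must both match */
		return 0;
	list_del_init(&wait->entry);	/* self-remove, like autoremove_wake_function() */
	return 1;
}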
From patchwork Thu Jul 15 03:35:16 2021
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v14 030/138] mm/filemap: Add folio private_2 functions
Date: Thu, 15 Jul 2021 04:35:16 +0100
Message-Id: <20210715033704.692967-31-willy@infradead.org>

end_page_private_2() becomes folio_end_private_2(), wait_on_page_private_2() becomes folio_wait_private_2() and wait_on_page_private_2_killable() becomes folio_wait_private_2_killable(). Adjust the fscache equivalents to call page_folio() before calling these functions to avoid adding wrappers. Ends up costing 1 byte of text in ceph & netfs, but the core shrinks by three calls to page_folio().

Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Kirill A.
Shutemov --- include/linux/netfs.h | 6 +++--- include/linux/pagemap.h | 6 +++--- mm/filemap.c | 37 ++++++++++++++++--------------------- 3 files changed, 22 insertions(+), 27 deletions(-) diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 9062adfa2fb9..fad8c6209edd 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -55,7 +55,7 @@ static inline void set_page_fscache(struct page *page) */ static inline void end_page_fscache(struct page *page) { - end_page_private_2(page); + folio_end_private_2(page_folio(page)); } /** @@ -66,7 +66,7 @@ static inline void end_page_fscache(struct page *page) */ static inline void wait_on_page_fscache(struct page *page) { - wait_on_page_private_2(page); + folio_wait_private_2(page_folio(page)); } /** @@ -82,7 +82,7 @@ static inline void wait_on_page_fscache(struct page *page) */ static inline int wait_on_page_fscache_killable(struct page *page) { - return wait_on_page_private_2_killable(page); + return folio_wait_private_2_killable(page_folio(page)); } enum netfs_read_source { diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index c8e74d67b01f..edf58a581bce 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -796,9 +796,9 @@ static inline void set_page_private_2(struct page *page) SetPagePrivate2(page); } -void end_page_private_2(struct page *page); -void wait_on_page_private_2(struct page *page); -int wait_on_page_private_2_killable(struct page *page); +void folio_end_private_2(struct folio *folio); +void folio_wait_private_2(struct folio *folio); +int folio_wait_private_2_killable(struct folio *folio); /* * Add an arbitrary waiter to a page's wait queue diff --git a/mm/filemap.c b/mm/filemap.c index 1ecaece68019..a5d02ec62eb6 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1451,56 +1451,51 @@ void folio_unlock(struct folio *folio) EXPORT_SYMBOL(folio_unlock); /** - * end_page_private_2 - Clear PG_private_2 and release any waiters - * @page: The page + * folio_end_private_2 - Clear PG_private_2 and wake any waiters. + * @folio: The folio. * - * Clear the PG_private_2 bit on a page and wake up any sleepers waiting for - * this. The page ref held for PG_private_2 being set is released. + * Clear the PG_private_2 bit on a folio and wake up any sleepers waiting for + * it. The page ref held for PG_private_2 being set is released. * * This is, for example, used when a netfs page is being written to a local * disk cache, thereby allowing writes to the cache for the same page to be * serialised. */ -void end_page_private_2(struct page *page) +void folio_end_private_2(struct folio *folio) { - struct folio *folio = page_folio(page); - VM_BUG_ON_FOLIO(!folio_test_private_2(folio), folio); clear_bit_unlock(PG_private_2, folio_flags(folio, 0)); folio_wake_bit(folio, PG_private_2); folio_put(folio); } -EXPORT_SYMBOL(end_page_private_2); +EXPORT_SYMBOL(folio_end_private_2); /** - * wait_on_page_private_2 - Wait for PG_private_2 to be cleared on a page - * @page: The page to wait on + * folio_wait_private_2 - Wait for PG_private_2 to be cleared on a page. + * @folio: The folio to wait on. * - * Wait for PG_private_2 (aka PG_fscache) to be cleared on a page. + * Wait for PG_private_2 (aka PG_fscache) to be cleared on a folio. 
*/ -void wait_on_page_private_2(struct page *page) +void folio_wait_private_2(struct folio *folio) { - struct folio *folio = page_folio(page); - while (folio_test_private_2(folio)) folio_wait_bit(folio, PG_private_2); } -EXPORT_SYMBOL(wait_on_page_private_2); +EXPORT_SYMBOL(folio_wait_private_2); /** - * wait_on_page_private_2_killable - Wait for PG_private_2 to be cleared on a page - * @page: The page to wait on + * folio_wait_private_2_killable - Wait for PG_private_2 to be cleared on a folio. + * @folio: The folio to wait on. * - * Wait for PG_private_2 (aka PG_fscache) to be cleared on a page or until a + * Wait for PG_private_2 (aka PG_fscache) to be cleared on a folio or until a * fatal signal is received by the calling task. * * Return: * - 0 if successful. * - -EINTR if a fatal signal was encountered. */ -int wait_on_page_private_2_killable(struct page *page) +int folio_wait_private_2_killable(struct folio *folio) { - struct folio *folio = page_folio(page); int ret = 0; while (folio_test_private_2(folio)) { @@ -1511,7 +1506,7 @@ int wait_on_page_private_2_killable(struct page *page) return ret; } -EXPORT_SYMBOL(wait_on_page_private_2_killable); +EXPORT_SYMBOL(folio_wait_private_2_killable); /** * folio_end_writeback - End writeback against a folio.
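A sketch of the waiting side, not part of the patch (sketch_release_folio() is hypothetical): before a filesystem lets a folio be invalidated or freed, it drains any in-flight cache write that still holds PG_private_2.

static void sketch_release_folio(struct folio *folio)
{
	/* don't tear the folio down while the cache still writes it out */
	folio_wait_private_2(folio);
	/* ... detach private state and release the folio ... */
}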
From patchwork Thu Jul 15 03:35:17 2021
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v14 031/138] fs/netfs: Add folio fscache functions
Date: Thu, 15 Jul 2021 04:35:17 +0100
Message-Id: <20210715033704.692967-32-willy@infradead.org>

Match the page writeback functions by adding folio_start_fscache(), folio_end_fscache(), folio_wait_fscache() and folio_wait_fscache_killable(). Remove set_page_private_2(). Also rewrite the kernel-doc to describe when to use the function rather than what the function does, and include the kernel-doc in the appropriate rst file. Saves 31 bytes of text in netfs_rreq_unlock() due to set_page_fscache() calling page_folio() once instead of three times.

Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: Christoph Hellwig Acked-by: Kirill A. Shutemov Reported-by: kernel test robot Reported-by: kernel test robot Acked-by: Mike Rapoport Reviewed-by: David Howells
--- Documentation/filesystems/netfs_library.rst | 2 + include/linux/netfs.h | 75 +++++++++++++-------- include/linux/pagemap.h | 16 ----- 3 files changed, 50 insertions(+), 43 deletions(-)
diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst index 57a641847818..bb68d39f03b7 100644 --- a/Documentation/filesystems/netfs_library.rst +++ b/Documentation/filesystems/netfs_library.rst @@ -524,3 +524,5 @@ Note that these methods are passed a pointer to the cache resource structure, not the read request structure as they could be used in other situations where there isn't a read request structure as well, such as writing dirty data to the cache. + +..
kernel-doc:: include/linux/netfs.h diff --git a/include/linux/netfs.h b/include/linux/netfs.h index fad8c6209edd..e03ed7fc3aef 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -22,6 +22,7 @@ * Overload PG_private_2 to give us PG_fscache - this is used to indicate that * a page is currently backed by a local disk cache */ +#define folio_test_fscache(folio) folio_test_private_2(folio) #define PageFsCache(page) PagePrivate2((page)) #define SetPageFsCache(page) SetPagePrivate2((page)) #define ClearPageFsCache(page) ClearPagePrivate2((page)) @@ -29,57 +30,77 @@ #define TestClearPageFsCache(page) TestClearPagePrivate2((page)) /** - * set_page_fscache - Set PG_fscache on a page and take a ref - * @page: The page. + * folio_start_fscache - Start an fscache write on a folio. + * @folio: The folio. * - * Set the PG_fscache (PG_private_2) flag on a page and take the reference - * needed for the VM to handle its lifetime correctly. This sets the flag and - * takes the reference unconditionally, so care must be taken not to set the - * flag again if it's already set. + * Call this function before writing a folio to a local cache. Starting a + * second write before the first one finishes is not allowed. */ -static inline void set_page_fscache(struct page *page) +static inline void folio_start_fscache(struct folio *folio) { - set_page_private_2(page); + VM_BUG_ON_FOLIO(folio_test_private_2(folio), folio); + folio_get(folio); + folio_set_private_2_flag(folio); } /** - * end_page_fscache - Clear PG_fscache and release any waiters - * @page: The page - * - * Clear the PG_fscache (PG_private_2) bit on a page and wake up any sleepers - * waiting for this. The page ref held for PG_private_2 being set is released. + * folio_end_fscache - End an fscache write on a folio. + * @folio: The folio. * - * This is, for example, used when a netfs page is being written to a local - * disk cache, thereby allowing writes to the cache for the same page to be - * serialised. + * Call this function after the folio has been written to the local cache. + * This will wake any sleepers waiting on this folio. */ -static inline void end_page_fscache(struct page *page) +static inline void folio_end_fscache(struct folio *folio) { - folio_end_private_2(page_folio(page)); + folio_end_private_2(folio); } /** - * wait_on_page_fscache - Wait for PG_fscache to be cleared on a page - * @page: The page to wait on + * folio_wait_fscache - Wait for an fscache write on this folio to end. + * @folio: The folio. * - * Wait for PG_fscache (aka PG_private_2) to be cleared on a page. + * If this folio is currently being written to a local cache, wait for + * the write to finish. Another write may start after this one finishes, + * unless the caller holds the folio lock. */ -static inline void wait_on_page_fscache(struct page *page) +static inline void folio_wait_fscache(struct folio *folio) { - folio_wait_private_2(page_folio(page)); + folio_wait_private_2(folio); } /** - * wait_on_page_fscache_killable - Wait for PG_fscache to be cleared on a page - * @page: The page to wait on + * folio_wait_fscache_killable - Wait for an fscache write on this folio to end. + * @folio: The folio. * - * Wait for PG_fscache (aka PG_private_2) to be cleared on a page or until a - * fatal signal is received by the calling task. + * If this folio is currently being written to a local cache, wait + * for the write to finish or for a fatal signal to be received. 
+ * Another write may start after this one finishes, unless the caller + * holds the folio lock. * * Return: * - 0 if successful. * - -EINTR if a fatal signal was encountered. */ +static inline int folio_wait_fscache_killable(struct folio *folio) +{ + return folio_wait_private_2_killable(folio); +} + +static inline void set_page_fscache(struct page *page) +{ + folio_start_fscache(page_folio(page)); +} + +static inline void end_page_fscache(struct page *page) +{ + folio_end_private_2(page_folio(page)); +} + +static inline void wait_on_page_fscache(struct page *page) +{ + folio_wait_private_2(page_folio(page)); +} + static inline int wait_on_page_fscache_killable(struct page *page) { return folio_wait_private_2_killable(page_folio(page));
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index edf58a581bce..08f40e004d97 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -780,22 +780,6 @@ int __set_page_dirty_no_writeback(struct page *page); void page_endio(struct page *page, bool is_write, int err); -/** - * set_page_private_2 - Set PG_private_2 on a page and take a ref - * @page: The page. - * - * Set the PG_private_2 flag on a page and take the reference needed for the VM - * to handle its lifetime correctly. This sets the flag and takes the - * reference unconditionally, so care must be taken not to set the flag again - * if it's already set. - */ -static inline void set_page_private_2(struct page *page) -{ - page = compound_head(page); - get_page(page); - SetPagePrivate2(page); -} - void folio_end_private_2(struct folio *folio); void folio_wait_private_2(struct folio *folio); int folio_wait_private_2_killable(struct folio *folio);
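A paired usage sketch, not part of the patch (sketch_copy_to_cache() and sketch_cache_write_done() are hypothetical): the writer marks the folio before queueing it to the local cache, and the I/O completion path ends the fscache state, waking any folio_wait_fscache() sleepers.

static void sketch_copy_to_cache(struct folio *folio)
{
	folio_start_fscache(folio);	/* takes a folio ref, sets PG_fscache */
	/* ... submit the folio to the cache backend ... */
}

static void sketch_cache_write_done(struct folio *folio)
{
	folio_end_fscache(folio);	/* clears PG_fscache, wakes, drops the ref */
}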
From patchwork Thu Jul 15 03:35:18 2021
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v14 032/138] mm: Add folio_mapped()
Date: Thu, 15 Jul 2021 04:35:18 +0100
Message-Id: <20210715033704.692967-33-willy@infradead.org>

This function is the equivalent of page_mapped(). It is slightly shorter as we do not need to handle the PageTail() case. Reimplement page_mapped() as a wrapper around folio_mapped(). folio_mapped() is 13 bytes smaller than page_mapped(), but the page_mapped() wrapper is 30 bytes, for a net increase of 17 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Kirill A.
Shutemov Acked-by: Mike Rapoport --- include/linux/mm.h | 1 + include/linux/mm_types.h | 6 ++++++ mm/folio-compat.c | 6 ++++++ mm/util.c | 29 ++++++++++++++++------------- 4 files changed, 29 insertions(+), 13 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 9d28f5b2e983..8b79d9dfa6cb 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1767,6 +1767,7 @@ static inline pgoff_t page_index(struct page *page) } bool page_mapped(struct page *page); +bool folio_mapped(struct folio *folio); /* * Return true only if the page has been allocated with diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index c4dd41bb1019..f763aa273d82 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -291,6 +291,12 @@ FOLIO_MATCH(memcg_data, memcg_data); #endif #undef FOLIO_MATCH +static inline atomic_t *folio_mapcount_ptr(struct folio *folio) +{ + struct page *tail = &folio->page + 1; + return &tail->compound_mapcount; +} + static inline atomic_t *compound_mapcount_ptr(struct page *page) { return &page[1].compound_mapcount; diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 3c83f03b80d7..7044fcc8a8aa 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -35,3 +35,9 @@ void wait_for_stable_page(struct page *page) return folio_wait_stable(page_folio(page)); } EXPORT_SYMBOL_GPL(wait_for_stable_page); + +bool page_mapped(struct page *page) +{ + return folio_mapped(page_folio(page)); +} +EXPORT_SYMBOL(page_mapped); diff --git a/mm/util.c b/mm/util.c index 1cde6218d6d1..e8c12350b3eb 100644 --- a/mm/util.c +++ b/mm/util.c @@ -652,28 +652,31 @@ void *page_rmapping(struct page *page) return __page_rmapping(page); } -/* - * Return true if this page is mapped into pagetables. - * For compound page it returns true if any subpage of compound page is mapped. +/** + * folio_mapped - Is this folio mapped into userspace? + * @folio: The folio. + * + * Return: True if any page in this folio is referenced by user page tables. 
*/ -bool page_mapped(struct page *page) +bool folio_mapped(struct folio *folio) { - int i; + int i, nr; - if (likely(!PageCompound(page))) - return atomic_read(&page->_mapcount) >= 0; - page = compound_head(page); - if (atomic_read(compound_mapcount_ptr(page)) >= 0) + if (folio_single(folio)) + return atomic_read(&folio->_mapcount) >= 0; + if (atomic_read(folio_mapcount_ptr(folio)) >= 0) return true; - if (PageHuge(page)) + if (folio_test_hugetlb(folio)) return false; - for (i = 0; i < compound_nr(page); i++) { - if (atomic_read(&page[i]._mapcount) >= 0) + + nr = folio_nr_pages(folio); + for (i = 0; i < nr; i++) { + if (atomic_read(&folio_page(folio, i)->_mapcount) >= 0) return true; } return false; } -EXPORT_SYMBOL(page_mapped); +EXPORT_SYMBOL(folio_mapped); struct anon_vma *page_anon_vma(struct page *page) {
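A sketch of a typical caller, not part of the patch (sketch_folio_is_idle() is hypothetical): reclaim-style code bails out while any page of the folio is still referenced from a page table, then checks the usual dirty/writeback state.

static bool sketch_folio_is_idle(struct folio *folio)
{
	if (folio_mapped(folio))
		return false;	/* some page still referenced by user PTEs */
	return !folio_test_dirty(folio) && !folio_test_writeback(folio);
}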
vp6yL8QnOvmmyScmxzFQz4JbPkG0Lcu4NHT8DgwHTrgMjG0XiL0EUaFhwEnLirmEqElHP2WzrMcNB ygGtq2PQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sb6-002vs0-Pl; Thu, 15 Jul 2021 04:03:53 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 033/138] mm: Add folio_nid() Date: Thu, 15 Jul 2021 04:35:19 +0100 Message-Id: <20210715033704.692967-34-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=dqGGGpaF; spf=none (imf30.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 75CD7E001809 X-Stat-Signature: 4xghno9s4rc1hq6ug78hck8fipt63otx X-HE-Tag: 1626321918-115617 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is the folio equivalent of page_to_nid(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Mike Rapoport Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/mm.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index 8b79d9dfa6cb..c6e2a1682a6d 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1428,6 +1428,11 @@ static inline int page_to_nid(const struct page *page) } #endif +static inline int folio_nid(const struct folio *folio) +{ + return page_to_nid(&folio->page); +} + #ifdef CONFIG_NUMA_BALANCING static inline int cpu_pid_to_cpupid(int cpu, int pid) { From patchwork Thu Jul 15 03:35:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378527 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 590D4C47E48 for ; Thu, 15 Jul 2021 04:06:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0173E61279 for ; Thu, 15 Jul 2021 04:05:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0173E61279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4A3326B00FD; Thu, 15 Jul 2021 00:06:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 42BBE8D002C; Thu, 15 Jul 2021 00:06:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2A5FB6B00FF; Thu, 15 Jul 2021 00:06:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from 
forelay.hostedemail.com (smtprelay0058.hostedemail.com [216.40.44.58]) by kanga.kvack.org (Postfix) with ESMTP id F3EF76B00FD for ; Thu, 15 Jul 2021 00:05:59 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id E1905184A2 for ; Thu, 15 Jul 2021 04:05:58 +0000 (UTC) X-FDA: 78363483996.31.08ED334 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf20.hostedemail.com (Postfix) with ESMTP id 9B1FCD0000A9 for ; Thu, 15 Jul 2021 04:05:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Hzd335Ag+ayCqdQrtUoVggNp2H/a7Ca0++VzF2umYOo=; b=ijiZPSJgjj3cmvp+8V5rwBMe1L UR/J+HoVUaBIskPvahiVSxkXLXxiYC0mfbt/1FGQ/6rdNLhybFg7v5rP5J6UfQp7Ub7ceuZPLpmcs Cd55ds2y3tWdM/sfnuNch+YlCemr0qG5zb+C5Xj2rparWUIQwLUGgCPF+zVYDItOhLc+Lv1CUoU8m QtrQVNlIFp8Qjj7eaXHkgpU201TIclCd5sP6cfNrv6p2p64V39RQzsW9ivrpy0LzZVzPq47FEJbn2 dlW2u77KAIb5vvShPDma40nvPza3AkUrkl451/xfqWkPtHBtZKbiJAYCaAJwrA/IDUqUvTx7sozuJ XLQwMyAA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sbo-002vuY-C1; Thu, 15 Jul 2021 04:04:56 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Michal Hocko , Johannes Weiner Subject: [PATCH v14 034/138] mm/memcg: Remove 'page' parameter to mem_cgroup_charge_statistics() Date: Thu, 15 Jul 2021 04:35:20 +0100 Message-Id: <20210715033704.692967-35-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 9B1FCD0000A9 X-Stat-Signature: zixhgfgqhrfcz75e66gc5p31359wzexh Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=ijiZPSJg; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626321958-200274 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The last use of 'page' was removed by commit 468c398233da ("mm: memcontrol: switch to native NR_ANON_THPS counter"), so we can now remove the parameter from the function. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Michal Hocko Acked-by: Johannes Weiner Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/memcontrol.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index ae1f5d0cb581..ee892daecb8b 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -831,7 +831,6 @@ static unsigned long memcg_events_local(struct mem_cgroup *memcg, int event) } static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg, - struct page *page, int nr_pages) { /* pagein of a big page is an event. 
So, ignore page size */ @@ -5692,9 +5691,9 @@ static int mem_cgroup_move_account(struct page *page, ret = 0; local_irq_disable(); - mem_cgroup_charge_statistics(to, page, nr_pages); + mem_cgroup_charge_statistics(to, nr_pages); memcg_check_events(to, page); - mem_cgroup_charge_statistics(from, page, -nr_pages); + mem_cgroup_charge_statistics(from, -nr_pages); memcg_check_events(from, page); local_irq_enable(); out_unlock: @@ -6715,7 +6714,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, commit_charge(page, memcg); local_irq_disable(); - mem_cgroup_charge_statistics(memcg, page, nr_pages); + mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, page); local_irq_enable(); out: @@ -7006,7 +7005,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) commit_charge(newpage, memcg); local_irq_save(flags); - mem_cgroup_charge_statistics(memcg, newpage, nr_pages); + mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, newpage); local_irq_restore(flags); } @@ -7236,7 +7235,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) * only synchronisation we have for updating the per-CPU variables. */ VM_BUG_ON(!irqs_disabled()); - mem_cgroup_charge_statistics(memcg, page, -nr_entries); + mem_cgroup_charge_statistics(memcg, -nr_entries); memcg_check_events(memcg, page); css_put(&memcg->css); From patchwork Thu Jul 15 03:35:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378529 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB5A5C07E96 for ; Thu, 15 Jul 2021 04:06:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5B63B610C7 for ; Thu, 15 Jul 2021 04:06:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5B63B610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B33C16B00FF; Thu, 15 Jul 2021 00:06:10 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AE2AE6B0100; Thu, 15 Jul 2021 00:06:10 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9835C8D002C; Thu, 15 Jul 2021 00:06:10 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0038.hostedemail.com [216.40.44.38]) by kanga.kvack.org (Postfix) with ESMTP id 6A8546B00FF for ; Thu, 15 Jul 2021 00:06:10 -0400 (EDT) Received: from smtpin40.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 5A9FD8248047 for ; Thu, 15 Jul 2021 04:06:09 +0000 (UTC) X-FDA: 78363484458.40.9392275 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf08.hostedemail.com (Postfix) with ESMTP id 0F00230000A9 for ; Thu, 15 Jul 2021 04:06:08 +0000 (UTC) DKIM-Signature: 
v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=NrDQtj4djFfXw8xNxP1weUzjA286Ja1TcVGDP+T/Mwo=; b=DR2dn0MqW0gdeZP6hP+Nd8fMui s2HoysDWMlffyZAvzibzeDvZBfqR9qoQy/MIp5G3sv8iNbbVQWcqwIbnLGp+GPw0xrHyzMdRvAdea oQ6qa/+i6tf/g2/Mx/epief/yVxWNMgrJ5kVFzJUNJkYFcPB2HhQmNoRxzRglGhExhNZNm4IuAvNa YcSjLrtcJVNNsRm3ectBmMxoJyAdp2K2hIW4yloWDWZag9hSjvd8fj8c5qGrWkZzPQXc5AN/PfkD0 ajSM9/NNfeXVkMx2qluKHltxJuJndxPJNpbozqXcLUH7Z5PzAUQQWPplOqnNFHmS8Lu4l9tqlWZXm 472IJbEA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sca-002vy0-Hi; Thu, 15 Jul 2021 04:05:17 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Michal Hocko , Johannes Weiner , Christoph Hellwig Subject: [PATCH v14 035/138] mm/memcg: Use the node id in mem_cgroup_update_tree() Date: Thu, 15 Jul 2021 04:35:21 +0100 Message-Id: <20210715033704.692967-36-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 0F00230000A9 X-Stat-Signature: ymymbbfsugdqwgs7oxzkq71rgc5tbych Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DR2dn0Mq; spf=none (imf08.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626321968-788559 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: By using the node id in mem_cgroup_update_tree(), we can delete soft_limit_tree_from_page() and mem_cgroup_page_nodeinfo(). Saves 42 bytes of kernel text on my config. 
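In outline, the change moves the page-to-node lookup to the caller and lets the helpers index per-node structures directly. A condensed sketch, not a literal hunk; all identifiers are from the diff below:

	/* Before: each helper re-derived the node from the page. */
	mctz = soft_limit_tree_from_page(page);		/* page_to_nid() inside */
	mz = mem_cgroup_page_nodeinfo(memcg, page);	/* page_to_nid() inside */

	/* After: the caller resolves the node id once ... */
	mem_cgroup_update_tree(memcg, page_to_nid(page));

	/* ... so mem_cgroup_update_tree() can use plain array indexing. */
	mctz = soft_limit_tree_node(nid);
	mz = memcg->nodeinfo[nid];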
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Michal Hocko Acked-by: Johannes Weiner Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/memcontrol.c | 24 ++++-------------------- 1 file changed, 4 insertions(+), 20 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index ee892daecb8b..d57ff5c5d330 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -451,28 +451,12 @@ ino_t page_cgroup_ino(struct page *page) return ino; } -static struct mem_cgroup_per_node * -mem_cgroup_page_nodeinfo(struct mem_cgroup *memcg, struct page *page) -{ - int nid = page_to_nid(page); - - return memcg->nodeinfo[nid]; -} - static struct mem_cgroup_tree_per_node * soft_limit_tree_node(int nid) { return soft_limit_tree.rb_tree_per_node[nid]; } -static struct mem_cgroup_tree_per_node * -soft_limit_tree_from_page(struct page *page) -{ - int nid = page_to_nid(page); - - return soft_limit_tree.rb_tree_per_node[nid]; -} - static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz, struct mem_cgroup_tree_per_node *mctz, unsigned long new_usage_in_excess) @@ -543,13 +527,13 @@ static unsigned long soft_limit_excess(struct mem_cgroup *memcg) return excess; } -static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page) +static void mem_cgroup_update_tree(struct mem_cgroup *memcg, int nid) { unsigned long excess; struct mem_cgroup_per_node *mz; struct mem_cgroup_tree_per_node *mctz; - mctz = soft_limit_tree_from_page(page); + mctz = soft_limit_tree_node(nid); if (!mctz) return; /* @@ -557,7 +541,7 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page) * because their event counter is not touched. */ for (; memcg; memcg = parent_mem_cgroup(memcg)) { - mz = mem_cgroup_page_nodeinfo(memcg, page); + mz = memcg->nodeinfo[nid]; excess = soft_limit_excess(memcg); /* * We have to update the tree if mz is on RB-tree or @@ -884,7 +868,7 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page) MEM_CGROUP_TARGET_SOFTLIMIT); mem_cgroup_threshold(memcg); if (unlikely(do_softlimit)) - mem_cgroup_update_tree(memcg, page); + mem_cgroup_update_tree(memcg, page_to_nid(page)); } } From patchwork Thu Jul 15 03:35:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378533 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2700FC1B08C for ; Thu, 15 Jul 2021 04:07:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B71DB61279 for ; Thu, 15 Jul 2021 04:07:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B71DB61279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 16D2B6B0103; Thu, 15 Jul 2021 00:07:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 144C68D002C; Thu, 15 Jul 2021 00:07:21 
-0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F27586B0105; Thu, 15 Jul 2021 00:07:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0183.hostedemail.com [216.40.44.183]) by kanga.kvack.org (Postfix) with ESMTP id C6E246B0103 for ; Thu, 15 Jul 2021 00:07:20 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id BF73522BE9 for ; Thu, 15 Jul 2021 04:07:19 +0000 (UTC) X-FDA: 78363487398.31.B02C50A Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf03.hostedemail.com (Postfix) with ESMTP id 7E9AC3000187 for ; Thu, 15 Jul 2021 04:07:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ZszZ6tgqEqvbn4yMeCdI0iO4PNst4fp1/dzYOIQ1JeY=; b=wPmYhWokc0h/1jnZH2rdEpIe5+ sTk9Y6WM2M+h0b3SId3ix0NQhr40fpWL05ij7pr7ZqBI1RNqoEcjCiaX8EmGEOXFBy5n+47n1Oz1d rKneLvrq185zq67z+0wIEhpPL33ForXtfb0gVCyzHqW6dreZErrjjbXJdV3sUUg4xSgEsAfkJRBtC w/GGubJW8pwefwkZd9GpGLt9ih8CtFMeq/2WuSYbh4RHg48Cm/Z8sH+OyX96EBNGVRei9LUiErWW6 AqbY2LNfpo2OYzAIn6IDMhOKvN8Sa7Vs8u5M7ZnjxeMaND1WC901zIIKl1iYMarodvinheZLDVUEH 0rOsKWOQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sd9-002w0Q-Aa; Thu, 15 Jul 2021 04:05:57 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Michal Hocko , Johannes Weiner , Christoph Hellwig Subject: [PATCH v14 036/138] mm/memcg: Remove soft_limit_tree_node() Date: Thu, 15 Jul 2021 04:35:22 +0100 Message-Id: <20210715033704.692967-37-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=wPmYhWok; spf=none (imf03.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: tw7ihqfkjjustuxzxtodnfxcxqp5e4r9 X-Rspamd-Queue-Id: 7E9AC3000187 X-HE-Tag: 1626322039-554526 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Opencode this one-line function in its three callers. 
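Schematically, the transformation at each of the three call sites is as follows (both sides are taken from the diff below):

	/* The one-line helper being removed: */
	static struct mem_cgroup_tree_per_node *soft_limit_tree_node(int nid)
	{
		return soft_limit_tree.rb_tree_per_node[nid];
	}

	/* Its body, open-coded at the call site: */
	mctz = soft_limit_tree.rb_tree_per_node[nid];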
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Michal Hocko Acked-by: Johannes Weiner Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/memcontrol.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index d57ff5c5d330..f70e33d691aa 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -451,12 +451,6 @@ ino_t page_cgroup_ino(struct page *page) return ino; } -static struct mem_cgroup_tree_per_node * -soft_limit_tree_node(int nid) -{ - return soft_limit_tree.rb_tree_per_node[nid]; -} - static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz, struct mem_cgroup_tree_per_node *mctz, unsigned long new_usage_in_excess) @@ -533,7 +527,7 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, int nid) struct mem_cgroup_per_node *mz; struct mem_cgroup_tree_per_node *mctz; - mctz = soft_limit_tree_node(nid); + mctz = soft_limit_tree.rb_tree_per_node[nid]; if (!mctz) return; /* @@ -572,7 +566,7 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg) for_each_node(nid) { mz = memcg->nodeinfo[nid]; - mctz = soft_limit_tree_node(nid); + mctz = soft_limit_tree.rb_tree_per_node[nid]; if (mctz) mem_cgroup_remove_exceeded(mz, mctz); } @@ -3420,7 +3414,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, if (order > 0) return 0; - mctz = soft_limit_tree_node(pgdat->node_id); + mctz = soft_limit_tree.rb_tree_per_node[pgdat->node_id]; /* * Do not even bother to check the largest node if the root From patchwork Thu Jul 15 03:35:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378549 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 509F7C07E96 for ; Thu, 15 Jul 2021 04:08:09 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D824D610C7 for ; Thu, 15 Jul 2021 04:08:08 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D824D610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2A4D26B0105; Thu, 15 Jul 2021 00:08:09 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 254428D002C; Thu, 15 Jul 2021 00:08:09 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0F5836B0107; Thu, 15 Jul 2021 00:08:09 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0050.hostedemail.com [216.40.44.50]) by kanga.kvack.org (Postfix) with ESMTP id DCF056B0105 for ; Thu, 15 Jul 2021 00:08:08 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id D88188248047 for ; Thu, 15 Jul 2021 04:08:07 +0000 (UTC) X-FDA: 78363489414.20.93D286F Received: from 
casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id 92275F00008E for ; Thu, 15 Jul 2021 04:08:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=CxY08Ev5uWlF/WW7yp4XrHQ+Z+LsImiejgZH8Eqdor0=; b=RKrbLkVFivA1/kX9kMTQh7qVP/ fmpA4Vbu25qCarcS9FXuAGD5ITwznDdH3orb2GMY//TAMSbFDvs2OaNRb1cP03tUhBdOakxUwe0m6 AJc9UArs3jC9rRErAgTuyC2aGXZ6jZSozojNQkHmMemtq9R4CwA1UOWYgWaNJuUGJeJAtdUgOso5i faUSnnS0gTDIkO6EjGzYA4dwd4Dl+HSbh55CQTCMUQmzknXDYzRHlwkY9PblC5O9HS//VXm0QALA3 lX7VbWjGkXfEK7N9Am5bJMegqBsWJebTIXpy4EnLXcZFcvmQS17cfmikf51E0/tI3+JUW6S8EdmKO 2JXxHlqw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sdc-002w2d-Iw; Thu, 15 Jul 2021 04:06:35 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Michal Hocko , Christoph Hellwig Subject: [PATCH v14 037/138] mm/memcg: Convert memcg_check_events to take a node ID Date: Thu, 15 Jul 2021 04:35:23 +0100 Message-Id: <20210715033704.692967-38-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=RKrbLkVF; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: uhyudkunqcwqm7i36jfu4kx6dd931bcq X-Rspamd-Queue-Id: 92275F00008E X-Rspamd-Server: rspam01 X-HE-Tag: 1626322087-772396 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: memcg_check_events only uses the page's nid, so call page_to_nid in the callers to make the interface easier to understand. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Michal Hocko Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/memcontrol.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index f70e33d691aa..1a049bfa0e0a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -851,7 +851,7 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg, * Check events in order. * */ -static void memcg_check_events(struct mem_cgroup *memcg, struct page *page) +static void memcg_check_events(struct mem_cgroup *memcg, int nid) { /* threshold event is triggered in finer grain than soft limit */ if (unlikely(mem_cgroup_event_ratelimit(memcg, @@ -862,7 +862,7 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page) MEM_CGROUP_TARGET_SOFTLIMIT); mem_cgroup_threshold(memcg); if (unlikely(do_softlimit)) - mem_cgroup_update_tree(memcg, page_to_nid(page)); + mem_cgroup_update_tree(memcg, nid); } } @@ -5578,7 +5578,7 @@ static int mem_cgroup_move_account(struct page *page, struct lruvec *from_vec, *to_vec; struct pglist_data *pgdat; unsigned int nr_pages = compound ? 
thp_nr_pages(page) : 1; - int ret; + int nid, ret; VM_BUG_ON(from == to); VM_BUG_ON_PAGE(PageLRU(page), page); @@ -5667,12 +5667,13 @@ static int mem_cgroup_move_account(struct page *page, __unlock_page_memcg(from); ret = 0; + nid = page_to_nid(page); local_irq_disable(); mem_cgroup_charge_statistics(to, nr_pages); - memcg_check_events(to, page); + memcg_check_events(to, nid); mem_cgroup_charge_statistics(from, -nr_pages); - memcg_check_events(from, page); + memcg_check_events(from, nid); local_irq_enable(); out_unlock: unlock_page(page); @@ -6693,7 +6694,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); - memcg_check_events(memcg, page); + memcg_check_events(memcg, page_to_nid(page)); local_irq_enable(); out: return ret; @@ -6801,7 +6802,7 @@ struct uncharge_gather { unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; - struct page *dummy_page; + int nid; }; static inline void uncharge_gather_clear(struct uncharge_gather *ug) @@ -6825,7 +6826,7 @@ static void uncharge_batch(const struct uncharge_gather *ug) local_irq_save(flags); __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory); - memcg_check_events(ug->memcg, ug->dummy_page); + memcg_check_events(ug->memcg, ug->nid); local_irq_restore(flags); /* drop reference from uncharge_page */ @@ -6866,7 +6867,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug) uncharge_gather_clear(ug); } ug->memcg = memcg; - ug->dummy_page = page; + ug->nid = page_to_nid(page); /* pairs with css_put in uncharge_batch */ css_get(&memcg->css); @@ -6984,7 +6985,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) local_irq_save(flags); mem_cgroup_charge_statistics(memcg, nr_pages); - memcg_check_events(memcg, newpage); + memcg_check_events(memcg, page_to_nid(newpage)); local_irq_restore(flags); } @@ -7214,7 +7215,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) */ VM_BUG_ON(!irqs_disabled()); mem_cgroup_charge_statistics(memcg, -nr_entries); - memcg_check_events(memcg, page); + memcg_check_events(memcg, page_to_nid(page)); css_put(&memcg->css); } From patchwork Thu Jul 15 03:35:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378551 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BC60C07E96 for ; Thu, 15 Jul 2021 04:09:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 34FEF610C7 for ; Thu, 15 Jul 2021 04:09:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 34FEF610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8BEA96B0107; Thu, 15 Jul 2021 00:09:13 -0400 (EDT) Received: by kanga.kvack.org 
(Postfix, from userid 40) id 895136B0108; Thu, 15 Jul 2021 00:09:13 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 75C258D002C; Thu, 15 Jul 2021 00:09:13 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0149.hostedemail.com [216.40.44.149]) by kanga.kvack.org (Postfix) with ESMTP id 4AC7B6B0107 for ; Thu, 15 Jul 2021 00:09:13 -0400 (EDT) Received: from smtpin39.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 378C21802EF3A for ; Thu, 15 Jul 2021 04:09:12 +0000 (UTC) X-FDA: 78363492144.39.7D70173 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf03.hostedemail.com (Postfix) with ESMTP id 9C96F30001A1 for ; Thu, 15 Jul 2021 04:09:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=1oPzmTcWDwHQdw75PYolhuv7fLZWOVFGSl3QsN5aXeg=; b=HGNxad1sGeexx4qEp/zR9kK6+h n0N26OveaAllKf3TtUG/Dsx7csnvQ8/rA6d1L91YcANlF+MfHhpsIOJXnI5zo7KxVg1eQfd/EumsU bmcLpjyk1jlkN4aPm42lTIm2PC5Ark9/3IwhGMl+c2zGhSLacfoHT5FCNNGJT2PabXp7QiYbQ+reA ZBc5GfZeo44aIngR/G+JBU8KVbXsITijVFNqDIwz0QKq4BTaf1clxcjBmjRny/KdnB4KUTdNgJ0IK bX0qGuC4o3OE/v7gD/SpgPXuf7xCWJgUP+PmLkhBgiG/2Wj1L1q6BoXGmXRddCiWKzomHTzskuH1x SQdSdCiA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3ses-002wA4-BA; Thu, 15 Jul 2021 04:07:48 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 038/138] mm/memcg: Add folio_memcg() and related functions Date: Thu, 15 Jul 2021 04:35:24 +0100 Message-Id: <20210715033704.692967-39-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 9C96F30001A1 X-Stat-Signature: t9dgeng5z1wb16145yokuxfdd3angs7u Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=HGNxad1s; dmarc=none; spf=none (imf03.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626322151-925776 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: memcg information is only stored in the head page, so the memcg subsystem needs to ensure that all accesses are to the head page. The first step is converting page_memcg() to folio_memcg(). The callers of page_memcg() and PageMemcgKmem() are not yet ready to be converted to use folios, so retain them as wrappers around folio_memcg() and folio_memcg_kmem(). They will be converted in a later patch set.
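The wrappers themselves are one-liners that funnel legacy page-based callers through page_folio(), so only the folio variants ever touch memcg_data; both are quoted from the diff below:

	static inline struct mem_cgroup *page_memcg(struct page *page)
	{
		return folio_memcg(page_folio(page));
	}

	static inline bool PageMemcgKmem(struct page *page)
	{
		return folio_memcg_kmem(page_folio(page));
	}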
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 109 ++++++++++++++++++++++--------------- mm/memcontrol.c | 21 ++++--- 2 files changed, 77 insertions(+), 53 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index bfe5c486f4ad..eabae5874161 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -372,6 +372,7 @@ enum page_memcg_data_flags { #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) static inline bool PageMemcgKmem(struct page *page); +static inline bool folio_memcg_kmem(struct folio *folio); /* * After the initialization objcg->memcg is always pointing at @@ -386,73 +387,77 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) } /* - * __page_memcg - get the memory cgroup associated with a non-kmem page - * @page: a pointer to the page struct + * __folio_memcg - Get the memory cgroup associated with a non-kmem folio + * @folio: Pointer to the folio. * - * Returns a pointer to the memory cgroup associated with the page, - * or NULL. This function assumes that the page is known to have a + * Returns a pointer to the memory cgroup associated with the folio, + * or NULL. This function assumes that the folio is known to have a * proper memory cgroup pointer. It's not safe to call this function - * against some type of pages, e.g. slab pages or ex-slab pages or - * kmem pages. + * against some type of folios, e.g. slab folios or ex-slab folios or + * kmem folios. */ -static inline struct mem_cgroup *__page_memcg(struct page *page) +static inline struct mem_cgroup *__folio_memcg(struct folio *folio) { - unsigned long memcg_data = page->memcg_data; + unsigned long memcg_data = folio->memcg_data; - VM_BUG_ON_PAGE(PageSlab(page), page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page); + VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } /* - * __page_objcg - get the object cgroup associated with a kmem page - * @page: a pointer to the page struct + * __folio_objcg - get the object cgroup associated with a kmem folio. + * @folio: Pointer to the folio. * - * Returns a pointer to the object cgroup associated with the page, - * or NULL. This function assumes that the page is known to have a + * Returns a pointer to the object cgroup associated with the folio, + * or NULL. This function assumes that the folio is known to have a * proper object cgroup pointer. It's not safe to call this function - * against some type of pages, e.g. slab pages or ex-slab pages or - * LRU pages. + * against some type of folios, e.g. slab folios or ex-slab folios or + * LRU folios. 
*/ -static inline struct obj_cgroup *__page_objcg(struct page *page) +static inline struct obj_cgroup *__folio_objcg(struct folio *folio) { - unsigned long memcg_data = page->memcg_data; + unsigned long memcg_data = folio->memcg_data; - VM_BUG_ON_PAGE(PageSlab(page), page); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page); - VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page); + VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); + VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); } /* - * page_memcg - get the memory cgroup associated with a page - * @page: a pointer to the page struct + * folio_memcg - Get the memory cgroup associated with a folio. + * @folio: Pointer to the folio. * - * Returns a pointer to the memory cgroup associated with the page, - * or NULL. This function assumes that the page is known to have a + * Returns a pointer to the memory cgroup associated with the folio, + * or NULL. This function assumes that the folio is known to have a * proper memory cgroup pointer. It's not safe to call this function - * against some type of pages, e.g. slab pages or ex-slab pages. + * against some type of folios, e.g. slab folios or ex-slab folios. * - * For a non-kmem page any of the following ensures page and memcg binding + * For a non-kmem folio any of the following ensures folio and memcg binding * stability: * - * - the page lock + * - the folio lock * - LRU isolation * - lock_page_memcg() * - exclusive reference * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * For a kmem folio a caller should hold an rcu read lock to protect memcg + * associated with a kmem folio from being released. */ +static inline struct mem_cgroup *folio_memcg(struct folio *folio) +{ + if (folio_memcg_kmem(folio)) + return obj_cgroup_memcg(__folio_objcg(folio)); + return __folio_memcg(folio); +} + static inline struct mem_cgroup *page_memcg(struct page *page) { - if (PageMemcgKmem(page)) - return obj_cgroup_memcg(__page_objcg(page)); - else - return __page_memcg(page); + return folio_memcg(page_folio(page)); } /* @@ -525,17 +530,18 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) #ifdef CONFIG_MEMCG_KMEM /* - * PageMemcgKmem - check if the page has MemcgKmem flag set - * @page: a pointer to the page struct + * folio_memcg_kmem - Check if the folio has the memcg_kmem flag set. + * @folio: Pointer to the folio. * - * Checks if the page has MemcgKmem flag set. The caller must ensure that - * the page has an associated memory cgroup. It's not safe to call this function - * against some types of pages, e.g. slab pages. + * Checks if the folio has MemcgKmem flag set. The caller must ensure + * that the folio has an associated memory cgroup. It's not safe to call + * this function against some types of folios, e.g. slab folios. 
*/ -static inline bool PageMemcgKmem(struct page *page) +static inline bool folio_memcg_kmem(struct folio *folio) { - VM_BUG_ON_PAGE(page->memcg_data & MEMCG_DATA_OBJCGS, page); - return page->memcg_data & MEMCG_DATA_KMEM; + VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page); + VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio); + return folio->memcg_data & MEMCG_DATA_KMEM; } /* @@ -579,7 +585,7 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page) } #else -static inline bool PageMemcgKmem(struct page *page) +static inline bool folio_memcg_kmem(struct folio *folio) { return false; } @@ -595,6 +601,11 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page) } #endif +static inline bool PageMemcgKmem(struct page *page) +{ + return folio_memcg_kmem(page_folio(page)); +} + static __always_inline bool memcg_stat_item_in_bytes(int idx) { if (idx == MEMCG_PERCPU_B) @@ -1106,6 +1117,11 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, #define MEM_CGROUP_ID_SHIFT 0 #define MEM_CGROUP_ID_MAX 0 +static inline struct mem_cgroup *folio_memcg(struct folio *folio) +{ + return NULL; +} + static inline struct mem_cgroup *page_memcg(struct page *page) { return NULL; @@ -1122,6 +1138,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page) return NULL; } +static inline bool folio_memcg_kmem(struct folio *folio) +{ + return false; +} + static inline bool PageMemcgKmem(struct page *page) { return false; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 1a049bfa0e0a..f0f781dde37a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3050,15 +3050,16 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order) */ void __memcg_kmem_uncharge_page(struct page *page, int order) { + struct folio *folio = page_folio(page); struct obj_cgroup *objcg; unsigned int nr_pages = 1 << order; - if (!PageMemcgKmem(page)) + if (!folio_memcg_kmem(folio)) return; - objcg = __page_objcg(page); + objcg = __folio_objcg(folio); obj_cgroup_uncharge_pages(objcg, nr_pages); - page->memcg_data = 0; + folio->memcg_data = 0; obj_cgroup_put(objcg); } @@ -3290,17 +3291,18 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) */ void split_page_memcg(struct page *head, unsigned int nr) { - struct mem_cgroup *memcg = page_memcg(head); + struct folio *folio = page_folio(head); + struct mem_cgroup *memcg = folio_memcg(folio); int i; if (mem_cgroup_disabled() || !memcg) return; for (i = 1; i < nr; i++) - head[i].memcg_data = head->memcg_data; + folio_page(folio, i)->memcg_data = folio->memcg_data; - if (PageMemcgKmem(head)) - obj_cgroup_get_many(__page_objcg(head), nr - 1); + if (folio_memcg_kmem(folio)) + obj_cgroup_get_many(__folio_objcg(folio), nr - 1); else css_get_many(&memcg->css, nr - 1); } @@ -6835,6 +6837,7 @@ static void uncharge_batch(const struct uncharge_gather *ug) static void uncharge_page(struct page *page, struct uncharge_gather *ug) { + struct folio *folio = page_folio(page); unsigned long nr_pages; struct mem_cgroup *memcg; struct obj_cgroup *objcg; @@ -6848,14 +6851,14 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug) * exclusive access to the page. */ if (use_objcg) { - objcg = __page_objcg(page); + objcg = __folio_objcg(folio); /* * This get matches the put at the end of the function and * kmem pages do not hold memcg references anymore. 
*/ memcg = get_mem_cgroup_from_objcg(objcg); } else { - memcg = __page_memcg(page); + memcg = __folio_memcg(folio); } if (!memcg) From patchwork Thu Jul 15 03:35:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378553 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 890BDC07E96 for ; Thu, 15 Jul 2021 04:09:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 215C561279 for ; Thu, 15 Jul 2021 04:09:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 215C561279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 75DE16B0109; Thu, 15 Jul 2021 00:09:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 734846B010A; Thu, 15 Jul 2021 00:09:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5D58F8D002C; Thu, 15 Jul 2021 00:09:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 3A46E6B0109 for ; Thu, 15 Jul 2021 00:09:40 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 21772231D7 for ; Thu, 15 Jul 2021 04:09:39 +0000 (UTC) X-FDA: 78363493278.06.CCD7865 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id C0C5AF00008C for ; Thu, 15 Jul 2021 04:09:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=0QLtHOc4x0e1CEnJuFY1gtKEJczX8P1FbBMjaaLDDkY=; b=HHlSmGkc0VgdBD8A59wab7MarI ugRFvpAScixEOeCdwYpUzL6XbijM5Tbd7rRI23wMqwBZ9R1ETFhK7TKWNPRjac44duahBNzUeA6XV KJ3h72W82w58rBBAUNJns9/ZYdLO3mMfcGayBvpVO23X2nfRA9+n379QslCHlbLQDHgTLwEH6fczo BZp4kf0tjjJqtjUbtMa1XpqCXygytJVRhMhrw9jdCwTDWLa3J8LH9HA05z2Nm8x/f+c1/i6arbHEn CBE3vj82oomPw+gFZVKUdNx0gxm3ryekdBIJwybv66RB/UKcJex4eJayXcN7S0ZjgOAW1i2UVSS/C 84BgfWYQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sfV-002wCk-K7; Thu, 15 Jul 2021 04:08:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Michal Hocko Subject: [PATCH v14 039/138] mm/memcg: Convert commit_charge() to take a folio Date: Thu, 15 Jul 2021 04:35:25 +0100 Message-Id: <20210715033704.692967-40-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: 
<20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: C0C5AF00008C X-Stat-Signature: 1qxfohijkxypn49djrsjgoxxt5ghsee8 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=HHlSmGkc; dmarc=none; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626322178-713395 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The memcg_data is only set on the head page, so enforce that by typing it as a folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Michal Hocko Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/memcontrol.c | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index f0f781dde37a..c2ffad021e09 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2769,9 +2769,9 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages) } #endif -static void commit_charge(struct page *page, struct mem_cgroup *memcg) +static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) { - VM_BUG_ON_PAGE(page_memcg(page), page); + VM_BUG_ON_FOLIO(folio_memcg(folio), folio); /* * Any of the following ensures page's memcg stability: * @@ -2780,7 +2780,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg) * - lock_page_memcg() * - exclusive reference */ - page->memcg_data = (unsigned long)memcg; + folio->memcg_data = (unsigned long)memcg; } static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg) @@ -6684,7 +6684,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, gfp_t gfp) { - unsigned int nr_pages = thp_nr_pages(page); + struct folio *folio = page_folio(page); + unsigned int nr_pages = folio_nr_pages(folio); int ret; ret = try_charge(memcg, gfp, nr_pages); @@ -6692,7 +6693,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, goto out; css_get(&memcg->css); - commit_charge(page, memcg); + commit_charge(folio, memcg); local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); @@ -6952,21 +6953,21 @@ void mem_cgroup_uncharge_list(struct list_head *page_list) */ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) { + struct folio *newfolio = page_folio(newpage); struct mem_cgroup *memcg; - unsigned int nr_pages; + unsigned int nr_pages = folio_nr_pages(newfolio); unsigned long flags; VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage); - VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); - VM_BUG_ON_PAGE(PageAnon(oldpage) != PageAnon(newpage), newpage); - VM_BUG_ON_PAGE(PageTransHuge(oldpage) != PageTransHuge(newpage), - newpage); + VM_BUG_ON_FOLIO(!folio_test_locked(newfolio), newfolio); + VM_BUG_ON_FOLIO(PageAnon(oldpage) != folio_test_anon(newfolio), newfolio); + VM_BUG_ON_FOLIO(compound_nr(oldpage) != nr_pages, newfolio); if (mem_cgroup_disabled()) return; /* Page cache replacement: new page already charged? */ - if (page_memcg(newpage)) + if (folio_memcg(newfolio)) return; memcg = page_memcg(oldpage); @@ -6975,8 +6976,6 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) return; /* Force-charge the new page. 
The old one will be freed soon */ - nr_pages = thp_nr_pages(newpage); - if (!mem_cgroup_is_root(memcg)) { page_counter_charge(&memcg->memory, nr_pages); if (do_memsw_account()) @@ -6984,7 +6983,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) } css_get(&memcg->css); - commit_charge(newpage, memcg); + commit_charge(newfolio, memcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, nr_pages); From patchwork Thu Jul 15 03:35:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 506ABC07E96 for ; Thu, 15 Jul 2021 04:10:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DC84B61279 for ; Thu, 15 Jul 2021 04:10:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DC84B61279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3DAA86B010B; Thu, 15 Jul 2021 00:10:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3B1386B010C; Thu, 15 Jul 2021 00:10:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2517E8D002C; Thu, 15 Jul 2021 00:10:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0055.hostedemail.com [216.40.44.55]) by kanga.kvack.org (Postfix) with ESMTP id 0241C6B010B for ; Thu, 15 Jul 2021 00:10:31 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id E8A4C235DB for ; Thu, 15 Jul 2021 04:10:30 +0000 (UTC) X-FDA: 78363495420.10.AB08370 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id 9597A90000A3 for ; Thu, 15 Jul 2021 04:10:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=n9FMNVV0nEJ7hvJZ0pb9gKbIhLknZramBVLPtozcAo0=; b=HNdu5vRyNvBU8U79F7vqYg8xRo C1Om31R9IyH9b9r7a4RA9ciz7r6gr/gqQsdCzHgul5LEIAKSfqYck2O6PJjTeCb+lElFbWW0bpdOr PyLGxudnnkxGtH8UIz1en36AuRWOGFKgzLdCq17FDvCNe169FO+V7jBD3gWjKFKyDxXvr9eL7z9mc K/8h77GHkM5X80zpu8+hGE5xtFmGJsudPttkdtw0n6PE8Q2g78Pmsgk0hW92NyhJYXuloHIYf8b7v NbwJRF8+5Da8cDWTuX5neBxEoefZOy0ELCr6cwS8uE6pONJewx/rRKagOgoofWZStCtvgZU90Gfif dCDrQ7QA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sgL-002wEb-SE; Thu, 15 Jul 2021 04:09:14 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: 
[PATCH v14 040/138] mm/memcg: Convert mem_cgroup_charge() to take a folio Date: Thu, 15 Jul 2021 04:35:26 +0100 Message-Id: <20210715033704.692967-41-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=HNdu5vRy; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: gsnxhwtuknwgnjgd8fgqdnu7z13bokkm X-Rspamd-Queue-Id: 9597A90000A3 X-HE-Tag: 1626322230-213806 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert all callers of mem_cgroup_charge() to call page_folio() on the page they're currently passing in. Many of them will be converted to use folios themselves soon. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 6 +++--- kernel/events/uprobes.c | 3 ++- mm/filemap.c | 2 +- mm/huge_memory.c | 2 +- mm/khugepaged.c | 4 ++-- mm/ksm.c | 3 ++- mm/memcontrol.c | 26 +++++++++++++------------- mm/memory.c | 9 +++++---- mm/migrate.c | 2 +- mm/shmem.c | 2 +- mm/userfaultfd.c | 2 +- 11 files changed, 32 insertions(+), 29 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index eabae5874161..fb7b87d8e794 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -704,7 +704,7 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg) page_counter_read(&memcg->memory); } -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask); +int mem_cgroup_charge(struct folio *, struct mm_struct *, gfp_t); int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm, gfp_t gfp, swp_entry_t entry); void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry); @@ -1190,8 +1190,8 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg) return false; } -static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask) +static inline int mem_cgroup_charge(struct folio *folio, + struct mm_struct *mm, gfp_t gfp) { return 0; } diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index af24dc3febbe..6357c3580d07 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -167,7 +167,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr, addr + PAGE_SIZE); if (new_page) { - err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL); + err = mem_cgroup_charge(page_folio(new_page), vma->vm_mm, + GFP_KERNEL); if (err) return err; } diff --git a/mm/filemap.c b/mm/filemap.c index a5d02ec62eb6..525f69316522 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -872,7 +872,7 @@ noinline int __add_to_page_cache_locked(struct page *page, page->index = offset; if (!huge) { - error = mem_cgroup_charge(page, NULL, gfp); + error = mem_cgroup_charge(page_folio(page), NULL, gfp); if (error) goto error; charged = true; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index afff3ac87067..ecb1fb1f5f3e 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -603,7 +603,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, 
VM_BUG_ON_PAGE(!PageCompound(page), page); - if (mem_cgroup_charge(page, vma->vm_mm, gfp)) { + if (mem_cgroup_charge(page_folio(page), vma->vm_mm, gfp)) { put_page(page); count_vm_event(THP_FAULT_FALLBACK); count_vm_event(THP_FAULT_FALLBACK_CHARGE); diff --git a/mm/khugepaged.c b/mm/khugepaged.c index b0412be08fa2..8f6d7fdea9f4 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1087,7 +1087,7 @@ static void collapse_huge_page(struct mm_struct *mm, goto out_nolock; } - if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) { + if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out_nolock; } @@ -1658,7 +1658,7 @@ static void collapse_file(struct mm_struct *mm, goto out; } - if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) { + if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out; } diff --git a/mm/ksm.c b/mm/ksm.c index 3fa9bc8a67cf..23d36b59f997 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -2580,7 +2580,8 @@ struct page *ksm_might_need_to_copy(struct page *page, return page; /* let do_swap_page report the error */ new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address); - if (new_page && mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL)) { + if (new_page && + mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) { put_page(new_page); new_page = NULL; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index c2ffad021e09..03283d97b62a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -6681,10 +6681,9 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root, atomic_long_read(&parent->memory.children_low_usage))); } -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, +static int __mem_cgroup_charge(struct folio *folio, struct mem_cgroup *memcg, gfp_t gfp) { - struct folio *folio = page_folio(page); unsigned int nr_pages = folio_nr_pages(folio); int ret; @@ -6697,27 +6696,27 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg, local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); - memcg_check_events(memcg, page_to_nid(page)); + memcg_check_events(memcg, folio_nid(folio)); local_irq_enable(); out: return ret; } /** - * mem_cgroup_charge - charge a newly allocated page to a cgroup - * @page: page to charge - * @mm: mm context of the victim - * @gfp_mask: reclaim mode + * mem_cgroup_charge - Charge a newly allocated folio to a cgroup. + * @folio: Folio to charge. + * @mm: mm context of the allocating task. + * @gfp: reclaim mode * - * Try to charge @page to the memcg that @mm belongs to, reclaiming - * pages according to @gfp_mask if necessary. if @mm is NULL, try to + * Try to charge @folio to the memcg that @mm belongs to, reclaiming + * pages according to @gfp if necessary. If @mm is NULL, try to * charge to the active memcg. * - * Do not use this for pages allocated for swapin. + * Do not use this for folios allocated for swapin. * * Returns 0 on success. Otherwise, an error code is returned. 
*/ -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask) +int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm, gfp_t gfp) { struct mem_cgroup *memcg; int ret; @@ -6726,7 +6725,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask) return 0; memcg = get_mem_cgroup_from_mm(mm); - ret = __mem_cgroup_charge(page, memcg, gfp_mask); + ret = __mem_cgroup_charge(folio, memcg, gfp); css_put(&memcg->css); return ret; @@ -6747,6 +6746,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask) int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm, gfp_t gfp, swp_entry_t entry) { + struct folio *folio = page_folio(page); struct mem_cgroup *memcg; unsigned short id; int ret; @@ -6761,7 +6761,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm, memcg = get_mem_cgroup_from_mm(mm); rcu_read_unlock(); - ret = __mem_cgroup_charge(page, memcg, gfp); + ret = __mem_cgroup_charge(folio, memcg, gfp); css_put(&memcg->css); return ret; diff --git a/mm/memory.c b/mm/memory.c index 2f111f9b3dbc..614418e26e2c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -990,7 +990,7 @@ page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma, if (!new_page) return NULL; - if (mem_cgroup_charge(new_page, src_mm, GFP_KERNEL)) { + if (mem_cgroup_charge(page_folio(new_page), src_mm, GFP_KERNEL)) { put_page(new_page); return NULL; } @@ -3019,7 +3019,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) } } - if (mem_cgroup_charge(new_page, mm, GFP_KERNEL)) + if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL)) goto oom_free_new; cgroup_throttle_swaprate(new_page, GFP_KERNEL); @@ -3768,7 +3768,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) if (!page) goto oom; - if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL)) + if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL)) goto oom_free_page; cgroup_throttle_swaprate(page, GFP_KERNEL); @@ -4183,7 +4183,8 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf) if (!vmf->cow_page) return VM_FAULT_OOM; - if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) { + if (mem_cgroup_charge(page_folio(vmf->cow_page), vma->vm_mm, + GFP_KERNEL)) { put_page(vmf->cow_page); return VM_FAULT_OOM; } diff --git a/mm/migrate.c b/mm/migrate.c index 34a9ad3e0a4f..b5bdae748f82 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -2763,7 +2763,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, if (unlikely(anon_vma_prepare(vma))) goto abort; - if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL)) + if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL)) goto abort; /* diff --git a/mm/shmem.c b/mm/shmem.c index 70d9ce294bb4..3931fed5c8d8 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -685,7 +685,7 @@ static int shmem_add_to_page_cache(struct page *page, page->index = index; if (!PageSwapCache(page)) { - error = mem_cgroup_charge(page, charge_mm, gfp); + error = mem_cgroup_charge(page_folio(page), charge_mm, gfp); if (error) { if (PageTransHuge(page)) { count_vm_event(THP_FILE_FALLBACK); diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 0e2132834bc7..5d0f55f3c0ed 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -164,7 +164,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm, __SetPageUptodate(page); ret = -ENOMEM; - if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL)) + if (mem_cgroup_charge(page_folio(page), dst_mm, GFP_KERNEL)) goto out_release; ret = mfill_atomic_install_pte(dst_mm, 
dst_pmd, dst_vma, dst_addr, From patchwork Thu Jul 15 03:35:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 293BBC07E96 for ; Thu, 15 Jul 2021 04:11:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C608861279 for ; Thu, 15 Jul 2021 04:10:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C608861279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2528F6B010D; Thu, 15 Jul 2021 00:11:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2030E6B010E; Thu, 15 Jul 2021 00:11:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0CC178D002C; Thu, 15 Jul 2021 00:11:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0088.hostedemail.com [216.40.44.88]) by kanga.kvack.org (Postfix) with ESMTP id DEA9A6B010D for ; Thu, 15 Jul 2021 00:10:59 -0400 (EDT) Received: from smtpin38.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id CFF628248047 for ; Thu, 15 Jul 2021 04:10:58 +0000 (UTC) X-FDA: 78363496596.38.E3DACB5 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf23.hostedemail.com (Postfix) with ESMTP id 957579000704 for ; Thu, 15 Jul 2021 04:10:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=LhLgT/A81qTgkpFgiK6rglyDaGYwyRzTGflj66w3+ZY=; b=fJrDNGD5uJImOq3WZXtvSxcsiw rg415nhpmS9tMRq8sHBD39yJqJH77myZ7VyLGp4w2Pr8EOGdHv6nIsGh/RgpCSos3Iu8swMnY7M08 X+NaQxvSIVQqxG2hn0+FLlqsqQukn0b+45PNxux9GeP8Oqjwfk8dKKuvzlqhmMj7FZ3xjKRANccnW 29xVRSMZwUIB4bf+UvbtPqr4+zlaW8wYU74D1mQrWPinXXfGT+KcYL38j9IUHoZWacLuTDFbX4j6B FcKArWO0RvQASrPBOac/suYF86ao8h+zhsMkuBaXwCl7c2DzC2TUKnHvLbbuRcT/H3wzeZiWtUCw3 nut/wVHQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sgu-002wGw-VB; Thu, 15 Jul 2021 04:09:57 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 041/138] mm/memcg: Convert uncharge_page() to uncharge_folio() Date: Thu, 15 Jul 2021 04:35:27 +0100 Message-Id: <20210715033704.692967-42-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf23.hostedemail.com; dkim=pass 
header.d=infradead.org header.s=casper.20170209 header.b=fJrDNGD5; spf=none (imf23.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: uiqb5btfanuyawuxucxg9ddmpg434k87 X-Rspamd-Queue-Id: 957579000704 X-HE-Tag: 1626322258-204686 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use a folio rather than a page to ensure that we're only operating on base or head pages, and not tail pages. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/memcontrol.c | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 03283d97b62a..c257cb71a3b0 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -6832,24 +6832,23 @@ static void uncharge_batch(const struct uncharge_gather *ug) memcg_check_events(ug->memcg, ug->nid); local_irq_restore(flags); - /* drop reference from uncharge_page */ + /* drop reference from uncharge_folio */ css_put(&ug->memcg->css); } -static void uncharge_page(struct page *page, struct uncharge_gather *ug) +static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) { - struct folio *folio = page_folio(page); unsigned long nr_pages; struct mem_cgroup *memcg; struct obj_cgroup *objcg; - bool use_objcg = PageMemcgKmem(page); + bool use_objcg = folio_memcg_kmem(folio); - VM_BUG_ON_PAGE(PageLRU(page), page); + VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); /* * Nobody should be changing or seriously looking at - * page memcg or objcg at this point, we have fully - * exclusive access to the page. + * folio memcg or objcg at this point, we have fully + * exclusive access to the folio. 
*/ if (use_objcg) { objcg = __folio_objcg(folio); @@ -6871,19 +6870,19 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug) uncharge_gather_clear(ug); } ug->memcg = memcg; - ug->nid = page_to_nid(page); + ug->nid = folio_nid(folio); /* pairs with css_put in uncharge_batch */ css_get(&memcg->css); } - nr_pages = compound_nr(page); + nr_pages = folio_nr_pages(folio); if (use_objcg) { ug->nr_memory += nr_pages; ug->nr_kmem += nr_pages; - page->memcg_data = 0; + folio->memcg_data = 0; obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ @@ -6891,7 +6890,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug) ug->nr_memory += nr_pages; ug->pgpgout++; - page->memcg_data = 0; + folio->memcg_data = 0; } css_put(&memcg->css); @@ -6915,7 +6914,7 @@ void mem_cgroup_uncharge(struct page *page) return; uncharge_gather_clear(&ug); - uncharge_page(page, &ug); + uncharge_folio(page_folio(page), &ug); uncharge_batch(&ug); } @@ -6929,14 +6928,14 @@ void mem_cgroup_uncharge(struct page *page) void mem_cgroup_uncharge_list(struct list_head *page_list) { struct uncharge_gather ug; - struct page *page; + struct folio *folio; if (mem_cgroup_disabled()) return; uncharge_gather_clear(&ug); - list_for_each_entry(page, page_list, lru) - uncharge_page(page, &ug); + list_for_each_entry(folio, page_list, lru) + uncharge_folio(folio, &ug); if (ug.memcg) uncharge_batch(&ug); } From patchwork Thu Jul 15 03:35:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378559 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E839CC07E96 for ; Thu, 15 Jul 2021 04:11:47 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 91D3A61279 for ; Thu, 15 Jul 2021 04:11:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 91D3A61279 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E86A36B010F; Thu, 15 Jul 2021 00:11:47 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E373F6B0110; Thu, 15 Jul 2021 00:11:47 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CD8698D002C; Thu, 15 Jul 2021 00:11:47 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0186.hostedemail.com [216.40.44.186]) by kanga.kvack.org (Postfix) with ESMTP id AAEAD6B010F for ; Thu, 15 Jul 2021 00:11:47 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 9891A20F16 for ; Thu, 15 Jul 2021 04:11:46 +0000 (UTC) X-FDA: 78363498612.25.BFD6842 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf08.hostedemail.com (Postfix) with ESMTP id 55B5730000AC for ; Thu, 15 
Jul 2021 04:11:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=FsFx6Bi/ua4uermKj44n4uT/d8N1GdioFQ96wCTnh5U=; b=GvhVNnDG5MJN44qdOjJwr/zP+F 8VTJiOvSTxBuMsZCtH8Pp8jvFck7bakefHwTdP1T3ag0a7QUHS0lM+7rxxCxBAiU+gNT6TxVeANTI 0d8zkLwcxAwqdZK1COSc3+Jlik9SVbhqEVBWYwo98qfuF+RaA67EPx+zfEnWC6SGnpo+NrQxow7h5 XfQ9IrEPbc8baoAWywAoirh+eApbdgwo8AAqYa9dodvrGpeUzMlj1O2UXhEmKWIgpm0dnlTQ8rokA LmlZLAh2f1Lkuo1T/dxjc/IRcvNBdg0aR/gK8EgbEyo1DK8HoyOAITBRkYQXC++sIF7tHE2qgziiI SFlhCk4w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3shh-002wIG-FC; Thu, 15 Jul 2021 04:10:41 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 042/138] mm/memcg: Convert mem_cgroup_uncharge() to take a folio Date: Thu, 15 Jul 2021 04:35:28 +0100 Message-Id: <20210715033704.692967-43-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=GvhVNnDG; spf=none (imf08.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: 1ntg5fmn583bfyfj76xb4qxa7mu3x1ms X-Rspamd-Queue-Id: 55B5730000AC X-HE-Tag: 1626322306-565745 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert all the callers to call page_folio(). Most of them were already using a head page, but a few of them I can't prove were, so this may actually fix a bug. 
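As an illustrative sketch of the resulting calling convention (not part of the diff; example_uncharge() is a made-up name, while mem_cgroup_uncharge() and page_folio() are the interfaces this series provides):

	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* A caller that still holds a struct page derives the folio first. */
	static void example_uncharge(struct page *page)
	{
		/* page_folio() resolves a tail page to its head folio */
		mem_cgroup_uncharge(page_folio(page));
	}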
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Mike Rapoport Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 4 ++-- mm/filemap.c | 2 +- mm/khugepaged.c | 4 ++-- mm/memcontrol.c | 14 +++++++------- mm/memory-failure.c | 2 +- mm/memremap.c | 2 +- mm/page_alloc.c | 2 +- mm/swap.c | 2 +- 8 files changed, 16 insertions(+), 16 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index fb7b87d8e794..941a1a7131c9 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -709,7 +709,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm, gfp_t gfp, swp_entry_t entry); void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry); -void mem_cgroup_uncharge(struct page *page); +void mem_cgroup_uncharge(struct folio *folio); void mem_cgroup_uncharge_list(struct list_head *page_list); void mem_cgroup_migrate(struct page *oldpage, struct page *newpage); @@ -1206,7 +1206,7 @@ static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry) { } -static inline void mem_cgroup_uncharge(struct page *page) +static inline void mem_cgroup_uncharge(struct folio *folio) { } diff --git a/mm/filemap.c b/mm/filemap.c index 525f69316522..31d4ecd4268e 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -923,7 +923,7 @@ noinline int __add_to_page_cache_locked(struct page *page, if (xas_error(&xas)) { error = xas_error(&xas); if (charged) - mem_cgroup_uncharge(page); + mem_cgroup_uncharge(page_folio(page)); goto error; } diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 8f6d7fdea9f4..6b9c98ddcd09 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1211,7 +1211,7 @@ static void collapse_huge_page(struct mm_struct *mm, mmap_write_unlock(mm); out_nolock: if (!IS_ERR_OR_NULL(*hpage)) - mem_cgroup_uncharge(*hpage); + mem_cgroup_uncharge(page_folio(*hpage)); trace_mm_collapse_huge_page(mm, isolated, result); return; } @@ -1975,7 +1975,7 @@ static void collapse_file(struct mm_struct *mm, out: VM_BUG_ON(!list_empty(&pagelist)); if (!IS_ERR_OR_NULL(*hpage)) - mem_cgroup_uncharge(*hpage); + mem_cgroup_uncharge(page_folio(*hpage)); /* TODO: tracepoints */ } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index c257cb71a3b0..fc94048e6451 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -6897,24 +6897,24 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) } /** - * mem_cgroup_uncharge - uncharge a page - * @page: page to uncharge + * mem_cgroup_uncharge - Uncharge a folio. + * @folio: Folio to uncharge. * - * Uncharge a page previously charged with mem_cgroup_charge(). + * Uncharge a folio previously charged with mem_cgroup_charge(). */ -void mem_cgroup_uncharge(struct page *page) +void mem_cgroup_uncharge(struct folio *folio) { struct uncharge_gather ug; if (mem_cgroup_disabled()) return; - /* Don't touch page->lru of any random page, pre-check: */ - if (!page_memcg(page)) + /* Don't touch folio->lru of any random folio, pre-check: */ + if (!folio_memcg(folio)) return; uncharge_gather_clear(&ug); - uncharge_folio(folio, &ug); + uncharge_folio(folio, &ug); uncharge_batch(&ug); } diff --git a/mm/memory-failure.c b/mm/memory-failure.c index eefd823deb67..9ae7a57a4cc0 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -763,7 +763,7 @@ static int delete_from_lru_cache(struct page *p) * Poisoned page might never drop its ref count to 0 so we have * to uncharge it manually from its memcg.
*/ - mem_cgroup_uncharge(p); + mem_cgroup_uncharge(page_folio(p)); /* * drop the page count elevated by isolate_lru_page() diff --git a/mm/memremap.c b/mm/memremap.c index 15a074ffb8d7..6eac40f9f62a 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -508,7 +508,7 @@ void free_devmap_managed_page(struct page *page) __ClearPageWaiters(page); - mem_cgroup_uncharge(page); + mem_cgroup_uncharge(page_folio(page)); /* * When a device_private page is freed, the page->mapping field diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 3b97e17806be..d72a0d9d4184 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -726,7 +726,7 @@ static inline void free_the_page(struct page *page, unsigned int order) void free_compound_page(struct page *page) { - mem_cgroup_uncharge(page); + mem_cgroup_uncharge(page_folio(page)); free_the_page(page, compound_order(page)); } diff --git a/mm/swap.c b/mm/swap.c index 095a5ec6f986..11ff40104a2c 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -94,7 +94,7 @@ static void __page_cache_release(struct page *page) static void __put_single_page(struct page *page) { __page_cache_release(page); - mem_cgroup_uncharge(page); + mem_cgroup_uncharge(page_folio(page)); free_unref_page(page, 0); } From patchwork Thu Jul 15 03:35:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D997C07E96 for ; Thu, 15 Jul 2021 04:12:19 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 36E63610C7 for ; Thu, 15 Jul 2021 04:12:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 36E63610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 892966B0111; Thu, 15 Jul 2021 00:12:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 842C06B0112; Thu, 15 Jul 2021 00:12:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6E47F8D002C; Thu, 15 Jul 2021 00:12:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0152.hostedemail.com [216.40.44.152]) by kanga.kvack.org (Postfix) with ESMTP id 4D8326B0111 for ; Thu, 15 Jul 2021 00:12:19 -0400 (EDT) Received: from smtpin40.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 4042C8248047 for ; Thu, 15 Jul 2021 04:12:18 +0000 (UTC) X-FDA: 78363499956.40.B7A21F6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf08.hostedemail.com (Postfix) with ESMTP id EFF0330000A5 for ; Thu, 15 Jul 2021 04:12:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=QBV0Ef3qEWScyQdT9HDYPzKB2faCtvFJrd0hqsE53I8=; b=sD3+Xt4p+vX7sT7oyetDzpc5E+ sE2/+j+uJG1r4xJOdOA1VHkzuHhcMRdJs7eA6e6Rsu6fLN0KxJ1BKUo04HV6uPpMIs7PuOIYMdN0F ibJudhKAMptxdIc1HcomDzD7404E+feRP6ncIMEJb9B5V+8NJyG/ibkZIpmRWTVHSMbl2UEdTYhc6 W25jbPV8kzsyZQZwh9gPopYgQZ3iuevf0xsntF7vvA97nFRSgMY5m0joGwksayH9yHxq7BiJMHVze iS1XUe4opcVT5qkKDalWTbnbD8ydKv1Khdtve25AhrgQC1K4cfRQDtMkZDVxsju//agcjm4tfJn2I /+ZFetOw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3siQ-002wIz-Sy; Thu, 15 Jul 2021 04:11:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 043/138] mm/memcg: Convert mem_cgroup_migrate() to take folios Date: Thu, 15 Jul 2021 04:35:29 +0100 Message-Id: <20210715033704.692967-44-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=sD3+Xt4p; spf=none (imf08.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: EFF0330000A5 X-Stat-Signature: souokdn84x48g38er67zfmdnimkkweie X-HE-Tag: 1626322337-693219 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert all callers of mem_cgroup_migrate() to call page_folio() first. They all look like they're using head pages already, but this proves it. 
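For illustration only, a converted call site has this shape (example_replace() is a hypothetical name; mem_cgroup_migrate() and page_folio() are the real interfaces touched here):

	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* Charge the replacement; the old folio is uncharged when freed. */
	static void example_replace(struct page *oldpage, struct page *newpage)
	{
		/* Both conversions resolve to head (or base) pages. */
		mem_cgroup_migrate(page_folio(oldpage), page_folio(newpage));
	}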
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Mike Rapoport Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 4 ++-- mm/filemap.c | 4 +++- mm/memcontrol.c | 35 +++++++++++++++++------------------ mm/migrate.c | 4 +++- mm/shmem.c | 5 ++++- 5 files changed, 29 insertions(+), 23 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 941a1a7131c9..d75a708eac13 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -712,7 +712,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry); void mem_cgroup_uncharge(struct folio *folio); void mem_cgroup_uncharge_list(struct list_head *page_list); -void mem_cgroup_migrate(struct page *oldpage, struct page *newpage); +void mem_cgroup_migrate(struct folio *old, struct folio *new); /** * mem_cgroup_lruvec - get the lru list vector for a memcg & node @@ -1214,7 +1214,7 @@ static inline void mem_cgroup_uncharge_list(struct list_head *page_list) { } -static inline void mem_cgroup_migrate(struct page *old, struct page *new) +static inline void mem_cgroup_migrate(struct folio *old, struct folio *new) { } diff --git a/mm/filemap.c b/mm/filemap.c index 31d4ecd4268e..5c4e3185ecb3 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -817,6 +817,8 @@ EXPORT_SYMBOL(file_write_and_wait_range); */ void replace_page_cache_page(struct page *old, struct page *new) { + struct folio *fold = page_folio(old); + struct folio *fnew = page_folio(new); struct address_space *mapping = old->mapping; void (*freepage)(struct page *) = mapping->a_ops->freepage; pgoff_t offset = old->index; @@ -831,7 +833,7 @@ void replace_page_cache_page(struct page *old, struct page *new) new->mapping = mapping; new->index = offset; - mem_cgroup_migrate(old, new); + mem_cgroup_migrate(fold, fnew); xas_lock_irqsave(&xas, flags); xas_store(&xas, new); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index fc94048e6451..92bbced86bdb 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -6941,36 +6941,35 @@ void mem_cgroup_uncharge_list(struct list_head *page_list) } /** - * mem_cgroup_migrate - charge a page's replacement - * @oldpage: currently circulating page - * @newpage: replacement page + * mem_cgroup_migrate - Charge a folio's replacement. + * @old: Currently circulating folio. + * @new: Replacement folio. * - * Charge @newpage as a replacement page for @oldpage. @oldpage will + * Charge @new as a replacement folio for @old. @old will * be uncharged upon free. * - * Both pages must be locked, @newpage->mapping must be set up. + * Both folios must be locked, @new->mapping must be set up. */ -void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) +void mem_cgroup_migrate(struct folio *old, struct folio *new) { - struct folio *newfolio = page_folio(newpage); struct mem_cgroup *memcg; - unsigned int nr_pages = folio_nr_pages(newfolio); + unsigned int nr_pages = folio_nr_pages(new); unsigned long flags; - VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage); - VM_BUG_ON_FOLIO(!folio_test_locked(newfolio), newfolio); - VM_BUG_ON_FOLIO(PageAnon(oldpage) != folio_test_anon(newfolio), newfolio); - VM_BUG_ON_FOLIO(compound_nr(oldpage) != nr_pages, newfolio); + VM_BUG_ON_FOLIO(!folio_test_locked(old), old); + VM_BUG_ON_FOLIO(!folio_test_locked(new), new); + VM_BUG_ON_FOLIO(folio_test_anon(old) != folio_test_anon(new), new); + VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages, new); if (mem_cgroup_disabled()) return; - /* Page cache replacement: new page already charged? 
*/ - if (folio_memcg(newfolio)) + /* Page cache replacement: new folio already charged? */ + if (folio_memcg(new)) return; - memcg = page_memcg(oldpage); - VM_WARN_ON_ONCE_PAGE(!memcg, oldpage); + memcg = folio_memcg(old); + VM_WARN_ON_ONCE_FOLIO(!memcg, old); if (!memcg) return; @@ -6982,11 +6981,11 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) } css_get(&memcg->css); - commit_charge(newfolio, memcg); + commit_charge(new, memcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, nr_pages); - memcg_check_events(memcg, page_to_nid(newpage)); + memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); } diff --git a/mm/migrate.c b/mm/migrate.c index b5bdae748f82..910552318df3 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -541,6 +541,8 @@ int migrate_huge_page_move_mapping(struct address_space *mapping, */ void migrate_page_states(struct page *newpage, struct page *page) { + struct folio *folio = page_folio(page); + struct folio *newfolio = page_folio(newpage); int cpupid; if (PageError(page)) @@ -608,7 +610,7 @@ void migrate_page_states(struct page *newpage, struct page *page) copy_page_owner(page, newpage); if (!PageHuge(page)) - mem_cgroup_migrate(page, newpage); + mem_cgroup_migrate(folio, newfolio); } EXPORT_SYMBOL(migrate_page_states); diff --git a/mm/shmem.c b/mm/shmem.c index 3931fed5c8d8..2fd75b4d4974 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1619,6 +1619,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp, struct shmem_inode_info *info, pgoff_t index) { struct page *oldpage, *newpage; + struct folio *old, *new; struct address_space *swap_mapping; swp_entry_t entry; pgoff_t swap_index; @@ -1655,7 +1656,9 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp, xa_lock_irq(&swap_mapping->i_pages); error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage); if (!error) { - mem_cgroup_migrate(oldpage, newpage); + old = page_folio(oldpage); + new = page_folio(newpage); + mem_cgroup_migrate(old, new); __inc_lruvec_page_state(newpage, NR_FILE_PAGES); __dec_lruvec_page_state(oldpage, NR_FILE_PAGES); } From patchwork Thu Jul 15 03:35:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378575 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26041C07E96 for ; Thu, 15 Jul 2021 04:13:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BC72A610C7 for ; Thu, 15 Jul 2021 04:13:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BC72A610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 18BE46B0113; Thu, 15 Jul 2021 00:13:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1159F6B0114; Thu, 15 Jul 2021 00:13:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by 
kanga.kvack.org (Postfix, from userid 63042) id EF7EE8D002C; Thu, 15 Jul 2021 00:13:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0139.hostedemail.com [216.40.44.139]) by kanga.kvack.org (Postfix) with ESMTP id C2E5E6B0113 for ; Thu, 15 Jul 2021 00:13:16 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id B7EFF23107 for ; Thu, 15 Jul 2021 04:13:15 +0000 (UTC) X-FDA: 78363502350.17.B5F017A Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf20.hostedemail.com (Postfix) with ESMTP id 6D6DFD0000B0 for ; Thu, 15 Jul 2021 04:13:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=BdoAHA2ru4kTgi8z9gwqyWzIyxADlec1I5LhK9PXph4=; b=I2QG9LRtXEdKN6mvYhbJAUscJd yxeDXtrVw8o9C8PSRv4TPSp0RRmDQBXMtAIVXvvWiW4CdqUIWLVJr/vhsUlFpkCcWltED57tIwpj6 rXRbQOj9KILnUSq6z9NQq92AHyjKhE8II6SpuRXBvDK0Zulox2y/6LsGOxMJCyB/ck/eect6sNoDT Pbn/bDeSKaR/LxH4/2mFrs5hey5YYDvAkknehA0ikq83xTbj6ySazxUhhGNFaJ65brmz8gIxK7dIm LdUUt8Txw6h+yRri/4XuEL6CuJg4DlywVv0V+JmGiM2j9BUhkcLmmx4qHgm+W4VD7yz7uuTAthzCH RNHhMUnQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sj6-002wLu-10; Thu, 15 Jul 2021 04:12:02 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 044/138] mm/memcg: Convert mem_cgroup_track_foreign_dirty_slowpath() to folio Date: Thu, 15 Jul 2021 04:35:30 +0100 Message-Id: <20210715033704.692967-45-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=I2QG9LRt; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: hund1x7rizswfpgmai5yncgs8hpq5ax3 X-Rspamd-Queue-Id: 6D6DFD0000B0 X-Rspamd-Server: rspam01 X-HE-Tag: 1626322395-681713 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The page was only being used for the memcg and to gather trace information, so this is a simple conversion. The only caller of mem_cgroup_track_foreign_dirty() will be converted to folios in a later patch, so doing this now makes that patch simpler. 
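The compatibility pattern used below, where the page-based entry point survives as a thin inline wrapper around the new folio function, looks like this in miniature (a sketch with made-up names; the diff that follows applies it to mem_cgroup_track_foreign_dirty()):

	/* New interface: the folio function does the actual work. */
	void folio_do_thing(struct folio *folio);

	/* Legacy interface: reduced to a one-line conversion wrapper. */
	static inline void page_do_thing(struct page *page)
	{
		folio_do_thing(page_folio(page));
	}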
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 7 ++++--- include/trace/events/writeback.h | 8 ++++---- mm/memcontrol.c | 6 +++--- 3 files changed, 11 insertions(+), 10 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d75a708eac13..20084c47d2ca 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1560,17 +1560,18 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, unsigned long *pheadroom, unsigned long *pdirty, unsigned long *pwriteback); -void mem_cgroup_track_foreign_dirty_slowpath(struct page *page, +void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb); static inline void mem_cgroup_track_foreign_dirty(struct page *page, struct bdi_writeback *wb) { + struct folio *folio = page_folio(page); if (mem_cgroup_disabled()) return; - if (unlikely(&page_memcg(page)->css != wb->memcg_css)) - mem_cgroup_track_foreign_dirty_slowpath(page, wb); + if (unlikely(&folio_memcg(folio)->css != wb->memcg_css)) + mem_cgroup_track_foreign_dirty_slowpath(folio, wb); } void mem_cgroup_flush_foreign(struct bdi_writeback *wb); diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h index 840d1ba84cf5..297871ca0004 100644 --- a/include/trace/events/writeback.h +++ b/include/trace/events/writeback.h @@ -236,9 +236,9 @@ TRACE_EVENT(inode_switch_wbs, TRACE_EVENT(track_foreign_dirty, - TP_PROTO(struct page *page, struct bdi_writeback *wb), + TP_PROTO(struct folio *folio, struct bdi_writeback *wb), - TP_ARGS(page, wb), + TP_ARGS(folio, wb), TP_STRUCT__entry( __array(char, name, 32) @@ -250,7 +250,7 @@ TRACE_EVENT(track_foreign_dirty, ), TP_fast_assign( - struct address_space *mapping = page_mapping(page); + struct address_space *mapping = folio_mapping(folio); struct inode *inode = mapping ? mapping->host : NULL; strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32); @@ -258,7 +258,7 @@ TRACE_EVENT(track_foreign_dirty, __entry->ino = inode ? inode->i_ino : 0; __entry->memcg_id = wb->memcg_css->id; __entry->cgroup_ino = __trace_wb_assign_cgroup(wb); - __entry->page_cgroup_ino = cgroup_ino(page_memcg(page)->css.cgroup); + __entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup); ), TP_printk("bdi %s[%llu]: ino=%lu memcg_id=%u cgroup_ino=%lu page_cgroup_ino=%lu", diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 92bbced86bdb..96c34357fbca 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -4571,17 +4571,17 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, * As being wrong occasionally doesn't matter, updates and accesses to the * records are lockless and racy. */ -void mem_cgroup_track_foreign_dirty_slowpath(struct page *page, +void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb) { - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg = folio_memcg(folio); struct memcg_cgwb_frn *frn; u64 now = get_jiffies_64(); u64 oldest_at = now; int oldest = -1; int i; - trace_track_foreign_dirty(page, wb); + trace_track_foreign_dirty(folio, wb); /* * Pick the slot to use. 
If there is already a slot for @wb, keep From patchwork Thu Jul 15 03:35:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378577 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1242FC1B08C for ; Thu, 15 Jul 2021 04:14:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 984E7610C7 for ; Thu, 15 Jul 2021 04:13:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 984E7610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E84546B0115; Thu, 15 Jul 2021 00:13:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E32EA8D002C; Thu, 15 Jul 2021 00:13:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CAD516B0117; Thu, 15 Jul 2021 00:13:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0144.hostedemail.com [216.40.44.144]) by kanga.kvack.org (Postfix) with ESMTP id A0CA66B0115 for ; Thu, 15 Jul 2021 00:13:59 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 7F4D518036D58 for ; Thu, 15 Jul 2021 04:13:58 +0000 (UTC) X-FDA: 78363504156.26.664CB42 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id 21FB260019B6 for ; Thu, 15 Jul 2021 04:13:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=7SMIlDbXKPWxLzuFnfZFts4v50wHRfeaO4Y4aZlCsWU=; b=MS1ehyKqO/+QGbr7/SKPVCvJv2 J+7FyHQyPVd6mECMoRQjvoacC5SIbNpviHhN2fh5vbzPDmwYoQWcp+a+4993J290fGZ+89kOQqo22 Xt7xBgD7nJHfpdCOgHnTW9o8fnqVh4k5hhzRELA9JqzFsYlrgwd1h8T10nF89w4U2xfce2Llf3JC5 Zxvm0cANkk+ETbGQoA+lq8M9Irgz8NwFIf2i7gA1G9srXvRF7PpjLf8OKmIYV8KXEWZLqlJSIZqhO unNQ1VYUQ52jij5oARMzBqxXXL1hrPv48ODgo/dPY94qbF2wmEve0xRfeWcREdxJS0G/KrSkNgMF5 BVx7qebA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3sjm-002wO1-3X; Thu, 15 Jul 2021 04:12:48 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 045/138] mm/memcg: Add folio_memcg_lock() and folio_memcg_unlock() Date: Thu, 15 Jul 2021 04:35:31 +0100 Message-Id: <20210715033704.692967-46-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 
21FB260019B6 X-Stat-Signature: h37b34dfjdjf4rhe3pqy6pa7xt845f9u Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=MS1ehyKq; dmarc=none; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626322438-602136 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These are the folio equivalents of lock_page_memcg() and unlock_page_memcg(). lock_page_memcg() and unlock_page_memcg() have too many callers to be easily replaced in a single patch, so reimplement them as wrappers for now to be cleaned up later when enough callers have been converted to use folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Mike Rapoport Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 10 +++++++++ mm/memcontrol.c | 45 ++++++++++++++++++++++++-------------- 2 files changed, 39 insertions(+), 16 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 20084c47d2ca..4b79dd6b3a9c 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -950,6 +950,8 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg); extern bool cgroup_memory_noswap; #endif +void folio_memcg_lock(struct folio *folio); +void folio_memcg_unlock(struct folio *folio); void lock_page_memcg(struct page *page); void unlock_page_memcg(struct page *page); @@ -1367,6 +1369,14 @@ static inline void unlock_page_memcg(struct page *page) { } +static inline void folio_memcg_lock(struct folio *folio) +{ +} + +static inline void folio_memcg_unlock(struct folio *folio) +{ +} + static inline void mem_cgroup_handle_over_high(void) { } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 96c34357fbca..0dd40ea67a90 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1965,18 +1965,17 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg) } /** - * lock_page_memcg - lock a page and memcg binding - * @page: the page + * folio_memcg_lock - Bind a folio to its memcg. + * @folio: The folio. * - * This function protects unlocked LRU pages from being moved to + * This function prevents unlocked LRU folios from being moved to * another cgroup. * - * It ensures lifetime of the locked memcg. Caller is responsible - * for the lifetime of the page. + * It ensures lifetime of the bound memcg. The caller is responsible + * for the lifetime of the folio. 
*/ -void lock_page_memcg(struct page *page) +void folio_memcg_lock(struct folio *folio) { - struct page *head = compound_head(page); /* rmap on tail pages */ struct mem_cgroup *memcg; unsigned long flags; @@ -1990,7 +1989,7 @@ void lock_page_memcg(struct page *page) if (mem_cgroup_disabled()) return; again: - memcg = page_memcg(head); + memcg = folio_memcg(folio); if (unlikely(!memcg)) return; @@ -2004,7 +2003,7 @@ void lock_page_memcg(struct page *page) return; spin_lock_irqsave(&memcg->move_lock, flags); - if (memcg != page_memcg(head)) { + if (memcg != folio_memcg(folio)) { spin_unlock_irqrestore(&memcg->move_lock, flags); goto again; } @@ -2018,9 +2017,15 @@ void lock_page_memcg(struct page *page) memcg->move_lock_task = current; memcg->move_lock_flags = flags; } +EXPORT_SYMBOL(folio_memcg_lock); + +void lock_page_memcg(struct page *page) +{ + folio_memcg_lock(page_folio(page)); +} EXPORT_SYMBOL(lock_page_memcg); -static void __unlock_page_memcg(struct mem_cgroup *memcg) +static void __folio_memcg_unlock(struct mem_cgroup *memcg) { if (memcg && memcg->move_lock_task == current) { unsigned long flags = memcg->move_lock_flags; @@ -2035,14 +2040,22 @@ static void __unlock_page_memcg(struct mem_cgroup *memcg) } /** - * unlock_page_memcg - unlock a page and memcg binding - * @page: the page + * folio_memcg_unlock - Release the binding between a folio and its memcg. + * @folio: The folio. + * + * This releases the binding created by folio_memcg_lock(). This does + * not change the accounting of this folio to its memcg, but it does + * permit others to change it. */ -void unlock_page_memcg(struct page *page) +void folio_memcg_unlock(struct folio *folio) { - struct page *head = compound_head(page); + __folio_memcg_unlock(folio_memcg(folio)); +} +EXPORT_SYMBOL(folio_memcg_unlock); - __unlock_page_memcg(page_memcg(head)); +void unlock_page_memcg(struct page *page) +{ + folio_memcg_unlock(page_folio(page)); } EXPORT_SYMBOL(unlock_page_memcg); @@ -5666,7 +5679,7 @@ static int mem_cgroup_move_account(struct page *page, page->memcg_data = (unsigned long)to; - __unlock_page_memcg(from); + __folio_memcg_unlock(from); ret = 0; nid = page_to_nid(page); From patchwork Thu Jul 15 03:35:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378579 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFC76C1B08C for ; Thu, 15 Jul 2021 04:14:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6373F6109E for ; Thu, 15 Jul 2021 04:14:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6373F6109E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B754A6B0117; Thu, 15 Jul 2021 00:14:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B4C196B0118; Thu, 15 Jul 2021 00:14:52 -0400 (EDT) 
X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9ED4E6B0119; Thu, 15 Jul 2021 00:14:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0207.hostedemail.com [216.40.44.207]) by kanga.kvack.org (Postfix) with ESMTP id 7C6EC6B0117 for ; Thu, 15 Jul 2021 00:14:52 -0400 (EDT) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 6C5118248047 for ; Thu, 15 Jul 2021 04:14:51 +0000 (UTC) X-FDA: 78363506382.28.CB817DE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf22.hostedemail.com (Postfix) with ESMTP id 2D3631917 for ; Thu, 15 Jul 2021 04:14:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=5j2X8nBiTF1CrIxwWHedWUAM3oUKCMvVlZFvZear/v0=; b=nQgqCL3M6qTgq6k0o7ut2tLqLF hH+PfA6JrVnEIHHtnbjJr0HBTE1SHmMtQ8TTzgiIKT306JFp9jk0zgdodcDF25u2siajORCpPkFdi PE+lHbB2QMJY0vZlZyUDpIgeNaiBvAU5Ug7v1xOsV0XUzXwos7hLoHSaX+SEOXDjmhm9S8lBJ6cQO GpQxddk1pPGyz5dkdhbqoTyOjunx3tNMFkRiJiTu7MKO6kpCNsqCP5+gMzXQX0wwAqAIq4vSqvlff TSDOvei9UPx9rf9bJBO7bX0no7SclplfF+bMQCN2BPIv63eJ0sY7Kq4Lqusa5MZ+x10ysZT/3iQnR DzWfg68A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3skN-002wQz-S6; Thu, 15 Jul 2021 04:13:23 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 046/138] mm/memcg: Convert mem_cgroup_move_account() to use a folio Date: Thu, 15 Jul 2021 04:35:32 +0100 Message-Id: <20210715033704.692967-47-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=nQgqCL3M; spf=none (imf22.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: hcez7g7hwzb6haa78795zh1ai4g9fbxa X-Rspamd-Queue-Id: 2D3631917 X-HE-Tag: 1626322491-702616 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This saves dozens of bytes of text by eliminating a lot of calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Vlastimil Babka --- mm/memcontrol.c | 37 +++++++++++++++++++------------------ 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 0dd40ea67a90..96d6e6c0a65d 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5590,38 +5590,39 @@ static int mem_cgroup_move_account(struct page *page, struct mem_cgroup *from, struct mem_cgroup *to) { + struct folio *folio = page_folio(page); struct lruvec *from_vec, *to_vec; struct pglist_data *pgdat; - unsigned int nr_pages = compound ? thp_nr_pages(page) : 1; + unsigned int nr_pages = compound ? 
folio_nr_pages(folio) : 1; int nid, ret; VM_BUG_ON(from == to); - VM_BUG_ON_PAGE(PageLRU(page), page); - VM_BUG_ON(compound && !PageTransHuge(page)); + VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); + VM_BUG_ON(compound && !folio_multi(folio)); /* * Prevent mem_cgroup_migrate() from looking at * page's memory cgroup of its source page while we change it. */ ret = -EBUSY; - if (!trylock_page(page)) + if (!folio_trylock(folio)) goto out; ret = -EINVAL; - if (page_memcg(page) != from) + if (folio_memcg(folio) != from) goto out_unlock; - pgdat = page_pgdat(page); + pgdat = folio_pgdat(folio); from_vec = mem_cgroup_lruvec(from, pgdat); to_vec = mem_cgroup_lruvec(to, pgdat); - lock_page_memcg(page); + folio_memcg_lock(folio); - if (PageAnon(page)) { - if (page_mapped(page)) { + if (folio_test_anon(folio)) { + if (folio_mapped(folio)) { __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages); __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages); - if (PageTransHuge(page)) { + if (folio_test_transhuge(folio)) { __mod_lruvec_state(from_vec, NR_ANON_THPS, -nr_pages); __mod_lruvec_state(to_vec, NR_ANON_THPS, @@ -5632,18 +5633,18 @@ static int mem_cgroup_move_account(struct page *page, __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages); __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages); - if (PageSwapBacked(page)) { + if (folio_test_swapbacked(folio)) { __mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages); __mod_lruvec_state(to_vec, NR_SHMEM, nr_pages); } - if (page_mapped(page)) { + if (folio_mapped(folio)) { __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages); __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages); } - if (PageDirty(page)) { - struct address_space *mapping = page_mapping(page); + if (folio_test_dirty(folio)) { + struct address_space *mapping = folio_mapping(folio); if (mapping_can_writeback(mapping)) { __mod_lruvec_state(from_vec, NR_FILE_DIRTY, @@ -5654,7 +5655,7 @@ static int mem_cgroup_move_account(struct page *page, } } - if (PageWriteback(page)) { + if (folio_test_writeback(folio)) { __mod_lruvec_state(from_vec, NR_WRITEBACK, -nr_pages); __mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages); } @@ -5677,12 +5678,12 @@ static int mem_cgroup_move_account(struct page *page, css_get(&to->css); css_put(&from->css); - page->memcg_data = (unsigned long)to; + folio->memcg_data = (unsigned long)to; __folio_memcg_unlock(from); ret = 0; - nid = page_to_nid(page); + nid = folio_nid(folio); local_irq_disable(); mem_cgroup_charge_statistics(to, nr_pages); @@ -5691,7 +5692,7 @@ static int mem_cgroup_move_account(struct page *page, memcg_check_events(from, nid); local_irq_enable(); out_unlock: - unlock_page(page); + folio_unlock(folio); out: return ret; } From patchwork Thu Jul 15 03:35:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378581 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 555B4C07E96 for ; Thu, 15 Jul 2021 04:16:35 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with 
ESMTP id ED4A561289 for ; Thu, 15 Jul 2021 04:16:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ED4A561289 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4BBF08D002C; Thu, 15 Jul 2021 00:16:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 46C186B0136; Thu, 15 Jul 2021 00:16:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3336E8D002C; Thu, 15 Jul 2021 00:16:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0207.hostedemail.com [216.40.44.207]) by kanga.kvack.org (Postfix) with ESMTP id 0BDBB6B0135 for ; Thu, 15 Jul 2021 00:16:35 -0400 (EDT) Received: from smtpin33.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id EB4FA180A2F70 for ; Thu, 15 Jul 2021 04:16:33 +0000 (UTC) X-FDA: 78363510666.33.7799DE3 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf30.hostedemail.com (Postfix) with ESMTP id AE6F2E00181B for ; Thu, 15 Jul 2021 04:16:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=uUR18RhmJaxye44Jc3cuyF0IusRkdQv8pQzStDPWSSc=; b=SsjzMEhsSUfNPMdPdVr1Rv00WL iblExqXCV2471LMCh3TsJF9Jvb9IyROFwL8b112MdO4Usn2MxF6ank0hRqvbejBBNiU8+U09OXzcV V61YzM7nbqlHIJS3QganRgrwohtxTcYS4PkvbYoyuOay3KW2jYF7dqW5J1eVxKKGYouBcpxQSVGtN p6dqBucHiHHWpHvdYh1JlZv49jDqw6hnzR3KZ+SjFG7esiW1e500Mh42F+hktx15bBxeY1/GlldC7 +kIRb1mhD0tbUE+3FJ7E3LgeDjmAcKlqqQGkZ97IIpn1YoZt/zB6p52fVBheKvAlY6VtlzZEWUSey vzEpOIsQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3slN-002wVE-TY; Thu, 15 Jul 2021 04:14:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 047/138] mm/memcg: Add folio_lruvec() Date: Thu, 15 Jul 2021 04:35:33 +0100 Message-Id: <20210715033704.692967-48-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: AE6F2E00181B X-Stat-Signature: zy3hi5k616yoepipacs1uq7d3q419193 Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=SsjzMEhs; spf=none (imf30.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626322593-231386 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This replaces mem_cgroup_page_lruvec(). All callers converted. 
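A converted caller reads as follows (a sketch; example_lruvec() is hypothetical, while folio_lruvec() and page_folio() come from this series):

	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* Look up the LRU list vector for a page via its containing folio. */
	static struct lruvec *example_lruvec(struct page *page)
	{
		/* Valid only while folio->mem_cgroup is stable, per the kerneldoc. */
		return folio_lruvec(page_folio(page));
	}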
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Mike Rapoport Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 20 +++++++++----------- mm/compaction.c | 2 +- mm/memcontrol.c | 9 ++++++--- mm/swap.c | 3 ++- mm/workingset.c | 3 ++- 5 files changed, 20 insertions(+), 17 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 4b79dd6b3a9c..4eb329b5d183 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -751,18 +751,17 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, } /** - * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page - * @page: the page + * folio_lruvec - return lruvec for isolating/putting an LRU folio + * @folio: Pointer to the folio. * - * This function relies on page->mem_cgroup being stable. + * This function relies on folio->mem_cgroup being stable. */ -static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page) +static inline struct lruvec *folio_lruvec(struct folio *folio) { - pg_data_t *pgdat = page_pgdat(page); - struct mem_cgroup *memcg = page_memcg(page); + struct mem_cgroup *memcg = folio_memcg(folio); - VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page); - return mem_cgroup_lruvec(memcg, pgdat); + VM_WARN_ON_ONCE_FOLIO(!memcg && !mem_cgroup_disabled(), folio); + return mem_cgroup_lruvec(memcg, folio_pgdat(folio)); } struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p); @@ -1226,10 +1225,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, return &pgdat->__lruvec; } -static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page) +static inline struct lruvec *folio_lruvec(struct folio *folio) { - pg_data_t *pgdat = page_pgdat(page); - + struct pglist_data *pgdat = folio_pgdat(folio); return &pgdat->__lruvec; } diff --git a/mm/compaction.c b/mm/compaction.c index 621508e0ecd5..a88f7b893f80 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -1028,7 +1028,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (!TestClearPageLRU(page)) goto isolate_fail_put; - lruvec = mem_cgroup_page_lruvec(page); + lruvec = folio_lruvec(page_folio(page)); /* If we already hold the lock, we can skip some rechecking */ if (lruvec != locked) { diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 96d6e6c0a65d..fd578d70b579 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1186,9 +1186,10 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) */ struct lruvec *lock_page_lruvec(struct page *page) { + struct folio *folio = page_folio(page); struct lruvec *lruvec; - lruvec = mem_cgroup_page_lruvec(page); + lruvec = folio_lruvec(folio); spin_lock(&lruvec->lru_lock); lruvec_memcg_debug(lruvec, page); @@ -1198,9 +1199,10 @@ struct lruvec *lock_page_lruvec(struct page *page) struct lruvec *lock_page_lruvec_irq(struct page *page) { + struct folio *folio = page_folio(page); struct lruvec *lruvec; - lruvec = mem_cgroup_page_lruvec(page); + lruvec = folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); lruvec_memcg_debug(lruvec, page); @@ -1210,9 +1212,10 @@ struct lruvec *lock_page_lruvec_irq(struct page *page) struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags) { + struct folio *folio = page_folio(page); struct lruvec *lruvec; - lruvec = mem_cgroup_page_lruvec(page); + lruvec = folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); lruvec_memcg_debug(lruvec, page); diff --git 
a/mm/swap.c b/mm/swap.c index 11ff40104a2c..4ba77fc8da4f 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -315,7 +315,8 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) void lru_note_cost_page(struct page *page) { - lru_note_cost(mem_cgroup_page_lruvec(page), + struct folio *folio = page_folio(page); + lru_note_cost(folio_lruvec(folio), page_is_file_lru(page), thp_nr_pages(page)); } diff --git a/mm/workingset.c b/mm/workingset.c index 5ba3e42446fa..e62c0f2084a2 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -396,6 +396,7 @@ void workingset_refault(struct page *page, void *shadow) */ void workingset_activation(struct page *page) { + struct folio *folio = page_folio(page); struct mem_cgroup *memcg; struct lruvec *lruvec; @@ -410,7 +411,7 @@ void workingset_activation(struct page *page) memcg = page_memcg_rcu(page); if (!mem_cgroup_disabled() && !memcg) goto out; - lruvec = mem_cgroup_page_lruvec(page); + lruvec = folio_lruvec(folio); workingset_age_nonresident(lruvec, thp_nr_pages(page)); out: rcu_read_unlock();
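A minimal caller-side sketch (not part of the patch; variable names are illustrative). Code that previously looked up the lruvec from a page now goes through the folio, as the compaction and swap hunks above do:

	struct folio *folio = page_folio(page);		/* page -> its owning folio */
	struct lruvec *lruvec = folio_lruvec(folio);	/* memcg-aware; relies on a stable folio->memcg */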
From patchwork Thu Jul 15 03:35:34 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 048/138] mm/memcg: Add folio_lruvec_lock() and similar functions Date: Thu, 15 Jul 2021 04:35:34 +0100 Message-Id: <20210715033704.692967-49-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
These are the folio equivalents of lock_page_lruvec() and similar functions. Also convert lruvec_memcg_debug() to take a folio.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 32 ++++++++++++++----------- mm/compaction.c | 2 +- mm/huge_memory.c | 5 ++-- mm/memcontrol.c | 48 ++++++++++++++++---------------------- mm/rmap.c | 2 +- mm/swap.c | 8 ++++--- mm/vmscan.c | 3 ++- 7 files changed, 50 insertions(+), 50 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 4eb329b5d183..ffb591920241 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -768,15 +768,16 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p); struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm); -struct lruvec *lock_page_lruvec(struct page *page); -struct lruvec *lock_page_lruvec_irq(struct page *page); -struct lruvec *lock_page_lruvec_irqsave(struct page *page, +struct lruvec *folio_lruvec_lock(struct folio *folio); +struct lruvec *folio_lruvec_lock_irq(struct folio *folio); +struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags); #ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page); +void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio); #else -static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) +static inline +void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) { } #endif @@ -1231,7 +1232,8 @@ static inline struct lruvec *folio_lruvec(struct folio *folio) return &pgdat->__lruvec; } -static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) +static inline +void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) { } @@ -1261,26 +1263,26 @@ static inline void mem_cgroup_put(struct mem_cgroup *memcg) { } -static inline struct lruvec *lock_page_lruvec(struct page *page) +static inline
struct lruvec *folio_lruvec_lock(struct folio *folio) { - struct pglist_data *pgdat = page_pgdat(page); + struct pglist_data *pgdat = folio_pgdat(folio); spin_lock(&pgdat->__lruvec.lru_lock); return &pgdat->__lruvec; } -static inline struct lruvec *lock_page_lruvec_irq(struct page *page) +static inline struct lruvec *folio_lruvec_lock_irq(struct folio *folio) { - struct pglist_data *pgdat = page_pgdat(page); + struct pglist_data *pgdat = folio_pgdat(folio); spin_lock_irq(&pgdat->__lruvec.lru_lock); return &pgdat->__lruvec; } -static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page, +static inline struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flagsp) { - struct pglist_data *pgdat = page_pgdat(page); + struct pglist_data *pgdat = folio_pgdat(folio); spin_lock_irqsave(&pgdat->__lruvec.lru_lock, *flagsp); return &pgdat->__lruvec; @@ -1537,6 +1539,7 @@ static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec) static inline struct lruvec *relock_page_lruvec_irq(struct page *page, struct lruvec *locked_lruvec) { + struct folio *folio = page_folio(page); if (locked_lruvec) { if (page_matches_lruvec(page, locked_lruvec)) return locked_lruvec; @@ -1544,13 +1547,14 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page, unlock_page_lruvec_irq(locked_lruvec); } - return lock_page_lruvec_irq(page); + return folio_lruvec_lock_irq(folio); } /* Don't lock again iff page's lruvec locked */ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page, struct lruvec *locked_lruvec, unsigned long *flags) { + struct folio *folio = page_folio(page); if (locked_lruvec) { if (page_matches_lruvec(page, locked_lruvec)) return locked_lruvec; @@ -1558,7 +1562,7 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page, unlock_page_lruvec_irqrestore(locked_lruvec, *flags); } - return lock_page_lruvec_irqsave(page, flags); + return folio_lruvec_lock_irqsave(folio, flags); } #ifdef CONFIG_CGROUP_WRITEBACK diff --git a/mm/compaction.c b/mm/compaction.c index a88f7b893f80..6f77577be248 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -1038,7 +1038,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); locked = lruvec; - lruvec_memcg_debug(lruvec, page); + lruvec_memcg_debug(lruvec, page_folio(page)); /* Try get exclusive access under lock */ if (!skip_updated) { diff --git a/mm/huge_memory.c b/mm/huge_memory.c index ecb1fb1f5f3e..763bf687ca92 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2431,7 +2431,8 @@ static void __split_huge_page_tail(struct page *head, int tail, static void __split_huge_page(struct page *page, struct list_head *list, pgoff_t end) { - struct page *head = compound_head(page); + struct folio *folio = page_folio(page); + struct page *head = &folio->page; struct lruvec *lruvec; struct address_space *swap_cache = NULL; unsigned long offset = 0; @@ -2450,7 +2451,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, } /* lock lru list/PageCompound, ref frozen by page_ref_freeze */ - lruvec = lock_page_lruvec(head); + lruvec = folio_lruvec_lock(folio); for (i = nr - 1; i >= 1; i--) { __split_huge_page_tail(head, i, lruvec, list); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index fd578d70b579..5935f06316b1 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1158,67 +1158,59 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg, } #ifdef CONFIG_DEBUG_VM -void 
lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) +void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) { struct mem_cgroup *memcg; if (mem_cgroup_disabled()) return; - memcg = page_memcg(page); + memcg = folio_memcg(folio); if (!memcg) - VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page); + VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio); else - VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != memcg, page); + VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio); } #endif /** - * lock_page_lruvec - lock and return lruvec for a given page. - * @page: the page + * folio_lruvec_lock - lock and return lruvec for a given folio. + * @folio: Pointer to the folio. * * These functions are safe to use under any of the following conditions: - * - page locked - * - PageLRU cleared - * - lock_page_memcg() - * - page->_refcount is zero + * - folio locked + * - folio_test_lru false + * - folio_memcg_lock() + * - folio frozen (refcount of 0) */ -struct lruvec *lock_page_lruvec(struct page *page) +struct lruvec *folio_lruvec_lock(struct folio *folio) { - struct folio *folio = page_folio(page); - struct lruvec *lruvec; + struct lruvec *lruvec = folio_lruvec(folio); - lruvec = folio_lruvec(folio); spin_lock(&lruvec->lru_lock); - - lruvec_memcg_debug(lruvec, page); + lruvec_memcg_debug(lruvec, folio); return lruvec; } -struct lruvec *lock_page_lruvec_irq(struct page *page) +struct lruvec *folio_lruvec_lock_irq(struct folio *folio) { - struct folio *folio = page_folio(page); - struct lruvec *lruvec; + struct lruvec *lruvec = folio_lruvec(folio); - lruvec = folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); - - lruvec_memcg_debug(lruvec, page); + lruvec_memcg_debug(lruvec, folio); return lruvec; } -struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags) +struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, + unsigned long *flags) { - struct folio *folio = page_folio(page); - struct lruvec *lruvec; + struct lruvec *lruvec = folio_lruvec(folio); - lruvec = folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); - - lruvec_memcg_debug(lruvec, page); + lruvec_memcg_debug(lruvec, folio); return lruvec; } diff --git a/mm/rmap.c b/mm/rmap.c index b9eb5c12f3fe..1df8683c4c4c 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -33,7 +33,7 @@ * mapping->private_lock (in __set_page_dirty_buffers) * lock_page_memcg move_lock (in __set_page_dirty_buffers) * i_pages lock (widely used) - * lruvec->lru_lock (in lock_page_lruvec_irq) + * lruvec->lru_lock (in folio_lruvec_lock_irq) * inode->i_lock (in set_page_dirty's __mark_inode_dirty) * bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty) * sb_lock (within inode_lock in fs/fs-writeback.c) diff --git a/mm/swap.c b/mm/swap.c index 4ba77fc8da4f..6d0d2bfca48e 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -80,10 +80,11 @@ static DEFINE_PER_CPU(struct lru_pvecs, lru_pvecs) = { static void __page_cache_release(struct page *page) { if (PageLRU(page)) { + struct folio *folio = page_folio(page); struct lruvec *lruvec; unsigned long flags; - lruvec = lock_page_lruvec_irqsave(page, &flags); + lruvec = folio_lruvec_lock_irqsave(folio, &flags); del_page_from_lru_list(page, lruvec); __clear_page_lru_flags(page); unlock_page_lruvec_irqrestore(lruvec, flags); @@ -372,11 +373,12 @@ static inline void activate_page_drain(int cpu) static void activate_page(struct page *page) { + struct folio *folio = page_folio(page); struct lruvec *lruvec; - page = compound_head(page); + page = &folio->page; if 
(TestClearPageLRU(page)) { - lruvec = lock_page_lruvec_irq(page); + lruvec = folio_lruvec_lock_irq(folio); __activate_page(page, lruvec); unlock_page_lruvec_irq(lruvec); SetPageLRU(page); diff --git a/mm/vmscan.c b/mm/vmscan.c index 4620df62f0ff..0d48306d37dc 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1965,6 +1965,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan, */ int isolate_lru_page(struct page *page) { + struct folio *folio = page_folio(page); int ret = -EBUSY; VM_BUG_ON_PAGE(!page_count(page), page); @@ -1974,7 +1975,7 @@ int isolate_lru_page(struct page *page) struct lruvec *lruvec; get_page(page); - lruvec = lock_page_lruvec_irq(page); + lruvec = folio_lruvec_lock_irq(folio); del_page_from_lru_list(page, lruvec); unlock_page_lruvec_irq(lruvec); ret = 0;
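A usage sketch for the new lock API (not part of the patch; assumes the caller satisfies one of the stability conditions listed in the kernel-doc above, e.g. the folio is locked or its refcount is frozen):

	struct folio *folio = page_folio(page);
	unsigned long flags;
	struct lruvec *lruvec;

	lruvec = folio_lruvec_lock_irqsave(folio, &flags);
	/* ... modify the folio's LRU state under lruvec->lru_lock ... */
	unlock_page_lruvec_irqrestore(lruvec, flags);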
From patchwork Thu Jul 15 03:35:35 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 049/138] mm/memcg: Add folio_lruvec_relock_irq() and folio_lruvec_relock_irqsave() Date: Thu, 15 Jul 2021 04:35:35 +0100 Message-Id: <20210715033704.692967-50-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
These are the folio equivalents of relock_page_lruvec_irq() and relock_page_lruvec_irqsave(). Also convert page_matches_lruvec() to folio_matches_lruvec().
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 17 ++++++++--------- mm/mlock.c | 3 ++- mm/swap.c | 11 +++++++---- mm/vmscan.c | 5 +++-- 4 files changed, 20 insertions(+), 16 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index ffb591920241..6511f89ad454 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1529,19 +1529,19 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec, } /* Test requires a stable page->memcg binding, see page_memcg() */ -static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec) +static inline bool folio_matches_lruvec(struct folio *folio, + struct lruvec *lruvec) { - return lruvec_pgdat(lruvec) == page_pgdat(page) && - lruvec_memcg(lruvec) == page_memcg(page); + return lruvec_pgdat(lruvec) == folio_pgdat(folio) && + lruvec_memcg(lruvec) == folio_memcg(folio); } /* Don't lock again iff page's lruvec locked */ -static inline struct lruvec *relock_page_lruvec_irq(struct page *page, +static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio, struct lruvec *locked_lruvec) { - struct folio *folio = page_folio(page); if (locked_lruvec) { - if (page_matches_lruvec(page, locked_lruvec)) + if (folio_matches_lruvec(folio, locked_lruvec)) return locked_lruvec; unlock_page_lruvec_irq(locked_lruvec); @@ -1551,12 +1551,11 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page, } /* Don't lock again iff page's lruvec locked */ -static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page, +static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio, struct lruvec *locked_lruvec, unsigned long *flags) { - struct folio *folio = page_folio(page); if (locked_lruvec) { - if (page_matches_lruvec(page, locked_lruvec)) + if (folio_matches_lruvec(folio, locked_lruvec)) return locked_lruvec; unlock_page_lruvec_irqrestore(locked_lruvec, *flags); diff
--git a/mm/mlock.c b/mm/mlock.c index 16d2ee160d43..e263d62ae2d0 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -271,6 +271,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) /* Phase 1: page isolation */ for (i = 0; i < nr; i++) { struct page *page = pvec->pages[i]; + struct folio *folio = page_folio(page); if (TestClearPageMlocked(page)) { /* @@ -278,7 +279,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) * so we can spare the get_page() here. */ if (TestClearPageLRU(page)) { - lruvec = relock_page_lruvec_irq(page, lruvec); + lruvec = folio_lruvec_relock_irq(folio, lruvec); del_page_from_lru_list(page, lruvec); continue; } else diff --git a/mm/swap.c b/mm/swap.c index 6d0d2bfca48e..aa9c32b714c5 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -211,12 +211,13 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, for (i = 0; i < pagevec_count(pvec); i++) { struct page *page = pvec->pages[i]; + struct folio *folio = page_folio(page); /* block memcg migration during page moving between lru */ if (!TestClearPageLRU(page)) continue; - lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags); + lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); (*move_fn)(page, lruvec); SetPageLRU(page); @@ -907,6 +908,7 @@ void release_pages(struct page **pages, int nr) for (i = 0; i < nr; i++) { struct page *page = pages[i]; + struct folio *folio = page_folio(page); /* * Make sure the IRQ-safe lock-holding time does not get @@ -918,7 +920,7 @@ void release_pages(struct page **pages, int nr) lruvec = NULL; } - page = compound_head(page); + page = &folio->page; if (is_huge_zero_page(page)) continue; @@ -957,7 +959,7 @@ void release_pages(struct page **pages, int nr) if (PageLRU(page)) { struct lruvec *prev_lruvec = lruvec; - lruvec = relock_page_lruvec_irqsave(page, lruvec, + lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); if (prev_lruvec != lruvec) lock_batch = 0; @@ -1061,8 +1063,9 @@ void __pagevec_lru_add(struct pagevec *pvec) for (i = 0; i < pagevec_count(pvec); i++) { struct page *page = pvec->pages[i]; + struct folio *folio = page_folio(page); - lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags); + lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); __pagevec_lru_add_fn(page, lruvec); } if (lruvec) diff --git a/mm/vmscan.c b/mm/vmscan.c index 0d48306d37dc..7a2f25b904d9 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2075,7 +2075,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec, * All pages were isolated from the same lruvec (and isolation * inhibits memcg migration). 
*/ - VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page); + VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page); add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); nr_moved += nr_pages; @@ -4514,6 +4514,7 @@ void check_move_unevictable_pages(struct pagevec *pvec) for (i = 0; i < pvec->nr; i++) { struct page *page = pvec->pages[i]; + struct folio *folio = page_folio(page); int nr_pages; if (PageTransTail(page)) @@ -4526,7 +4527,7 @@ void check_move_unevictable_pages(struct pagevec *pvec) if (!TestClearPageLRU(page)) continue; - lruvec = relock_page_lruvec_irq(page, lruvec); + lruvec = folio_lruvec_relock_irq(folio, lruvec); if (page_evictable(page) && PageUnevictable(page)) { del_page_from_lru_list(page, lruvec); ClearPageUnevictable(page);
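A sketch of the batched relock pattern these helpers exist for (not part of the patch; modelled on the pagevec loops above). The lock is only dropped and retaken when consecutive folios belong to different lruvecs:

	struct lruvec *lruvec = NULL;
	unsigned long flags;
	int i;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct folio *folio = page_folio(pvec->pages[i]);

		/* no-op when folio_matches_lruvec(folio, lruvec) */
		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
		/* ... operate on folio under lruvec->lru_lock ... */
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);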
From patchwork Thu Jul 15 03:35:36 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 050/138] mm/workingset: Convert workingset_activation to take a folio Date: Thu, 15 Jul 2021 04:35:36 +0100 Message-Id: <20210715033704.692967-51-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
This function already assumed it was being passed a head page. No real change here, except that thp_nr_pages() compiles away on kernels with THP compiled out while folio_nr_pages() is always present. Also convert page_memcg_rcu() to folio_memcg_rcu().
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 18 +++++++++--------- include/linux/swap.h | 2 +- mm/swap.c | 2 +- mm/workingset.c | 11 ++++------- 4 files changed, 15 insertions(+), 18 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 6511f89ad454..2dd660185bb3 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -461,19 +461,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page) } /* - * page_memcg_rcu - locklessly get the memory cgroup associated with a page - * @page: a pointer to the page struct + * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio. + * @folio: Pointer to the folio. * - * Returns a pointer to the memory cgroup associated with the page, - * or NULL. This function assumes that the page is known to have a + * Returns a pointer to the memory cgroup associated with the folio, + * or NULL. This function assumes that the folio is known to have a * proper memory cgroup pointer. It's not safe to call this function - * against some type of pages, e.g. slab pages or ex-slab pages. + * against some type of folios, e.g. slab folios or ex-slab folios.
*/ -static inline struct mem_cgroup *page_memcg_rcu(struct page *page) +static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { - unsigned long memcg_data = READ_ONCE(page->memcg_data); + unsigned long memcg_data = READ_ONCE(folio->memcg_data); - VM_BUG_ON_PAGE(PageSlab(page), page); + VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); WARN_ON_ONCE(!rcu_read_lock_held()); if (memcg_data & MEMCG_DATA_KMEM) { @@ -1129,7 +1129,7 @@ static inline struct mem_cgroup *page_memcg(struct page *page) return NULL; } -static inline struct mem_cgroup *page_memcg_rcu(struct page *page) +static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { WARN_ON_ONCE(!rcu_read_lock_held()); return NULL; diff --git a/include/linux/swap.h b/include/linux/swap.h index 8394716a002b..989d8f78c256 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -330,7 +330,7 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio) void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages); void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg); void workingset_refault(struct page *page, void *shadow); -void workingset_activation(struct page *page); +void workingset_activation(struct folio *folio); /* Only track the nodes of mappings with shadow entries */ void workingset_update_node(struct xa_node *node); diff --git a/mm/swap.c b/mm/swap.c index aa9c32b714c5..85969b36b636 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -451,7 +451,7 @@ void mark_page_accessed(struct page *page) else __lru_cache_activate_page(page); ClearPageReferenced(page); - workingset_activation(page); + workingset_activation(page_folio(page)); } if (page_is_idle(page)) clear_page_idle(page); diff --git a/mm/workingset.c b/mm/workingset.c index e62c0f2084a2..39bb60d50217 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -392,13 +392,11 @@ void workingset_refault(struct page *page, void *shadow) /** * workingset_activation - note a page activation - * @page: page that is being activated + * @folio: Folio that is being activated. */ -void workingset_activation(struct page *page) +void workingset_activation(struct folio *folio) { - struct folio *folio = page_folio(page); struct mem_cgroup *memcg; - struct lruvec *lruvec; rcu_read_lock(); /* @@ -408,11 +406,10 @@ void workingset_activation(struct page *page) * XXX: See workingset_refault() - this should return * root_mem_cgroup even for !CONFIG_MEMCG. 
*/ - memcg = page_memcg_rcu(page); + memcg = folio_memcg_rcu(folio); if (!mem_cgroup_disabled() && !memcg) goto out; - lruvec = folio_lruvec(folio); - workingset_age_nonresident(lruvec, thp_nr_pages(page)); + workingset_age_nonresident(folio_lruvec(folio), folio_nr_pages(folio)); out: rcu_read_unlock(); }
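The caller-side conversion is a one-liner, as the mm/swap.c hunk above shows; a sketch for any remaining page-based caller:

	workingset_activation(page_folio(page));	/* ages folio_nr_pages() worth of LRU */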
From patchwork Thu Jul 15 03:35:37 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 051/138] mm: Add folio_pfn() Date: Thu, 15 Jul 2021 04:35:37 +0100 Message-Id: <20210715033704.692967-52-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
This is the folio equivalent of page_to_pfn().
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/mm.h | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index c6e2a1682a6d..89daae93aa9b 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1623,6 +1623,20 @@ static inline unsigned long page_to_section(const struct page *page) } #endif +/** + * folio_pfn - Return the Page Frame Number of a folio. + * @folio: The folio. + * + * A folio may contain multiple pages. The pages have consecutive + * Page Frame Numbers. + * + * Return: The Page Frame Number of the first page in the folio. + */ +static inline unsigned long folio_pfn(struct folio *folio) +{ + return page_to_pfn(&folio->page); +} + /* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */ #ifdef CONFIG_MIGRATION static inline bool is_pinnable_page(struct page *page)
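A small sketch of what the consecutive-PFN guarantee buys a caller (not part of the patch; variable names are illustrative):

	unsigned long first_pfn = folio_pfn(folio);
	unsigned long last_pfn = first_pfn + folio_nr_pages(folio) - 1;	/* physically contiguous range */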
From patchwork Thu Jul 15 03:35:38 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 052/138] mm: Add folio_raw_mapping() Date: Thu, 15 Jul 2021 04:35:38 +0100 Message-Id: <20210715033704.692967-53-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
Convert __page_rmapping to folio_raw_mapping and move it to mm/internal.h. It's only a couple of instructions (load and mask), so it's definitely going to be cheaper to inline it than call it. Leave page_rmapping out of line.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/internal.h | 7 +++++++ mm/util.c | 20 ++++---------------- 2 files changed, 11 insertions(+), 16 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 1a8851b73031..fa31a7f0ed79 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -34,6 +34,13 @@ void page_writeback_init(void); +static inline void *folio_raw_mapping(struct folio *folio) +{ + unsigned long mapping = (unsigned long)folio->mapping; + + return (void *)(mapping & ~PAGE_MAPPING_FLAGS); +} + vm_fault_t do_swap_page(struct vm_fault *vmf); void folio_rotate_reclaimable(struct folio *folio); diff --git a/mm/util.c b/mm/util.c index e8c12350b3eb..d0aa1d9c811e 100644 --- a/mm/util.c +++ b/mm/util.c @@ -635,21 +635,10 @@ void kvfree_sensitive(const void *addr, size_t len) } EXPORT_SYMBOL(kvfree_sensitive); -static inline void *__page_rmapping(struct page *page) -{ - unsigned long mapping; - - mapping = (unsigned long)page->mapping; - mapping &= ~PAGE_MAPPING_FLAGS; - - return (void *)mapping; -} - /* Neutral page->mapping pointer to address_space or anon_vma or other */ void *page_rmapping(struct page *page) { - page = compound_head(page); - return __page_rmapping(page); + return folio_raw_mapping(page_folio(page)); } /** @@ -680,13 +669,12 @@ EXPORT_SYMBOL(folio_mapped); struct anon_vma *page_anon_vma(struct page *page) { - unsigned long mapping; + struct folio *folio = page_folio(page); + unsigned long mapping = (unsigned long)folio->mapping; - page = compound_head(page); - mapping = (unsigned long)page->mapping; if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON) return NULL; - return __page_rmapping(page); + return (void *)(mapping - PAGE_MAPPING_ANON); } /**
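A caller-side sketch (not part of the patch): folio_raw_mapping() returns the mapping pointer with the PAGE_MAPPING_* tag bits masked off, so the result may be an address_space, an anon_vma or movable-page private data, and the caller must check the tag bits first, as page_anon_vma() above does:

	unsigned long mapping = (unsigned long)folio->mapping;

	if ((mapping & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON)
		anon_vma = folio_raw_mapping(folio);	/* tag bits stripped */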
From patchwork Thu Jul 15 03:35:39 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 053/138] mm: Add flush_dcache_folio() Date: Thu, 15 Jul 2021 04:35:39 +0100 Message-Id: <20210715033704.692967-54-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
This is a default implementation which calls flush_dcache_page() on each page in the folio. If architectures can do better, they should implement their own version of it.
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka --- Documentation/core-api/cachetlb.rst | 6 ++++++ arch/nds32/include/asm/cacheflush.h | 1 + include/asm-generic/cacheflush.h | 6 ++++++ mm/util.c | 13 +++++++++++++ 4 files changed, 26 insertions(+) diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst index fe4290e26729..29682f69a915 100644 --- a/Documentation/core-api/cachetlb.rst +++ b/Documentation/core-api/cachetlb.rst @@ -325,6 +325,12 @@ maps this page at its virtual address. dirty. Again, see sparc64 for examples of how to deal with this. + ``void flush_dcache_folio(struct folio *folio)`` + This function is called under the same circumstances as + flush_dcache_page(). It allows the architecture to + optimise for flushing the entire folio of pages instead + of flushing one page at a time.
+ ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long user_vaddr, void *dst, void *src, int len)`` ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page, diff --git a/arch/nds32/include/asm/cacheflush.h b/arch/nds32/include/asm/cacheflush.h index 7d6824f7c0e8..f10d13af4ae5 100644 --- a/arch/nds32/include/asm/cacheflush.h +++ b/arch/nds32/include/asm/cacheflush.h @@ -38,6 +38,7 @@ void flush_anon_page(struct vm_area_struct *vma, #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE void flush_kernel_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); void flush_kernel_vmap_range(void *addr, int size); void invalidate_kernel_vmap_range(void *addr, int size); #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&(mapping)->i_pages) diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h index 4a674db4e1fa..fedc0dfa4877 100644 --- a/include/asm-generic/cacheflush.h +++ b/include/asm-generic/cacheflush.h @@ -49,9 +49,15 @@ static inline void flush_cache_page(struct vm_area_struct *vma, static inline void flush_dcache_page(struct page *page) { } + +static inline void flush_dcache_folio(struct folio *folio) { } #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0 +#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO #endif +#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO +void flush_dcache_folio(struct folio *folio); +#endif #ifndef flush_dcache_mmap_lock static inline void flush_dcache_mmap_lock(struct address_space *mapping) diff --git a/mm/util.c b/mm/util.c index d0aa1d9c811e..149537120a91 100644 --- a/mm/util.c +++ b/mm/util.c @@ -1057,3 +1057,16 @@ void page_offline_end(void) up_write(&page_offline_rwsem); } EXPORT_SYMBOL(page_offline_end); + +#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO +void flush_dcache_folio(struct folio *folio) +{ + unsigned int n = folio_nr_pages(folio); + + do { + n--; + flush_dcache_page(folio_page(folio, n)); + } while (n); +} +EXPORT_SYMBOL(flush_dcache_folio); +#endif
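A usage sketch (not part of the patch; assumes addr is a kernel mapping of a pagecache folio that was just written to):

	memcpy(addr, src, len);		/* dirty the folio through the kernel mapping */
	flush_dcache_folio(folio);	/* one call instead of a flush_dcache_page() per page */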
From patchwork Thu Jul 15 03:35:40 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 054/138] mm: Add kmap_local_folio() Date: Thu, 15 Jul 2021 04:35:40 +0100 Message-Id: <20210715033704.692967-55-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
This allows us to map a portion of a folio. Callers can only expect to access up to the next page boundary.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Vlastimil Babka --- include/linux/highmem-internal.h | 11 +++++++++ include/linux/highmem.h | 38 ++++++++++++++++++++++++++++++++ 2 files changed, 49 insertions(+) diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h index 7902c7d8b55f..d5d6f930ae1d 100644 --- a/include/linux/highmem-internal.h +++ b/include/linux/highmem-internal.h @@ -73,6 +73,12 @@ static inline void *kmap_local_page(struct page *page) return __kmap_local_page_prot(page, kmap_prot); } +static inline void *kmap_local_folio(struct folio *folio, size_t offset) +{ + struct page *page = folio_page(folio, offset / PAGE_SIZE); + return __kmap_local_page_prot(page, kmap_prot) + offset % PAGE_SIZE; +} + static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot) { return __kmap_local_page_prot(page, prot); @@ -160,6 +166,11 @@ static inline void *kmap_local_page(struct page *page) return page_address(page); } +static inline void *kmap_local_folio(struct folio *folio, size_t offset) +{ + return page_address(&folio->page) + offset; +} + static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot) { return kmap_local_page(page); diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 8c6e8e996c87..85de3bd0b47d 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -96,6 +96,44 @@ static inline void kmap_flush_unused(void); */ static inline void *kmap_local_page(struct page *page); +/** + * kmap_local_folio - Map a page in this folio for temporary usage + * @folio: The folio to be mapped. + * @offset: The byte offset within the folio. + * + * Returns: The virtual address of the mapping + * + * Can be invoked from any context. + * + * Requires careful handling when nesting multiple mappings because the map + * management is stack based. The unmap has to be in the reverse order of + * the map operation: + * + * addr1 = kmap_local_folio(folio1, offset1); + * addr2 = kmap_local_folio(folio2, offset2); + * ... + * kunmap_local(addr2); + * kunmap_local(addr1); + * + * Unmapping addr1 before addr2 is invalid and causes malfunction. + * + * Contrary to kmap() mappings the mapping is only valid in the context of + * the caller and cannot be handed to other contexts. + * + * On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the + * virtual address of the direct mapping. Only real highmem pages are + * temporarily mapped. + * + * While it is significantly faster than kmap() for the highmem case it + * comes with restrictions about the pointer validity. Only use when really + * necessary. + * + * On HIGHMEM enabled systems mapping a highmem page has the side effect of + * disabling migration in order to keep the virtual address stable across + * preemption. No caller of kmap_local_folio() can rely on this side effect. + */ +static inline void *kmap_local_folio(struct folio *folio, size_t offset); + /** * kmap_atomic - Atomically map a page for temporary usage - Deprecated!

From patchwork Thu Jul 15 03:35:41 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 055/138] mm: Add arch_make_folio_accessible()
Date: Thu, 15 Jul 2021 04:35:41 +0100
Message-Id: <20210715033704.692967-56-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>
As a default implementation, call arch_make_page_accessible() once for
each page in the folio. If an architecture can do better, it can
override this. Also move the default implementation of
arch_make_page_accessible() from gfp.h to mm.h.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
---
 include/linux/gfp.h |  6 ------
 include/linux/mm.h  | 21 +++++++++++++++++++++
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 55b2ec1f965a..dc5ff40608ce 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -520,12 +520,6 @@ static inline void arch_free_page(struct page *page, int order) { }
 #ifndef HAVE_ARCH_ALLOC_PAGE
 static inline void arch_alloc_page(struct page *page, int order) { }
 #endif
-#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
-static inline int arch_make_page_accessible(struct page *page)
-{
-	return 0;
-}
-#endif
 
 struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 89daae93aa9b..deb0f5efaa65 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1732,6 +1732,27 @@ static inline size_t folio_size(struct folio *folio)
 	return PAGE_SIZE << folio_order(folio);
 }
 
+#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
+static inline int arch_make_page_accessible(struct page *page)
+{
+	return 0;
+}
+#endif
+
+#ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE
+static inline int arch_make_folio_accessible(struct folio *folio)
+{
+	int ret, i;
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		ret = arch_make_page_accessible(folio_page(folio, i));
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+#endif
+
 /*
  * Some inline functions in vmstat.h depend on page_zone()
  */
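A sketch of the intended call-site shape (editorial illustration only;
prepare_folio_for_writeback() is a hypothetical name and the I/O submission is
elided): code that hands folio memory to a device first makes every
constituent page accessible, failing the operation on the first error, the
same semantics as the default loop above.

    static int prepare_folio_for_writeback(struct folio *folio)
    {
            /* Refuse to start I/O if the folio cannot be made accessible. */
            int ret = arch_make_folio_accessible(folio);

            if (ret)
                    return ret;
            /* ... allocate and submit a bio covering the folio ... */
            return 0;
    }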
From patchwork Thu Jul 15 03:35:42 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Vlastimil Babka, William Kucharski, Christoph Hellwig
Subject: [PATCH v14 056/138] mm: Add folio_young and folio_idle
Date: Thu, 15 Jul 2021 04:35:42 +0100
Message-Id: <20210715033704.692967-57-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Idle page tracking is handled through page_ext on 32-bit architectures.
Add folio equivalents for 32-bit and move all the page compatibility
parts to common code.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
---
 include/linux/page_idle.h | 99 +++++++++++++++++++--------------------
 1 file changed, 49 insertions(+), 50 deletions(-)

diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
index 1e894d34bdce..1bcb1365b1d0 100644
--- a/include/linux/page_idle.h
+++ b/include/linux/page_idle.h
@@ -8,46 +8,16 @@
 
 #ifdef CONFIG_IDLE_PAGE_TRACKING
 
-#ifdef CONFIG_64BIT
-static inline bool page_is_young(struct page *page)
-{
-	return PageYoung(page);
-}
-
-static inline void set_page_young(struct page *page)
-{
-	SetPageYoung(page);
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return TestClearPageYoung(page);
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return PageIdle(page);
-}
-
-static inline void set_page_idle(struct page *page)
-{
-	SetPageIdle(page);
-}
-
-static inline void clear_page_idle(struct page *page)
-{
-	ClearPageIdle(page);
-}
-#else /* !CONFIG_64BIT */
+#ifndef CONFIG_64BIT
 /*
  * If there is not enough space to store Idle and Young bits in page flags, use
  * page ext flags instead.
  */
 extern struct page_ext_operations page_idle_ops;
 
-static inline bool page_is_young(struct page *page)
+static inline bool folio_test_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
@@ -55,9 +25,9 @@ static inline bool page_is_young(struct page *page)
 	return test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline void set_page_young(struct page *page)
+static inline void folio_set_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
@@ -65,9 +35,9 @@ static inline void set_page_young(struct page *page)
 	set_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline bool test_and_clear_page_young(struct page *page)
+static inline bool folio_test_clear_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
@@ -75,9 +45,9 @@ static inline bool test_and_clear_page_young(struct page *page)
 	return test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline bool page_is_idle(struct page *page)
+static inline bool folio_test_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
@@ -85,9 +55,9 @@ static inline bool page_is_idle(struct page *page)
 	return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
 
-static inline void set_page_idle(struct page *page)
+static inline void folio_set_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
@@ -95,46 +65,75 @@ static inline void set_page_idle(struct page *page)
 	set_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
 
-static inline void clear_page_idle(struct page *page)
+static inline void folio_clear_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
 
 	clear_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
-#endif /* CONFIG_64BIT */
+#endif /* !CONFIG_64BIT */
 
 #else /* !CONFIG_IDLE_PAGE_TRACKING */
 
-static inline bool page_is_young(struct page *page)
+static inline bool folio_test_young(struct folio *folio)
 {
 	return false;
 }
 
-static inline void set_page_young(struct page *page)
+static inline void folio_set_young(struct folio *folio)
 {
 }
 
-static inline bool test_and_clear_page_young(struct page *page)
+static inline bool folio_test_clear_young(struct folio *folio)
 {
 	return false;
 }
 
-static inline bool page_is_idle(struct page *page)
+static inline bool folio_test_idle(struct folio *folio)
 {
 	return false;
 }
 
-static inline void set_page_idle(struct page *page)
+static inline void folio_set_idle(struct folio *folio)
 {
 }
 
-static inline void clear_page_idle(struct page *page)
+static inline void folio_clear_idle(struct folio *folio)
 {
 }
 #endif /* CONFIG_IDLE_PAGE_TRACKING */
 
+static inline bool page_is_young(struct page *page)
+{
+	return folio_test_young(page_folio(page));
+}
+
+static inline void set_page_young(struct page *page)
+{
+	folio_set_young(page_folio(page));
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return folio_test_clear_young(page_folio(page));
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return folio_test_idle(page_folio(page));
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	folio_set_idle(page_folio(page));
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	folio_clear_idle(page_folio(page));
+}
 #endif /* _LINUX_MM_PAGE_IDLE_H */
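For orientation, a sketch of the two-pass idle estimation these helpers
support, modelled loosely on what mm/page_idle.c does today (an editorial
illustration; folio_idle_scan() is a hypothetical name, and locking and
refcounting are elided):

    static bool folio_idle_scan(struct folio *folio)
    {
            /* An access observed via the page tables sets the young bit;
             * fold it into the idle state before sampling. */
            if (folio_test_clear_young(folio))
                    folio_clear_idle(folio);

            if (folio_test_idle(folio))
                    return true;    /* untouched since the previous pass */

            folio_set_idle(folio);  /* arm for the next pass */
            return false;
    }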

From patchwork Thu Jul 15 03:35:43 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 057/138] mm/swap: Add folio_activate()
Date: Thu, 15 Jul 2021 04:35:43 +0100
Message-Id: <20210715033704.692967-58-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This replaces activate_page() and eliminates lots of calls to
compound_head(). Saves net 118 bytes of kernel text. There are still
some redundant calls to page_folio() here which will be removed when
pagevec_lru_move_fn() is converted to use folios.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Vlastimil Babka
---
 include/trace/events/pagemap.h | 14 +++++-------
 mm/swap.c                      | 41 ++++++++++++++++++----------------
 2 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/include/trace/events/pagemap.h b/include/trace/events/pagemap.h
index 92ad176210ff..1fd0185d66e8 100644
--- a/include/trace/events/pagemap.h
+++ b/include/trace/events/pagemap.h
@@ -60,23 +60,21 @@ TRACE_EVENT(mm_lru_insertion,
 
 TRACE_EVENT(mm_lru_activate,
 
-	TP_PROTO(struct page *page),
+	TP_PROTO(struct folio *folio),
 
-	TP_ARGS(page),
+	TP_ARGS(folio),
 
 	TP_STRUCT__entry(
-		__field(struct page *,	page	)
+		__field(struct folio *,	folio	)
 		__field(unsigned long,	pfn	)
 	),
 
 	TP_fast_assign(
-		__entry->page = page;
-		__entry->pfn = page_to_pfn(page);
+		__entry->folio = folio;
+		__entry->pfn = folio_pfn(folio);
 	),
 
-	/* Flag format is based on page-types.c formatting for pagemap */
-	TP_printk("page=%p pfn=0x%lx", __entry->page, __entry->pfn)
-
+	TP_printk("folio=%p pfn=0x%lx", __entry->folio, __entry->pfn)
 );
 
 #endif /* _TRACE_PAGEMAP_H */
diff --git a/mm/swap.c b/mm/swap.c
index 85969b36b636..c3137e4e1cd8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -322,15 +322,15 @@ void lru_note_cost_page(struct page *page)
 		      page_is_file_lru(page), thp_nr_pages(page));
 }
 
-static void __activate_page(struct page *page, struct lruvec *lruvec)
+static void __folio_activate(struct folio *folio, struct lruvec *lruvec)
 {
-	if (!PageActive(page) && !PageUnevictable(page)) {
-		int nr_pages = thp_nr_pages(page);
+	if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+		int nr_pages = folio_nr_pages(folio);
 
-		del_page_from_lru_list(page, lruvec);
-		SetPageActive(page);
-		add_page_to_lru_list(page, lruvec);
-		trace_mm_lru_activate(page);
+		lruvec_del_folio(lruvec, folio);
+		folio_set_active(folio);
+		lruvec_add_folio(lruvec, folio);
+		trace_mm_lru_activate(folio);
 
 		__count_vm_events(PGACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE,
@@ -339,6 +339,11 @@ static void __activate_page(struct page *page, struct lruvec *lruvec)
 }
 
 #ifdef CONFIG_SMP
+static void __activate_page(struct page *page, struct lruvec *lruvec)
+{
+	return __folio_activate(page_folio(page), lruvec);
+}
+
 static void activate_page_drain(int cpu)
 {
 	struct pagevec *pvec = &per_cpu(lru_pvecs.activate_page, cpu);
@@ -352,16 +357,16 @@ static bool need_activate_page_drain(int cpu)
 	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-static void activate_page(struct page *page)
+static void folio_activate(struct folio *folio)
 {
-	page = compound_head(page);
-	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
+	if (folio_test_lru(folio) && !folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		struct pagevec *pvec;
 
+		folio_get(folio);
 		local_lock(&lru_pvecs.lock);
 		pvec = this_cpu_ptr(&lru_pvecs.activate_page);
-		get_page(page);
-		if (pagevec_add_and_need_flush(pvec, page))
+		if (pagevec_add_and_need_flush(pvec, &folio->page))
 			pagevec_lru_move_fn(pvec, __activate_page);
 		local_unlock(&lru_pvecs.lock);
 	}
@@ -372,17 +377,15 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static void activate_page(struct page *page)
+static void folio_activate(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	struct lruvec *lruvec;
 
-	page = &folio->page;
-	if (TestClearPageLRU(page)) {
+	if (folio_test_clear_lru(folio)) {
 		lruvec = folio_lruvec_lock_irq(folio);
-		__activate_page(page, lruvec);
+		__folio_activate(folio, lruvec);
 		unlock_page_lruvec_irq(lruvec);
-		SetPageLRU(page);
+		folio_set_lru(folio);
 	}
 }
 #endif
 
@@ -447,7 +450,7 @@ void mark_page_accessed(struct page *page)
 		 * LRU on the next drain.
 		 */
 		if (PageLRU(page))
-			activate_page(page);
+			folio_activate(page_folio(page));
 		else
 			__lru_cache_activate_page(page);
 		ClearPageReferenced(page);
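One nuance in the !CONFIG_SMP path above is worth calling out:
folio_test_clear_lru() doubles as a lightweight exclusion mechanism. While
the lru flag is clear, a concurrent isolation cannot operate on the folio,
and the flag is only restored once the move between lists is complete. A
condensed sketch of the idiom (editorial illustration; activate_now() is a
hypothetical name):

    static void activate_now(struct folio *folio)
    {
            /* Claiming the lru flag excludes concurrent isolation. */
            if (folio_test_clear_lru(folio)) {
                    struct lruvec *lruvec = folio_lruvec_lock_irq(folio);

                    __folio_activate(folio, lruvec);
                    unlock_page_lruvec_irq(lruvec);
                    folio_set_lru(folio);   /* make it visible again */
            }
    }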

From patchwork Thu Jul 15 03:35:44 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 058/138] mm/swap: Add folio_mark_accessed()
Date: Thu, 15 Jul 2021 04:35:44 +0100
Message-Id: <20210715033704.692967-59-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Convert mark_page_accessed() to folio_mark_accessed(). It already
operated on the entire compound page, but now we can avoid calling
compound_head() quite so many times. Shrinks the function from 424 bytes
to 295 bytes (a saving of 129 bytes). The compatibility wrapper is 30
bytes, plus 8 bytes for the exported symbol, so the kernel shrinks by a
net 91 bytes.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
---
 include/linux/swap.h |  3 ++-
 mm/folio-compat.c    |  7 +++++++
 mm/swap.c            | 34 ++++++++++++++++------------------
 3 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 989d8f78c256..c7a4c0a5863d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -352,7 +352,8 @@ extern void lru_note_cost(struct lruvec *lruvec, bool file,
 			  unsigned int nr_pages);
 extern void lru_note_cost_page(struct page *);
 extern void lru_cache_add(struct page *);
-extern void mark_page_accessed(struct page *);
+void mark_page_accessed(struct page *);
+void folio_mark_accessed(struct folio *);
 
 extern atomic_t lru_disable_count;
 
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 7044fcc8a8aa..a374747ae1c6 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -5,6 +5,7 @@
  */
 
 #include <linux/pagemap.h>
+#include <linux/swap.h>
 
 struct address_space *page_mapping(struct page *page)
 {
@@ -41,3 +42,9 @@ bool page_mapped(struct page *page)
 	return folio_mapped(page_folio(page));
 }
 EXPORT_SYMBOL(page_mapped);
+
+void mark_page_accessed(struct page *page)
+{
+	folio_mark_accessed(page_folio(page));
+}
+EXPORT_SYMBOL(mark_page_accessed);
diff --git a/mm/swap.c b/mm/swap.c
index c3137e4e1cd8..d32007fe23b3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -390,7 +390,7 @@ static void folio_activate(struct folio *folio)
 }
 #endif
 
-static void __lru_cache_activate_page(struct page *page)
+static void __lru_cache_activate_folio(struct folio *folio)
 {
 	struct pagevec *pvec;
 	int i;
@@ -411,8 +411,8 @@ static void __lru_cache_activate_page(struct page *page)
 	for (i = pagevec_count(pvec) - 1; i >= 0; i--) {
 		struct page *pagevec_page = pvec->pages[i];
 
-		if (pagevec_page == page) {
-			SetPageActive(page);
+		if (pagevec_page == &folio->page) {
+			folio_set_active(folio);
 			break;
 		}
 	}
@@ -430,36 +430,34 @@ static void __lru_cache_activate_page(struct page *page)
  * When a newly allocated page is not yet visible, so safe for non-atomic ops,
  * __SetPageReferenced(page) may be substituted for mark_page_accessed(page).
  */
-void mark_page_accessed(struct page *page)
+void folio_mark_accessed(struct folio *folio)
 {
-	page = compound_head(page);
-
-	if (!PageReferenced(page)) {
-		SetPageReferenced(page);
-	} else if (PageUnevictable(page)) {
+	if (!folio_test_referenced(folio)) {
+		folio_set_referenced(folio);
+	} else if (folio_test_unevictable(folio)) {
 		/*
 		 * Unevictable pages are on the "LRU_UNEVICTABLE" list. But,
 		 * this list is never rotated or maintained, so marking an
 		 * evictable page accessed has no effect.
 		 */
-	} else if (!PageActive(page)) {
+	} else if (!folio_test_active(folio)) {
 		/*
 		 * If the page is on the LRU, queue it for activation via
 		 * lru_pvecs.activate_page. Otherwise, assume the page is on a
 		 * pagevec, mark it active and it'll be moved to the active
 		 * LRU on the next drain.
 		 */
-		if (PageLRU(page))
-			folio_activate(page_folio(page));
+		if (folio_test_lru(folio))
+			folio_activate(folio);
 		else
-			__lru_cache_activate_page(page);
-		ClearPageReferenced(page);
-		workingset_activation(page_folio(page));
+			__lru_cache_activate_folio(folio);
+		folio_clear_referenced(folio);
+		workingset_activation(folio);
 	}
-	if (page_is_idle(page))
-		clear_page_idle(page);
+	if (folio_test_idle(folio))
+		folio_clear_idle(folio);
 }
-EXPORT_SYMBOL(mark_page_accessed);
+EXPORT_SYMBOL(folio_mark_accessed);
 
 /**
  * lru_cache_add - add a page to a page list
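Seen from a caller, the function implements a small promotion state machine.
A sketch (editorial illustration; demo_touch_twice() is a hypothetical name
and assumes the folio is already on the LRU):

    static void demo_touch_twice(struct folio *folio)
    {
            folio_mark_accessed(folio);     /* 1st hit: sets the referenced flag */
            folio_mark_accessed(folio);     /* 2nd hit: referenced + on LRU, so the
                                             * folio is queued for activation and
                                             * the referenced flag is cleared */
    }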

From patchwork Thu Jul 15 03:35:45 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 059/138] mm/rmap: Add folio_mkclean()
Date: Thu, 15 Jul 2021 04:35:45 +0100
Message-Id: <20210715033704.692967-60-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Transform page_mkclean() into folio_mkclean() and add a page_mkclean()
wrapper around folio_mkclean().

folio_mkclean is 15 bytes smaller than page_mkclean, but the kernel is
enlarged by 33 bytes due to inlining page_folio() into each caller.
This will go away once the callers are converted to use folio_mkclean().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/rmap.h | 10 ++++++----
 mm/rmap.c            | 12 ++++++------
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 83fb86133fe1..d45584310cde 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -235,7 +235,7 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  *
  * returns the number of cleaned PTEs.
  */
-int page_mkclean(struct page *);
+int folio_mkclean(struct folio *);
 
 /*
  * called in munlock()/munmap() path to check for other vmas holding
@@ -293,12 +293,14 @@ static inline int page_referenced(struct page *page, int is_locked,
 
 #define try_to_unmap(page, refs) false
 
-static inline int page_mkclean(struct page *page)
+static inline int folio_mkclean(struct folio *folio)
 {
 	return 0;
 }
-
-
 #endif	/* CONFIG_MMU */
 
+static inline int page_mkclean(struct page *page)
+{
+	return folio_mkclean(page_folio(page));
+}
+
 #endif	/* _LINUX_RMAP_H */
diff --git a/mm/rmap.c b/mm/rmap.c
index 1df8683c4c4c..b3aae8eeaeaf 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -980,7 +980,7 @@ static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg)
 	return true;
 }
 
-int page_mkclean(struct page *page)
+int folio_mkclean(struct folio *folio)
 {
 	int cleaned = 0;
 	struct address_space *mapping;
@@ -990,20 +990,20 @@ int page_mkclean(struct page *page)
 		.invalid_vma = invalid_mkclean_vma,
 	};
 
-	BUG_ON(!PageLocked(page));
+	BUG_ON(!folio_test_locked(folio));
 
-	if (!page_mapped(page))
+	if (!folio_mapped(folio))
 		return 0;
 
-	mapping = page_mapping(page);
+	mapping = folio_mapping(folio);
 	if (!mapping)
 		return 0;
 
-	rmap_walk(page, &rwc);
+	rmap_walk(&folio->page, &rwc);
 
 	return cleaned;
 }
-EXPORT_SYMBOL_GPL(page_mkclean);
+EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
  * page_move_anon_rmap - move a page to our anon_vma
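A sketch of a typical call-site shape when starting writeback, loosely
modelled on how clear_page_dirty_for_io() uses page_mkclean() today
(editorial illustration; folio_needs_writeback() is a hypothetical name):
the folio must be locked, and the return value, the number of PTEs
write-protected and cleaned, tells the caller whether userspace had the
folio mapped writably.

    static bool folio_needs_writeback(struct folio *folio)
    {
            int cleaned;

            VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
            cleaned = folio_mkclean(folio);
            /* Any cleaned PTE means a write could have raced with us. */
            return cleaned > 0 || folio_test_dirty(folio);
    }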

From patchwork Thu Jul 15 03:35:46 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 060/138] mm/migrate: Add folio_migrate_mapping()
Date: Thu, 15 Jul 2021 04:35:46 +0100
Message-Id: <20210715033704.692967-61-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Reimplement migrate_page_move_mapping() as a wrapper around
folio_migrate_mapping(). Saves 193 bytes of kernel text.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Vlastimil Babka
---
 include/linux/migrate.h |  2 +
 mm/folio-compat.c       | 11 ++++++
 mm/migrate.c            | 85 +++++++++++++++++++++--------------------
 3 files changed, 57 insertions(+), 41 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 23dadf7aeba8..eb14495a1f46 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -51,6 +51,8 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page);
 extern int migrate_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page, int extra_count);
+int folio_migrate_mapping(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int extra_count);
 
 #else
 
 static inline void putback_movable_pages(struct list_head *l) {}
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index a374747ae1c6..d883d964fd52 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -4,6 +4,7 @@
  * eventually.
  */
 
+#include <linux/migrate.h>
 #include <linux/pagemap.h>
 #include <linux/swap.h>
 
@@ -48,3 +49,13 @@ void mark_page_accessed(struct page *page)
 {
 	folio_mark_accessed(page_folio(page));
 }
 EXPORT_SYMBOL(mark_page_accessed);
+
+#ifdef CONFIG_MIGRATION
+int migrate_page_move_mapping(struct address_space *mapping,
+		struct page *newpage, struct page *page, int extra_count)
+{
+	return folio_migrate_mapping(mapping, page_folio(newpage),
+					page_folio(page), extra_count);
+}
+EXPORT_SYMBOL(migrate_page_move_mapping);
+#endif
diff --git a/mm/migrate.c b/mm/migrate.c
index 910552318df3..aa4f2310c5bb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -363,7 +363,7 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
 	 */
 	expected_count += is_device_private_page(page);
 	if (mapping)
-		expected_count += thp_nr_pages(page) + page_has_private(page);
+		expected_count += compound_nr(page) + page_has_private(page);
 
 	return expected_count;
 }
@@ -376,74 +376,75 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
 * 2 for pages with a mapping
 * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
 */
-int migrate_page_move_mapping(struct address_space *mapping,
-		struct page *newpage, struct page *page, int extra_count)
+int folio_migrate_mapping(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int extra_count)
 {
-	XA_STATE(xas, &mapping->i_pages, page_index(page));
+	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
 	struct zone *oldzone, *newzone;
 	int dirty;
-	int expected_count = expected_page_refs(mapping, page) + extra_count;
-	int nr = thp_nr_pages(page);
+	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
+	int nr = folio_nr_pages(folio);
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
-		if (page_count(page) != expected_count)
+		if (folio_ref_count(folio) != expected_count)
 			return -EAGAIN;
 
 		/* No turning back from here */
-		newpage->index = page->index;
-		newpage->mapping = page->mapping;
-		if (PageSwapBacked(page))
-			__SetPageSwapBacked(newpage);
+		newfolio->index = folio->index;
+		newfolio->mapping = folio->mapping;
+		if (folio_test_swapbacked(folio))
+			__folio_set_swapbacked(newfolio);
 
 		return MIGRATEPAGE_SUCCESS;
 	}
 
-	oldzone = page_zone(page);
-	newzone = page_zone(newpage);
+	oldzone = folio_zone(folio);
+	newzone = folio_zone(newfolio);
 
 	xas_lock_irq(&xas);
-	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+	if (folio_ref_count(folio) != expected_count ||
+	    xas_load(&xas) != folio) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
-	if (!page_ref_freeze(page, expected_count)) {
+	if (!folio_ref_freeze(folio, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	/*
-	 * Now we know that no one else is looking at the page:
+	 * Now we know that no one else is looking at the folio:
 	 * no turning back from here.
 	 */
-	newpage->index = page->index;
-	newpage->mapping = page->mapping;
-	page_ref_add(newpage, nr); /* add cache reference */
-	if (PageSwapBacked(page)) {
-		__SetPageSwapBacked(newpage);
-		if (PageSwapCache(page)) {
-			SetPageSwapCache(newpage);
-			set_page_private(newpage, page_private(page));
+	newfolio->index = folio->index;
+	newfolio->mapping = folio->mapping;
+	folio_ref_add(newfolio, nr); /* add cache reference */
+	if (folio_test_swapbacked(folio)) {
+		__folio_set_swapbacked(newfolio);
+		if (folio_test_swapcache(folio)) {
+			folio_set_swapcache(newfolio);
+			newfolio->private = folio_get_private(folio);
 		}
 	} else {
-		VM_BUG_ON_PAGE(PageSwapCache(page), page);
+		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
-	dirty = PageDirty(page);
+	dirty = folio_test_dirty(folio);
 	if (dirty) {
-		ClearPageDirty(page);
-		SetPageDirty(newpage);
+		folio_clear_dirty(folio);
+		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newpage);
-	if (PageTransHuge(page)) {
+	xas_store(&xas, newfolio);
+	if (nr > 1) {
 		int i;
 
 		for (i = 1; i < nr; i++) {
 			xas_next(&xas);
-			xas_store(&xas, newpage);
+			xas_store(&xas, newfolio);
 		}
 	}
 
@@ -452,7 +453,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 * to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	page_ref_unfreeze(page, expected_count - nr);
+	folio_ref_unfreeze(folio, expected_count - nr);
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
@@ -471,18 +472,18 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		struct lruvec *old_lruvec, *new_lruvec;
 		struct mem_cgroup *memcg;
 
-		memcg = page_memcg(page);
+		memcg = folio_memcg(folio);
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
 
 		__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
 		__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
-		if (PageSwapBacked(page) && !PageSwapCache(page)) {
+		if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
 			__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
 			__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
 		}
 #ifdef CONFIG_SWAP
-		if (PageSwapCache(page)) {
+		if (folio_test_swapcache(folio)) {
 			__mod_lruvec_state(old_lruvec, NR_SWAPCACHE, -nr);
 			__mod_lruvec_state(new_lruvec, NR_SWAPCACHE, nr);
 		}
@@ -498,11 +499,11 @@ int migrate_page_move_mapping(struct address_space *mapping,
 
 	return MIGRATEPAGE_SUCCESS;
 }
-EXPORT_SYMBOL(migrate_page_move_mapping);
+EXPORT_SYMBOL(folio_migrate_mapping);
 
 /*
  * The expected number of remaining references is the same as that
- * of migrate_page_move_mapping().
+ * of folio_migrate_mapping().
  */
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct page *newpage, struct page *page)
@@ -563,7 +564,7 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	if (PageMappedToDisk(page))
 		SetPageMappedToDisk(newpage);
 
-	/* Move dirty on pages not done by migrate_page_move_mapping() */
+	/* Move dirty on pages not done by folio_migrate_mapping() */
 	if (PageDirty(page))
 		SetPageDirty(newpage);
 
@@ -639,11 +640,13 @@ int migrate_page(struct address_space *mapping,
 		struct page *newpage, struct page *page,
 		enum migrate_mode mode)
 {
+	struct folio *newfolio = page_folio(newpage);
+	struct folio *folio = page_folio(page);
 	int rc;
 
-	BUG_ON(PageWriteback(page));	/* Writeback must be complete */
+	BUG_ON(folio_test_writeback(folio));	/* Writeback must be complete */
 
-	rc = migrate_page_move_mapping(mapping, newpage, page, 0);
+	rc = folio_migrate_mapping(mapping, newfolio, folio, 0);
 
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
@@ -2387,7 +2390,7 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
  * @page: struct page to check
  *
  * Pinned pages cannot be migrated. This is the same test as in
- * migrate_page_move_mapping(), except that here we allow migration of a
+ * folio_migrate_mapping(), except that here we allow migration of a
  * ZONE_DEVICE page.
  */
 static bool migrate_vma_check_page(struct page *page)
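To show how the new primitive composes, here is a sketch of a minimal
->migratepage() implementation built on it, mirroring the migrate_page()
conversion above (editorial illustration; simple_fs_migratepage() is a
hypothetical name and the data copy is elided):

    static int simple_fs_migratepage(struct address_space *mapping,
                    struct page *newpage, struct page *page,
                    enum migrate_mode mode)
    {
            struct folio *newfolio = page_folio(newpage);
            struct folio *folio = page_folio(page);
            int rc;

            BUG_ON(folio_test_writeback(folio));    /* writeback must be complete */

            /* Move the cache entry and transfer the refcount. */
            rc = folio_migrate_mapping(mapping, newfolio, folio, 0);
            if (rc != MIGRATEPAGE_SUCCESS)
                    return rc;

            /* ... copy data if the mode requires it (elided) ... */
            migrate_page_states(newpage, page);     /* flags copied separately */
            return MIGRATEPAGE_SUCCESS;
    }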

From patchwork Thu Jul 15 03:35:47 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 061/138] mm/migrate: Add folio_migrate_flags()
Date: Thu, 15 Jul 2021 04:35:47 +0100
Message-Id: <20210715033704.692967-62-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Turn migrate_page_states() into a wrapper around folio_migrate_flags().
Also convert two functions only called from folio_migrate_flags() to be
folio-based: ksm_migrate_page() becomes folio_migrate_ksm() and
copy_page_owner() becomes folio_copy_owner(). folio_migrate_flags()
alone shrinks by two thirds -- 1967 bytes down to 642 bytes.
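As orientation for the diff below, the two folio migration helpers are
designed to compose like this in a migration path (editorial sketch,
cf. migrate_page() in the previous patch; do_folio_migrate() is a
hypothetical name):

    static int do_folio_migrate(struct address_space *mapping,
                    struct folio *newfolio, struct folio *folio)
    {
            int rc = folio_migrate_mapping(mapping, newfolio, folio, 0);

            if (rc != MIGRATEPAGE_SUCCESS)
                    return rc;
            /* Copy PG_* bits, cpupid, KSM and page_owner state. */
            folio_migrate_flags(newfolio, folio);
            return MIGRATEPAGE_SUCCESS;
    }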
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/ksm.h        |  4 +-
 include/linux/migrate.h    |  1 +
 include/linux/page_owner.h |  8 ++--
 mm/folio-compat.c          |  6 +++
 mm/ksm.c                   | 31 ++++++++------
 mm/migrate.c               | 84 +++++++++++++++++++-------------------
 mm/page_owner.c            | 10 ++---
 7 files changed, 77 insertions(+), 67 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 161e8164abcf..a38a5bca1ba5 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -52,7 +52,7 @@ struct page *ksm_might_need_to_copy(struct page *page,
 			struct vm_area_struct *vma, unsigned long address);
 
 void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
-void ksm_migrate_page(struct page *newpage, struct page *oldpage);
+void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
 
 #else  /* !CONFIG_KSM */
 
@@ -83,7 +83,7 @@ static inline void rmap_walk_ksm(struct page *page,
 {
 }
 
-static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage)
+static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
 {
 }
 #endif /* CONFIG_MMU */
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index eb14495a1f46..ba0a554b3eae 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -51,6 +51,7 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page);
 extern int migrate_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page, int extra_count);
+void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
 int folio_migrate_mapping(struct address_space *mapping,
 		struct folio *newfolio, struct folio *folio, int extra_count);
 
 #else
diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 719bfe5108c5..43c638c51c1f 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -12,7 +12,7 @@ extern void __reset_page_owner(struct page *page, unsigned int order);
 extern void __set_page_owner(struct page *page,
 			unsigned int order, gfp_t gfp_mask);
 extern void __split_page_owner(struct page *page, unsigned int nr);
-extern void __copy_page_owner(struct page *oldpage, struct page *newpage);
+extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(const struct page *page);
 extern void pagetypeinfo_showmixedcount_print(struct seq_file *m,
@@ -36,10 +36,10 @@ static inline void split_page_owner(struct page *page, unsigned int nr)
 	if (static_branch_unlikely(&page_owner_inited))
 		__split_page_owner(page, nr);
 }
-static inline void copy_page_owner(struct page *oldpage, struct page *newpage)
+static inline void folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__copy_page_owner(oldpage, newpage);
+		__folio_copy_owner(newfolio, old);
 }
 static inline void set_page_owner_migrate_reason(struct page *page, int reason)
 {
@@ -63,7 +63,7 @@ static inline void split_page_owner(struct page *page,
 			unsigned int order)
 {
 }
-static inline void copy_page_owner(struct page *oldpage, struct page *newpage)
+static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
 {
 }
 static inline void set_page_owner_migrate_reason(struct page *page, int reason)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index d883d964fd52..3f00ad92d1ff 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
b/mm/folio-compat.c @@ -58,4 +58,10 @@ int migrate_page_move_mapping(struct address_space *mapping, page_folio(page), extra_count); } EXPORT_SYMBOL(migrate_page_move_mapping); + +void migrate_page_states(struct page *newpage, struct page *page) +{ + folio_migrate_flags(page_folio(newpage), page_folio(page)); +} +EXPORT_SYMBOL(migrate_page_states); #endif diff --git a/mm/ksm.c b/mm/ksm.c index 23d36b59f997..3a70786906eb 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -753,7 +753,7 @@ static struct page *get_ksm_page(struct stable_node *stable_node, /* * We come here from above when page->mapping or !PageSwapCache * suggests that the node is stale; but it might be under migration. - * We need smp_rmb(), matching the smp_wmb() in ksm_migrate_page(), + * We need smp_rmb(), matching the smp_wmb() in folio_migrate_ksm(), * before checking whether node->kpfn has been changed. */ smp_rmb(); @@ -854,9 +854,14 @@ static int unmerge_ksm_pages(struct vm_area_struct *vma, return err; } +static inline struct stable_node *folio_stable_node(struct folio *folio) +{ + return folio_test_ksm(folio) ? folio_raw_mapping(folio) : NULL; +} + static inline struct stable_node *page_stable_node(struct page *page) { - return PageKsm(page) ? page_rmapping(page) : NULL; + return folio_stable_node(page_folio(page)); } static inline void set_page_stable_node(struct page *page, @@ -2661,26 +2666,26 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) } #ifdef CONFIG_MIGRATION -void ksm_migrate_page(struct page *newpage, struct page *oldpage) +void folio_migrate_ksm(struct folio *newfolio, struct folio *folio) { struct stable_node *stable_node; - VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage); - VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); - VM_BUG_ON_PAGE(newpage->mapping != oldpage->mapping, newpage); + VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + VM_BUG_ON_FOLIO(!folio_test_locked(newfolio), newfolio); + VM_BUG_ON_FOLIO(newfolio->mapping != folio->mapping, newfolio); - stable_node = page_stable_node(newpage); + stable_node = folio_stable_node(folio); if (stable_node) { - VM_BUG_ON_PAGE(stable_node->kpfn != page_to_pfn(oldpage), oldpage); - stable_node->kpfn = page_to_pfn(newpage); + VM_BUG_ON_FOLIO(stable_node->kpfn != folio_pfn(folio), folio); + stable_node->kpfn = folio_pfn(newfolio); /* - * newpage->mapping was set in advance; now we need smp_wmb() + * newfolio->mapping was set in advance; now we need smp_wmb() * to make sure that the new stable_node->kpfn is visible - * to get_ksm_page() before it can see that oldpage->mapping - * has gone stale (or that PageSwapCache has been cleared). + * to get_ksm_page() before it can see that folio->mapping + * has gone stale (or that folio_test_swapcache has been cleared). 
*/ smp_wmb(); - set_page_stable_node(oldpage, NULL); + set_page_stable_node(&folio->page, NULL); } } #endif /* CONFIG_MIGRATION */ diff --git a/mm/migrate.c b/mm/migrate.c index aa4f2310c5bb..a86be2bfc9a1 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -538,82 +538,80 @@ int migrate_huge_page_move_mapping(struct address_space *mapping, } /* - * Copy the page to its new location + * Copy the flags and some other ancillary information */ -void migrate_page_states(struct page *newpage, struct page *page) +void folio_migrate_flags(struct folio *newfolio, struct folio *folio) { - struct folio *folio = page_folio(page); - struct folio *newfolio = page_folio(newpage); int cpupid; - if (PageError(page)) - SetPageError(newpage); - if (PageReferenced(page)) - SetPageReferenced(newpage); - if (PageUptodate(page)) - SetPageUptodate(newpage); - if (TestClearPageActive(page)) { - VM_BUG_ON_PAGE(PageUnevictable(page), page); - SetPageActive(newpage); - } else if (TestClearPageUnevictable(page)) - SetPageUnevictable(newpage); - if (PageWorkingset(page)) - SetPageWorkingset(newpage); - if (PageChecked(page)) - SetPageChecked(newpage); - if (PageMappedToDisk(page)) - SetPageMappedToDisk(newpage); + if (folio_test_error(folio)) + folio_set_error(newfolio); + if (folio_test_referenced(folio)) + folio_set_referenced(newfolio); + if (folio_test_uptodate(folio)) + folio_mark_uptodate(newfolio); + if (folio_test_clear_active(folio)) { + VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio); + folio_set_active(newfolio); + } else if (folio_test_clear_unevictable(folio)) + folio_set_unevictable(newfolio); + if (folio_test_workingset(folio)) + folio_set_workingset(newfolio); + if (folio_test_checked(folio)) + folio_set_checked(newfolio); + if (folio_test_mappedtodisk(folio)) + folio_set_mappedtodisk(newfolio); /* Move dirty on pages not done by folio_migrate_mapping() */ - if (PageDirty(page)) - SetPageDirty(newpage); + if (folio_test_dirty(folio)) + folio_set_dirty(newfolio); - if (page_is_young(page)) - set_page_young(newpage); - if (page_is_idle(page)) - set_page_idle(newpage); + if (folio_test_young(folio)) + folio_set_young(newfolio); + if (folio_test_idle(folio)) + folio_set_idle(newfolio); /* * Copy NUMA information to the new page, to prevent over-eager * future migrations of this same page. */ - cpupid = page_cpupid_xchg_last(page, -1); - page_cpupid_xchg_last(newpage, cpupid); + cpupid = page_cpupid_xchg_last(&folio->page, -1); + page_cpupid_xchg_last(&newfolio->page, cpupid); - ksm_migrate_page(newpage, page); + folio_migrate_ksm(newfolio, folio); /* * Please do not reorder this without considering how mm/ksm.c's * get_ksm_page() depends upon ksm_migrate_page() and PageSwapCache(). */ - if (PageSwapCache(page)) - ClearPageSwapCache(page); - ClearPagePrivate(page); + if (folio_test_swapcache(folio)) + folio_clear_swapcache(folio); + folio_clear_private(folio); /* page->private contains hugetlb specific flags */ - if (!PageHuge(page)) - set_page_private(page, 0); + if (!folio_test_hugetlb(folio)) + folio->private = NULL; /* * If any waiters have accumulated on the new page then * wake them up. */ - if (PageWriteback(newpage)) - end_page_writeback(newpage); + if (folio_test_writeback(newfolio)) + folio_end_writeback(newfolio); /* * PG_readahead shares the same bit with PG_reclaim. The above * end_page_writeback() may clear PG_readahead mistakenly, so set the * bit after that. 
*/ - if (PageReadahead(page)) - SetPageReadahead(newpage); + if (folio_test_readahead(folio)) + folio_set_readahead(newfolio); - copy_page_owner(page, newpage); + folio_copy_owner(folio, newfolio); - if (!PageHuge(page)) + if (!folio_test_hugetlb(folio)) mem_cgroup_migrate(folio, newfolio); } -EXPORT_SYMBOL(migrate_page_states); +EXPORT_SYMBOL(folio_migrate_flags); void migrate_page_copy(struct page *newpage, struct page *page) { @@ -654,7 +652,7 @@ int migrate_page(struct address_space *mapping, if (mode != MIGRATE_SYNC_NO_COPY) migrate_page_copy(newpage, page); else - migrate_page_states(newpage, page); + folio_migrate_flags(newfolio, folio); return MIGRATEPAGE_SUCCESS; } EXPORT_SYMBOL(migrate_page); diff --git a/mm/page_owner.c b/mm/page_owner.c index f51a57e92aa3..23bfb074ca3f 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -210,10 +210,10 @@ void __split_page_owner(struct page *page, unsigned int nr) } } -void __copy_page_owner(struct page *oldpage, struct page *newpage) +void __folio_copy_owner(struct folio *newfolio, struct folio *old) { - struct page_ext *old_ext = lookup_page_ext(oldpage); - struct page_ext *new_ext = lookup_page_ext(newpage); + struct page_ext *old_ext = lookup_page_ext(&old->page); + struct page_ext *new_ext = lookup_page_ext(&newfolio->page); struct page_owner *old_page_owner, *new_page_owner; if (unlikely(!old_ext || !new_ext)) @@ -231,11 +231,11 @@ void __copy_page_owner(struct page *oldpage, struct page *newpage) new_page_owner->free_ts_nsec = old_page_owner->ts_nsec; /* - * We don't clear the bit on the oldpage as it's going to be freed + * We don't clear the bit on the old folio as it's going to be freed * after migration. Until then, the info can be useful in case of * a bug, and the overall stats will be off a bit only temporarily. * Also, migrate_misplaced_transhuge_page() can still fail the - * migration and then we want the oldpage to retain the info. But + * migration and then we want the old folio to retain the info. But * in that case we also don't need to explicitly clear the info from * the new page, which will be freed. 
*/ From patchwork Thu Jul 15 03:35:48 2021
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 062/138] mm/migrate: Add folio_migrate_copy() Date: Thu, 15 Jul 2021 04:35:48 +0100 Message-Id: <20210715033704.692967-63-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
This is the folio equivalent of migrate_page_copy(), which is retained as a wrapper for filesystems which are not yet converted to folios. Also convert copy_huge_page() to folio_copy(). Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Vlastimil Babka --- include/linux/migrate.h | 1 + include/linux/mm.h | 2 +- mm/folio-compat.c | 6 ++++++ mm/hugetlb.c | 2 +- mm/migrate.c | 14 +++++--------- mm/util.c | 6 +++--- 6 files changed, 17 insertions(+), 14 deletions(-) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index ba0a554b3eae..6a01de9faff5 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -52,6 +52,7 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping, extern int migrate_page_move_mapping(struct address_space *mapping, struct page *newpage, struct page *page, int extra_count); void folio_migrate_flags(struct folio *newfolio, struct folio *folio); +void folio_migrate_copy(struct folio *newfolio, struct folio *folio); int folio_migrate_mapping(struct address_space *mapping, struct folio *newfolio, struct folio *folio, int extra_count); #else diff --git a/include/linux/mm.h b/include/linux/mm.h index deb0f5efaa65..23276330ef4f 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -911,7 +911,7 @@ void __put_page(struct page *page); void put_pages_list(struct list_head *pages); void split_page(struct page *page, unsigned int order); -void copy_huge_page(struct page *dst, struct page *src); +void folio_copy(struct folio *dst, struct folio *src); /* * Compound pages have a destructor function.
Provide a diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 3f00ad92d1ff..2ccd8f213fc4 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -64,4 +64,10 @@ void migrate_page_states(struct page *newpage, struct page *page) folio_migrate_flags(page_folio(newpage), page_folio(page)); } EXPORT_SYMBOL(migrate_page_states); + +void migrate_page_copy(struct page *newpage, struct page *page) +{ + folio_migrate_copy(page_folio(newpage), page_folio(page)); +} +EXPORT_SYMBOL(migrate_page_copy); #endif diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 924553aa8f78..b46f9d09aa94 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5200,7 +5200,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, *pagep = NULL; goto out; } - copy_huge_page(page, *pagep); + folio_copy(page_folio(page), page_folio(*pagep)); put_page(*pagep); *pagep = NULL; } diff --git a/mm/migrate.c b/mm/migrate.c index a86be2bfc9a1..36cdae0a1235 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -613,16 +613,12 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) } EXPORT_SYMBOL(folio_migrate_flags); -void migrate_page_copy(struct page *newpage, struct page *page) +void folio_migrate_copy(struct folio *newfolio, struct folio *folio) { - if (PageHuge(page) || PageTransHuge(page)) - copy_huge_page(newpage, page); - else - copy_highpage(newpage, page); - - migrate_page_states(newpage, page); + folio_copy(newfolio, folio); + folio_migrate_flags(newfolio, folio); } -EXPORT_SYMBOL(migrate_page_copy); +EXPORT_SYMBOL(folio_migrate_copy); /************************************************************ * Migration functions @@ -650,7 +646,7 @@ int migrate_page(struct address_space *mapping, return rc; if (mode != MIGRATE_SYNC_NO_COPY) - migrate_page_copy(newpage, page); + folio_migrate_copy(newfolio, folio); else folio_migrate_flags(newfolio, folio); return MIGRATEPAGE_SUCCESS; diff --git a/mm/util.c b/mm/util.c index 149537120a91..904a75612307 100644 --- a/mm/util.c +++ b/mm/util.c @@ -728,13 +728,13 @@ int __page_mapcount(struct page *page) } EXPORT_SYMBOL_GPL(__page_mapcount); -void copy_huge_page(struct page *dst, struct page *src) +void folio_copy(struct folio *dst, struct folio *src) { - unsigned i, nr = compound_nr(src); + unsigned i, nr = folio_nr_pages(src); for (i = 0; i < nr; i++) { cond_resched(); - copy_highpage(nth_page(dst, i), nth_page(src, i)); + copy_highpage(folio_page(dst, i), folio_page(src, i)); } } From patchwork Thu Jul 15 03:35:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378677 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 11FEEC07E96 for ; Thu, 15 Jul 2021 04:29:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B627C61360 for ; Thu, 15 Jul 2021 04:29:01 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B627C61360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; 
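A condensed sketch of how the copy phase composes after this patch, modelled on the migrate_page() hunk above; the wrapper function itself is illustrative and not part of the patch. Because folio_copy() calls cond_resched() between subpages, the full copy is only usable when the migration mode allows blocking:

static void example_copy_phase(struct folio *newfolio, struct folio *folio,
		enum migrate_mode mode)
{
	if (mode != MIGRATE_SYNC_NO_COPY)
		folio_migrate_copy(newfolio, folio);  /* folio_copy() + folio_migrate_flags() */
	else
		folio_migrate_flags(newfolio, folio); /* flags only; the caller copies data */
}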
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 063/138] mm/writeback: Rename __add_wb_stat() to wb_stat_mod() Date: Thu, 15 Jul 2021 04:35:49 +0100 Message-Id: <20210715033704.692967-64-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
Make this look like the newly renamed vmstat functions.
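A short sketch of the renamed helper in use; the surrounding function is hypothetical. The +1/-1 wrappers keep their behaviour, and batched callers can now pass an arbitrary delta:

static void example_wb_accounting(struct bdi_writeback *wb, long nr)
{
	wb_stat_mod(wb, WB_WRITEBACK, 1);  /* what inc_wb_stat() expands to */
	wb_stat_mod(wb, WB_WRITEBACK, -1); /* what dec_wb_stat() expands to */
	wb_stat_mod(wb, WB_WRITTEN, nr);   /* new: account nr pages in one call */
}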
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/backing-dev.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h index 44df4fcef65c..a852876bb6e2 100644 --- a/include/linux/backing-dev.h +++ b/include/linux/backing-dev.h @@ -64,7 +64,7 @@ static inline bool bdi_has_dirty_io(struct backing_dev_info *bdi) return atomic_long_read(&bdi->tot_write_bandwidth); } -static inline void __add_wb_stat(struct bdi_writeback *wb, +static inline void wb_stat_mod(struct bdi_writeback *wb, enum wb_stat_item item, s64 amount) { percpu_counter_add_batch(&wb->stat[item], amount, WB_STAT_BATCH); @@ -72,12 +72,12 @@ static inline void __add_wb_stat(struct bdi_writeback *wb, static inline void inc_wb_stat(struct bdi_writeback *wb, enum wb_stat_item item) { - __add_wb_stat(wb, item, 1); + wb_stat_mod(wb, item, 1); } static inline void dec_wb_stat(struct bdi_writeback *wb, enum wb_stat_item item) { - __add_wb_stat(wb, item, -1); + wb_stat_mod(wb, item, -1); } static inline s64 wb_stat(struct bdi_writeback *wb, enum wb_stat_item item) From patchwork Thu Jul 15 03:35:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378679 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D51F8C07E96 for ; Thu, 15 Jul 2021 04:29:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 89EEE6117A for ; Thu, 15 Jul 2021 04:29:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 89EEE6117A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D84ED6B015C; Thu, 15 Jul 2021 00:29:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D35A26B015E; Thu, 15 Jul 2021 00:29:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BFCD16B015F; Thu, 15 Jul 2021 00:29:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0050.hostedemail.com [216.40.44.50]) by kanga.kvack.org (Postfix) with ESMTP id 96FDA6B015C for ; Thu, 15 Jul 2021 00:29:41 -0400 (EDT) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id B71388245571 for ; Thu, 15 Jul 2021 04:29:39 +0000 (UTC) X-FDA: 78363543678.08.019E027 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf03.hostedemail.com (Postfix) with ESMTP id 758843000190 for ; Thu, 15 Jul 2021 04:29:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: 
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jan Kara Subject: [PATCH v14 064/138] flex_proportions: Allow N events instead of 1 Date: Thu, 15 Jul 2021 04:35:50 +0100 Message-Id: <20210715033704.692967-65-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
When batching events (such as writing back N pages in a single I/O), it is better to do one flex_proportion operation instead of N. There is only one caller of __fprop_inc_percpu_max(), and it's the one we're going to change in the next patch, so rename it instead of adding a compatibility wrapper.
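A sketch of the intended call pattern (the caller below is hypothetical): completing nr_pages of writeback becomes a single proportion update, under the same IRQ discipline that fprop_inc_percpu() uses.

static void example_fprop_batch(struct fprop_global *p,
		struct fprop_local_percpu *pl, long nr_pages)
{
	unsigned long flags;

	local_irq_save(flags);
	__fprop_add_percpu(p, pl, nr_pages); /* one operation instead of nr_pages */
	local_irq_restore(flags);
}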
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Jan Kara --- include/linux/flex_proportions.h | 9 +++++---- lib/flex_proportions.c | 28 +++++++++++++++++++--------- mm/page-writeback.c | 4 ++-- 3 files changed, 26 insertions(+), 15 deletions(-) diff --git a/include/linux/flex_proportions.h b/include/linux/flex_proportions.h index c12df59d3f5f..3e378b1fb0bc 100644 --- a/include/linux/flex_proportions.h +++ b/include/linux/flex_proportions.h @@ -83,9 +83,10 @@ struct fprop_local_percpu { int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp); void fprop_local_destroy_percpu(struct fprop_local_percpu *pl); -void __fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl); -void __fprop_inc_percpu_max(struct fprop_global *p, struct fprop_local_percpu *pl, - int max_frac); +void __fprop_add_percpu(struct fprop_global *p, struct fprop_local_percpu *pl, + long nr); +void __fprop_add_percpu_max(struct fprop_global *p, + struct fprop_local_percpu *pl, int max_frac, long nr); void fprop_fraction_percpu(struct fprop_global *p, struct fprop_local_percpu *pl, unsigned long *numerator, unsigned long *denominator); @@ -96,7 +97,7 @@ void fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl) unsigned long flags; local_irq_save(flags); - __fprop_inc_percpu(p, pl); + __fprop_add_percpu(p, pl, 1); local_irq_restore(flags); } diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c index 451543937524..53e7eb1dd76c 100644 --- a/lib/flex_proportions.c +++ b/lib/flex_proportions.c @@ -217,11 +217,12 @@ static void fprop_reflect_period_percpu(struct fprop_global *p, } /* Event of type pl happened */ -void __fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl) +void __fprop_add_percpu(struct fprop_global *p, struct fprop_local_percpu *pl, + long nr) { fprop_reflect_period_percpu(p, pl); - percpu_counter_add_batch(&pl->events, 1, PROP_BATCH); - percpu_counter_add(&p->events, 1); + percpu_counter_add_batch(&pl->events, nr, PROP_BATCH); + percpu_counter_add(&p->events, nr); } void fprop_fraction_percpu(struct fprop_global *p, @@ -253,20 +254,29 @@ void fprop_fraction_percpu(struct fprop_global *p, } /* - * Like __fprop_inc_percpu() except that event is counted only if the given + * Like __fprop_add_percpu() except that event is counted only if the given * type has fraction smaller than @max_frac/FPROP_FRAC_BASE */ -void __fprop_inc_percpu_max(struct fprop_global *p, - struct fprop_local_percpu *pl, int max_frac) +void __fprop_add_percpu_max(struct fprop_global *p, + struct fprop_local_percpu *pl, int max_frac, long nr) { if (unlikely(max_frac < FPROP_FRAC_BASE)) { unsigned long numerator, denominator; + s64 tmp; fprop_fraction_percpu(p, pl, &numerator, &denominator); - if (numerator > - (((u64)denominator) * max_frac) >> FPROP_FRAC_SHIFT) + /* Adding 'nr' to fraction exceeds max_frac/FPROP_FRAC_BASE? */ + tmp = (u64)denominator * max_frac - + ((u64)numerator << FPROP_FRAC_SHIFT); + if (tmp < 0) { + /* Maximum fraction already exceeded? 
*/ return; + } else if (tmp < nr * (FPROP_FRAC_BASE - max_frac)) { + /* Add just enough for the fraction to saturate */ + nr = div_u64(tmp + FPROP_FRAC_BASE - max_frac - 1, + FPROP_FRAC_BASE - max_frac); + } } - __fprop_inc_percpu(p, pl); + __fprop_add_percpu(p, pl, nr); } diff --git a/mm/page-writeback.c b/mm/page-writeback.c index b34278d05395..f55f2ebdd9a9 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -566,8 +566,8 @@ static void wb_domain_writeout_inc(struct wb_domain *dom, struct fprop_local_percpu *completions, unsigned int max_prop_frac) { - __fprop_inc_percpu_max(&dom->completions, completions, - max_prop_frac); + __fprop_add_percpu_max(&dom->completions, completions, + max_prop_frac, 1); /* First event after period switching was turned off? */ if (unlikely(!dom->period_time)) { /* From patchwork Thu Jul 15 03:35:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378681 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04735C47E48 for ; Thu, 15 Jul 2021 04:30:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AEEB861360 for ; Thu, 15 Jul 2021 04:30:02 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AEEB861360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 064D06B015E; Thu, 15 Jul 2021 00:30:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 03B506B0160; Thu, 15 Jul 2021 00:30:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E1E9F6B0161; Thu, 15 Jul 2021 00:30:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0009.hostedemail.com [216.40.44.9]) by kanga.kvack.org (Postfix) with ESMTP id BDE1B6B015E for ; Thu, 15 Jul 2021 00:30:02 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id A3D9918484C8C for ; Thu, 15 Jul 2021 04:30:01 +0000 (UTC) X-FDA: 78363544602.05.212CA9F Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id 466795012D4F for ; Thu, 15 Jul 2021 04:30:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=giKDflwn+RmAPKF9gNtahm1hPB6l329Z4O/Dgj1Q+Z8=; b=CEqWNbQgkocoT/XSJy7r3ZluiU sr2T1N+bQY25ZBs9quL4WbiPCW100qmOJ+O576mHt/YpVTddZBw6YVjyU0Wp8t8hmC5O5RtYJZzSQ nGx1g+ahpQjdYGmSEwkN1MBcqFrj+30wfDUrVazjyd9xBeyRgP+79ENPVwvMyMsX0Ghtt3Mx6eKaz Jwkzci2zZvmRcVg5Lm0SSYNCj+Wetd0bLkj5Eo6KuCVQArKTUuTXpFMkCcyYWdYrVpzj6Y6wUXUZh 
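The clamping arithmetic in __fprop_add_percpu_max() above rewards a worked example. Adding n events bumps both the local and the global event counters, so the new fraction is (num + n) / (den + n) and must stay at or below max_frac / FPROP_FRAC_BASE; solving for n gives n <= (den * max_frac - (num << FPROP_FRAC_SHIFT)) / (FPROP_FRAC_BASE - max_frac), which is exactly the tmp and the divisor in the patch. A standalone userspace model, assuming the kernel's FPROP_FRAC_SHIFT of 10 (the sample numbers are made up):

#include <stdio.h>
#include <stdint.h>

#define FPROP_FRAC_SHIFT 10
#define FPROP_FRAC_BASE (1UL << FPROP_FRAC_SHIFT)

int main(void)
{
	uint64_t num = 400, den = 1000; /* this type owns 400 of 1000 events */
	unsigned int max_frac = 512;    /* cap the type at 1/2 of all events */
	int64_t nr = 500;               /* events we would like to add */

	int64_t tmp = (int64_t)(den * max_frac) - (int64_t)(num << FPROP_FRAC_SHIFT);
	if (tmp < 0) {
		nr = 0; /* already past the cap: count nothing */
	} else if (tmp < nr * (int64_t)(FPROP_FRAC_BASE - max_frac)) {
		/* add just enough to saturate, rounding up like the patch */
		nr = (tmp + (int64_t)(FPROP_FRAC_BASE - max_frac) - 1) /
		     (int64_t)(FPROP_FRAC_BASE - max_frac);
	}
	/* prints: nr = 200, fraction = 0.5 -- exactly at the cap */
	printf("nr = %lld, fraction = %g\n", (long long)nr,
	       (double)(num + (uint64_t)nr) / (double)(den + (uint64_t)nr));
	return 0;
}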
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig , Jan Kara Subject: [PATCH v14 065/138] mm/writeback: Change __wb_writeout_inc() to __wb_writeout_add() Date: Thu, 15 Jul 2021 04:35:51 +0100 Message-Id: <20210715033704.692967-66-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
Allow for accounting N pages at once instead of one page at a time. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Jan Kara Reviewed-by: David Howells Acked-by: Vlastimil Babka --- mm/page-writeback.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index f55f2ebdd9a9..e542ea37d605 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -562,12 +562,12 @@ static unsigned long wp_next_time(unsigned long cur_time) return cur_time; } -static void wb_domain_writeout_inc(struct wb_domain *dom, +static void wb_domain_writeout_add(struct wb_domain *dom, struct fprop_local_percpu *completions, - unsigned int max_prop_frac) + unsigned int max_prop_frac, long nr) { __fprop_add_percpu_max(&dom->completions, completions, - max_prop_frac, 1); + max_prop_frac, nr); /* First event after period switching was turned off? */ if (unlikely(!dom->period_time)) { /* @@ -585,18 +585,18 @@ static void wb_domain_writeout_inc(struct wb_domain *dom, * Increment @wb's writeout completion count and the global writeout * completion count. Called from test_clear_page_writeback().
*/ -static inline void __wb_writeout_inc(struct bdi_writeback *wb) +static inline void __wb_writeout_add(struct bdi_writeback *wb, long nr) { struct wb_domain *cgdom; - inc_wb_stat(wb, WB_WRITTEN); - wb_domain_writeout_inc(&global_wb_domain, &wb->completions, - wb->bdi->max_prop_frac); + wb_stat_mod(wb, WB_WRITTEN, nr); + wb_domain_writeout_add(&global_wb_domain, &wb->completions, + wb->bdi->max_prop_frac, nr); cgdom = mem_cgroup_wb_domain(wb); if (cgdom) - wb_domain_writeout_inc(cgdom, wb_memcg_completions(wb), - wb->bdi->max_prop_frac); + wb_domain_writeout_add(cgdom, wb_memcg_completions(wb), + wb->bdi->max_prop_frac, nr); } void wb_writeout_inc(struct bdi_writeback *wb) @@ -604,7 +604,7 @@ void wb_writeout_inc(struct bdi_writeback *wb) unsigned long flags; local_irq_save(flags); - __wb_writeout_inc(wb); + __wb_writeout_add(wb, 1); local_irq_restore(flags); } EXPORT_SYMBOL_GPL(wb_writeout_inc); @@ -2751,7 +2751,7 @@ int test_clear_page_writeback(struct page *page) struct bdi_writeback *wb = inode_to_wb(inode); dec_wb_stat(wb, WB_WRITEBACK); - __wb_writeout_inc(wb); + __wb_writeout_add(wb, 1); } } From patchwork Thu Jul 15 03:35:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378683 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68A5FC07E96 for ; Thu, 15 Jul 2021 04:30:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 093146136E for ; Thu, 15 Jul 2021 04:30:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 093146136E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4D0876B0160; Thu, 15 Jul 2021 00:30:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 480806B0162; Thu, 15 Jul 2021 00:30:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 322156B0163; Thu, 15 Jul 2021 00:30:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0002.hostedemail.com [216.40.44.2]) by kanga.kvack.org (Postfix) with ESMTP id 0FBBB6B0160 for ; Thu, 15 Jul 2021 00:30:45 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 01DC7180C44B4 for ; Thu, 15 Jul 2021 04:30:44 +0000 (UTC) X-FDA: 78363546408.25.B672020 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id AF2FA90000B2 for ; Thu, 15 Jul 2021 04:30:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; 
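The payoff, sketched (reading aid only: __wb_writeout_add() is static to mm/page-writeback.c, so a caller like this cannot exist outside that file): finishing writeback on a large folio accounts every page through the proportions code in one call.

static void example_writeout_done(struct bdi_writeback *wb, struct folio *folio)
{
	/* One update for the whole folio instead of one per page. */
	__wb_writeout_add(wb, folio_nr_pages(folio));
}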
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 066/138] mm/writeback: Add __folio_end_writeback() Date: Thu, 15 Jul 2021 04:35:52 +0100 Message-Id: <20210715033704.692967-67-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
test_clear_page_writeback() is actually an mm-internal function, although it's named as if it's a pagecache function. Move it to mm/internal.h, rename it to __folio_end_writeback() and change the return type to bool. The conversion from page to folio is mostly about accounting the number of pages being written back, although it does eliminate a couple of calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/page-flags.h | 1 - mm/filemap.c | 2 +- mm/internal.h | 1 + mm/page-writeback.c | 29 +++++++++++++++-------------- 4 files changed, 17 insertions(+), 16 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index ddb660688086..6f9d1f26b1ef 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -655,7 +655,6 @@ static __always_inline void SetPageUptodate(struct page *page) CLEARPAGEFLAG(Uptodate, uptodate, PF_NO_TAIL) -int test_clear_page_writeback(struct page *page); int __test_set_page_writeback(struct page *page, bool keep_write); #define test_set_page_writeback(page) \ diff --git a/mm/filemap.c b/mm/filemap.c index 5c4e3185ecb3..a74c69a938ab 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1535,7 +1535,7 @@ void folio_end_writeback(struct folio *folio) * reused before the folio_wake().
*/ folio_get(folio); - if (!test_clear_page_writeback(&folio->page)) + if (!__folio_end_writeback(folio)) BUG(); smp_mb__after_atomic(); diff --git a/mm/internal.h b/mm/internal.h index fa31a7f0ed79..08e8a28994d1 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -43,6 +43,7 @@ static inline void *folio_raw_mapping(struct folio *folio) vm_fault_t do_swap_page(struct vm_fault *vmf); void folio_rotate_reclaimable(struct folio *folio); +bool __folio_end_writeback(struct folio *folio); void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma, unsigned long floor, unsigned long ceiling); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index e542ea37d605..8d5d7921b157 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -583,7 +583,7 @@ static void wb_domain_writeout_add(struct wb_domain *dom, /* * Increment @wb's writeout completion count and the global writeout - * completion count. Called from test_clear_page_writeback(). + * completion count. Called from __folio_end_writeback(). */ static inline void __wb_writeout_add(struct bdi_writeback *wb, long nr) { @@ -2731,27 +2731,28 @@ int clear_page_dirty_for_io(struct page *page) } EXPORT_SYMBOL(clear_page_dirty_for_io); -int test_clear_page_writeback(struct page *page) +bool __folio_end_writeback(struct folio *folio) { - struct address_space *mapping = page_mapping(page); - int ret; + long nr = folio_nr_pages(folio); + struct address_space *mapping = folio_mapping(folio); + bool ret; - lock_page_memcg(page); + folio_memcg_lock(folio); if (mapping && mapping_use_writeback_tags(mapping)) { struct inode *inode = mapping->host; struct backing_dev_info *bdi = inode_to_bdi(inode); unsigned long flags; xa_lock_irqsave(&mapping->i_pages, flags); - ret = TestClearPageWriteback(page); + ret = folio_test_clear_writeback(folio); if (ret) { - __xa_clear_mark(&mapping->i_pages, page_index(page), + __xa_clear_mark(&mapping->i_pages, folio_index(folio), PAGECACHE_TAG_WRITEBACK); if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) { struct bdi_writeback *wb = inode_to_wb(inode); - dec_wb_stat(wb, WB_WRITEBACK); - __wb_writeout_add(wb, 1); + wb_stat_mod(wb, WB_WRITEBACK, -nr); + __wb_writeout_add(wb, nr); } } @@ -2761,14 +2762,14 @@ int test_clear_page_writeback(struct page *page) xa_unlock_irqrestore(&mapping->i_pages, flags); } else { - ret = TestClearPageWriteback(page); + ret = folio_test_clear_writeback(folio); } if (ret) { - dec_lruvec_page_state(page, NR_WRITEBACK); - dec_zone_page_state(page, NR_ZONE_WRITE_PENDING); - inc_node_page_state(page, NR_WRITTEN); + lruvec_stat_mod_folio(folio, NR_WRITEBACK, -nr); + zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr); + node_stat_mod_folio(folio, NR_WRITTEN, nr); } - unlock_page_memcg(page); + folio_memcg_unlock(folio); return ret; } From patchwork Thu Jul 15 03:35:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378685 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F3ECC07E96 for ; Thu, 15 Jul 2021 04:31:18 +0000 (UTC) Received: 
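The accounting consequence is easiest to see as a delta sketch, lifted from the hunks above (the wrapper function is illustrative): for an order-9 folio nr is 512, so each counter moves by the folio's full page count in one call, where the old page-based code always moved it by one.

static void example_end_writeback_stats(struct folio *folio,
		struct bdi_writeback *wb)
{
	long nr = folio_nr_pages(folio); /* 512 for a 2MB folio on x86 */

	wb_stat_mod(wb, WB_WRITEBACK, -nr);
	__wb_writeout_add(wb, nr);
	lruvec_stat_mod_folio(folio, NR_WRITEBACK, -nr);
	zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
	node_stat_mod_folio(folio, NR_WRITTEN, nr);
}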
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 067/138] mm/writeback: Add folio_start_writeback() Date: Thu, 15 Jul 2021 04:35:53 +0100 Message-Id: <20210715033704.692967-68-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
Rename set_page_writeback() to folio_start_writeback() to match folio_end_writeback(). Do not bother with wrappers that return void; callers are perfectly capable of ignoring return values.
Add wrappers for set_page_writeback(), set_page_writeback_keepwrite() and test_set_page_writeback() for compatibility with existing filesystems. The main advantage of this patch is getting the statistics right, although it does eliminate a couple of calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/page-flags.h | 19 +++++++++--------- mm/folio-compat.c | 6 ++++++ mm/page-writeback.c | 40 ++++++++++++++++++++------------------ 3 files changed, 37 insertions(+), 28 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 6f9d1f26b1ef..54c4af35c628 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -655,21 +655,22 @@ static __always_inline void SetPageUptodate(struct page *page) CLEARPAGEFLAG(Uptodate, uptodate, PF_NO_TAIL) -int __test_set_page_writeback(struct page *page, bool keep_write); +bool __folio_start_writeback(struct folio *folio, bool keep_write); +bool set_page_writeback(struct page *page); -#define test_set_page_writeback(page) \ - __test_set_page_writeback(page, false) -#define test_set_page_writeback_keepwrite(page) \ - __test_set_page_writeback(page, true) +#define folio_start_writeback(folio) \ + __folio_start_writeback(folio, false) +#define folio_start_writeback_keepwrite(folio) \ + __folio_start_writeback(folio, true) -static inline void set_page_writeback(struct page *page) +static inline void set_page_writeback_keepwrite(struct page *page) { - test_set_page_writeback(page); + folio_start_writeback_keepwrite(page_folio(page)); } -static inline void set_page_writeback_keepwrite(struct page *page) +static inline bool test_set_page_writeback(struct page *page) { - test_set_page_writeback_keepwrite(page); + return set_page_writeback(page); } __PAGEFLAG(Head, head, PF_ANY) CLEARPAGEFLAG(Head, head, PF_ANY) diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 2ccd8f213fc4..10ce5582d869 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -71,3 +71,9 @@ void migrate_page_copy(struct page *newpage, struct page *page) } EXPORT_SYMBOL(migrate_page_copy); #endif + +bool set_page_writeback(struct page *page) +{ + return folio_start_writeback(page_folio(page)); +} +EXPORT_SYMBOL(set_page_writeback); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 8d5d7921b157..0336273154fb 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2773,21 +2773,23 @@ bool __folio_end_writeback(struct folio *folio) return ret; } -int __test_set_page_writeback(struct page *page, bool keep_write) +bool __folio_start_writeback(struct folio *folio, bool keep_write) { - struct address_space *mapping = page_mapping(page); - int ret, access_ret; + long nr = folio_nr_pages(folio); + struct address_space *mapping = folio_mapping(folio); + bool ret; + int access_ret; - lock_page_memcg(page); + folio_memcg_lock(folio); if (mapping && mapping_use_writeback_tags(mapping)) { - XA_STATE(xas, &mapping->i_pages, page_index(page)); + XA_STATE(xas, &mapping->i_pages, folio_index(folio)); struct inode *inode = mapping->host; struct backing_dev_info *bdi = inode_to_bdi(inode); unsigned long flags; xas_lock_irqsave(&xas, flags); xas_load(&xas); - ret = TestSetPageWriteback(page); + ret = folio_test_set_writeback(folio); if (!ret) { bool on_wblist; @@ -2796,40 +2798,40 @@ int __test_set_page_writeback(struct page *page, bool keep_write) xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK); if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT)
- inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK); + wb_stat_mod(inode_to_wb(inode), WB_WRITEBACK, + nr); /* - * We can come through here when swapping anonymous - * pages, so we don't necessarily have an inode to track - * for sync. + * We can come through here when swapping + * anonymous folios, so we don't necessarily + * have an inode to track for sync. */ if (mapping->host && !on_wblist) sb_mark_inode_writeback(mapping->host); } - if (!PageDirty(page)) + if (!folio_test_dirty(folio)) xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY); if (!keep_write) xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE); xas_unlock_irqrestore(&xas, flags); } else { - ret = TestSetPageWriteback(page); + ret = folio_test_set_writeback(folio); } if (!ret) { - inc_lruvec_page_state(page, NR_WRITEBACK); - inc_zone_page_state(page, NR_ZONE_WRITE_PENDING); + lruvec_stat_mod_folio(folio, NR_WRITEBACK, nr); + zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, nr); } - unlock_page_memcg(page); - access_ret = arch_make_page_accessible(page); + folio_memcg_unlock(folio); + access_ret = arch_make_folio_accessible(folio); /* * If writeback has been triggered on a page that cannot be made * accessible, it is too late to recover here. */ - VM_BUG_ON_PAGE(access_ret != 0, page); + VM_BUG_ON_FOLIO(access_ret != 0, folio); return ret; - } -EXPORT_SYMBOL(__test_set_page_writeback); +EXPORT_SYMBOL(__folio_start_writeback); /** * folio_wait_writeback - Wait for a folio to finish writeback. From patchwork Thu Jul 15 03:35:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378687 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6102C1B08C for ; Thu, 15 Jul 2021 04:32:05 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4133961362 for ; Thu, 15 Jul 2021 04:32:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4133961362 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8C6388D004A; Thu, 15 Jul 2021 00:32:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 876018D0049; Thu, 15 Jul 2021 00:32:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 717718D004A; Thu, 15 Jul 2021 00:32:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0034.hostedemail.com [216.40.44.34]) by kanga.kvack.org (Postfix) with ESMTP id 4E7B48D0049 for ; Thu, 15 Jul 2021 00:32:05 -0400 (EDT) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 4519B1F36D for ; Thu, 15 Jul 2021 04:32:04 +0000 (UTC) X-FDA: 78363549768.04.462C7F2 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf09.hostedemail.com (Postfix) with ESMTP id ED9653000103 
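For filesystem authors, the two spellings now relate as sketched below; the chooser function is hypothetical, and for a single-page folio the page and folio forms are equivalent (the page forms live on as wrappers in mm/folio-compat.c and page-flags.h).

static void example_start_writeback(struct page *page, bool keepwrite)
{
	struct folio *folio = page_folio(page);

	if (keepwrite)
		folio_start_writeback_keepwrite(folio); /* keeps the TOWRITE tag */
	else
		folio_start_writeback(folio); /* returns bool: already under writeback? */
}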
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 068/138] mm/writeback: Add folio_mark_dirty() Date: Thu, 15 Jul 2021 04:35:54 +0100 Message-Id: <20210715033704.692967-69-willy@infradead.org> In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org>
Reimplement set_page_dirty() as a wrapper around folio_mark_dirty(). There is no change to filesystems as they were already being called with the compound_head of the page being marked dirty. We avoid several calls to compound_head(), both statically (by using folio_test_dirty() instead of PageDirty()) and dynamically (by calling folio_mapping() instead of page_mapping()). Also return bool instead of int to show the range of values actually returned, and add kernel-doc.
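A usage sketch (hypothetical caller): code that already has a folio in hand can dirty it directly and act on the return value, while set_page_dirty() survives as the one-line compat wrapper added to mm/folio-compat.c.

static void example_modify(struct folio *folio)
{
	/* true only if the folio went from clean to dirty */
	if (folio_mark_dirty(folio))
		pr_debug("newly dirtied %ld pages\n", folio_nr_pages(folio));
}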
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/mm.h  |  3 ++-
 mm/folio-compat.c   |  6 ++++++
 mm/page-writeback.c | 35 +++++++++++++++++++----------------
 3 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 23276330ef4f..43c1b5731c7f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2005,7 +2005,8 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 			struct page *page);
 void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
-int set_page_dirty(struct page *page);
+bool folio_mark_dirty(struct folio *folio);
+bool set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
 void __cancel_dirty_page(struct page *page);
 static inline void cancel_dirty_page(struct page *page)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 10ce5582d869..2c2b3917b5dc 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -77,3 +77,9 @@ bool set_page_writeback(struct page *page)
 	return folio_start_writeback(page_folio(page));
 }
 EXPORT_SYMBOL(set_page_writeback);
+
+bool set_page_dirty(struct page *page)
+{
+	return folio_mark_dirty(page_folio(page));
+}
+EXPORT_SYMBOL(set_page_dirty);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 0336273154fb..d7c0cad6a57f 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2564,18 +2564,21 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 			struct page *page)
 }
 EXPORT_SYMBOL(redirty_page_for_writepage);

-/*
- * Dirty a page.
+/**
+ * folio_mark_dirty - Mark a folio as being modified.
+ * @folio: The folio.
+ *
+ * For folios with a mapping this should be done under the page lock
+ * for the benefit of asynchronous memory errors who prefer a consistent
+ * dirty state. This rule can be broken in some special cases,
+ * but should be better not to.
 *
- * For pages with a mapping this should be done under the page lock for the
- * benefit of asynchronous memory errors who prefer a consistent dirty state.
- * This rule can be broken in some special cases, but should be better not to.
+ * Return: True if the folio was newly dirtied, false if it was already dirty.
 */
-int set_page_dirty(struct page *page)
+bool folio_mark_dirty(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping(page);
+	struct address_space *mapping = folio_mapping(folio);

-	page = compound_head(page);
 	if (likely(mapping)) {
 		/*
 		 * readahead/lru_deactivate_page could remain
@@ -2587,17 +2590,17 @@ int set_page_dirty(struct page *page)
 		 * it will confuse readahead and make it restart the size rampup
 		 * process. But it's a trivial problem.
 		 */
-		if (PageReclaim(page))
-			ClearPageReclaim(page);
-		return mapping->a_ops->set_page_dirty(page);
+		if (folio_test_reclaim(folio))
+			folio_clear_reclaim(folio);
+		return mapping->a_ops->set_page_dirty(&folio->page);
 	}
-	if (!PageDirty(page)) {
-		if (!TestSetPageDirty(page))
-			return 1;
+	if (!folio_test_dirty(folio)) {
+		if (!folio_test_set_dirty(folio))
+			return true;
 	}
-	return 0;
+	return false;
 }
-EXPORT_SYMBOL(set_page_dirty);
+EXPORT_SYMBOL(folio_mark_dirty);

 /*
  * set_page_dirty() is racy if the caller has no reference against
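As a hedged sketch (not from the patch), a caller of the new API might look like this; folio_lock() and folio_unlock() are assumed from earlier patches in this series.

#include <linux/pagemap.h>

/* Illustration only: the bool return value distinguishes a clean->dirty
 * transition from a folio which was already dirty.
 */
static bool example_commit_write(struct folio *folio)
{
	bool newly_dirty;

	folio_lock(folio);	/* keep the dirty state consistent for memory errors */
	newly_dirty = folio_mark_dirty(folio);
	folio_unlock(folio);

	return newly_dirty;
}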
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 069/138] mm/writeback: Add __folio_mark_dirty() Date: Thu, 15 Jul 2021 04:35:55 +0100 Message-Id: <20210715033704.692967-70-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="g+/k9xyH"; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: c4yz881uhneq871pohk4ig8wxzfgkgby X-Rspamd-Queue-Id: B5A95F000091 X-Rspamd-Server: rspam01 X-HE-Tag: 1626323607-808271 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Turn __set_page_dirty() into a wrapper around __folio_mark_dirty(). Convert account_page_dirtied() into folio_account_dirtied() and account the number of pages in the folio to support multi-page folios. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/memcontrol.h | 5 ++--- include/linux/pagemap.h | 7 ++++++- mm/page-writeback.c | 41 +++++++++++++++++++------------------- 3 files changed, 29 insertions(+), 24 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 2dd660185bb3..c20adc22ea24 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1574,10 +1574,9 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages, void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb); -static inline void mem_cgroup_track_foreign_dirty(struct page *page, +static inline void mem_cgroup_track_foreign_dirty(struct folio *folio, struct bdi_writeback *wb) { - struct folio *folio = page_folio(page); if (mem_cgroup_disabled()) return; @@ -1602,7 +1601,7 @@ static inline void mem_cgroup_wb_stats(struct bdi_writeback *wb, { } -static inline void mem_cgroup_track_foreign_dirty(struct page *page, +static inline void mem_cgroup_track_foreign_dirty(struct folio *folio, struct bdi_writeback *wb) { } diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 08f40e004d97..3d88c17fedc9 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -773,8 +773,13 @@ void end_page_writeback(struct page *page); void folio_end_writeback(struct folio *folio); void wait_for_stable_page(struct page *page); void folio_wait_stable(struct folio *folio); +void __folio_mark_dirty(struct folio *folio, struct address_space *, int warn); +static inline void __set_page_dirty(struct page *page, + struct address_space *mapping, int warn) +{ + __folio_mark_dirty(page_folio(page), mapping, warn); +} -void __set_page_dirty(struct page *, struct address_space *, int warn); int __set_page_dirty_nobuffers(struct page *page); int __set_page_dirty_no_writeback(struct page *page); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index d7c0cad6a57f..3e02c86eb445 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2421,29 +2421,30 @@ EXPORT_SYMBOL(__set_page_dirty_no_writeback); * * NOTE: This relies on being atomic wrt interrupts. 
*/ -static void account_page_dirtied(struct page *page, +static void folio_account_dirtied(struct folio *folio, struct address_space *mapping) { struct inode *inode = mapping->host; - trace_writeback_dirty_page(page, mapping); + trace_writeback_dirty_page(&folio->page, mapping); if (mapping_can_writeback(mapping)) { struct bdi_writeback *wb; + long nr = folio_nr_pages(folio); - inode_attach_wb(inode, page); + inode_attach_wb(inode, &folio->page); wb = inode_to_wb(inode); - __inc_lruvec_page_state(page, NR_FILE_DIRTY); - __inc_zone_page_state(page, NR_ZONE_WRITE_PENDING); - __inc_node_page_state(page, NR_DIRTIED); - inc_wb_stat(wb, WB_RECLAIMABLE); - inc_wb_stat(wb, WB_DIRTIED); - task_io_account_write(PAGE_SIZE); - current->nr_dirtied++; - __this_cpu_inc(bdp_ratelimits); + __lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, nr); + __zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, nr); + __node_stat_mod_folio(folio, NR_DIRTIED, nr); + wb_stat_mod(wb, WB_RECLAIMABLE, nr); + wb_stat_mod(wb, WB_DIRTIED, nr); + task_io_account_write(nr * PAGE_SIZE); + current->nr_dirtied += nr; + __this_cpu_add(bdp_ratelimits, nr); - mem_cgroup_track_foreign_dirty(page, wb); + mem_cgroup_track_foreign_dirty(folio, wb); } } @@ -2464,24 +2465,24 @@ void account_page_cleaned(struct page *page, struct address_space *mapping, } /* - * Mark the page dirty, and set it dirty in the page cache, and mark the inode - * dirty. + * Mark the folio dirty, and set it dirty in the page cache, and mark + * the inode dirty. * - * If warn is true, then emit a warning if the page is not uptodate and has + * If warn is true, then emit a warning if the folio is not uptodate and has * not been truncated. * * The caller must hold lock_page_memcg(). */ -void __set_page_dirty(struct page *page, struct address_space *mapping, +void __folio_mark_dirty(struct folio *folio, struct address_space *mapping, int warn) { unsigned long flags; xa_lock_irqsave(&mapping->i_pages, flags); - if (page->mapping) { /* Race with truncate? */ - WARN_ON_ONCE(warn && !PageUptodate(page)); - account_page_dirtied(page, mapping); - __xa_set_mark(&mapping->i_pages, page_index(page), + if (folio->mapping) { /* Race with truncate? 
+		WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
+		folio_account_dirtied(folio, mapping);
+		__xa_set_mark(&mapping->i_pages, folio_index(folio),
 				PAGECACHE_TAG_DIRTY);
 	}
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
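The practical effect of folio_account_dirtied() is that every dirty counter now moves by the folio's page count instead of by one. A hedged illustration (not part of the patch):

#include <linux/mm.h>
#include <linux/printk.h>

/* Illustration only: dirtying a 16-page folio accounts 16 pages and
 * 16 * PAGE_SIZE bytes of task I/O, where the old per-page code always
 * accounted exactly one page.
 */
static void example_show_dirty_accounting(struct folio *folio)
{
	long nr = folio_nr_pages(folio);

	pr_debug("NR_FILE_DIRTY, NR_DIRTIED and WB_DIRTIED each move by %ld\n", nr);
}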
From patchwork Thu Jul 15 03:35:56 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 070/138] mm/writeback: Convert tracing writeback_page_template to folios
Date: Thu, 15 Jul 2021 04:35:56 +0100
Message-Id: <20210715033704.692967-71-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>

Rename writeback_dirty_page() to writeback_dirty_folio() and
wait_on_page_writeback() to folio_wait_writeback().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/trace/events/writeback.h | 20 ++++++++++----------
 mm/page-writeback.c              |  6 +++---
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 297871ca0004..7dccb66474f7 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -52,11 +52,11 @@ WB_WORK_REASON

 struct wb_writeback_work;

-DECLARE_EVENT_CLASS(writeback_page_template,
+DECLARE_EVENT_CLASS(writeback_folio_template,

-	TP_PROTO(struct page *page, struct address_space *mapping),
+	TP_PROTO(struct folio *folio, struct address_space *mapping),

-	TP_ARGS(page, mapping),
+	TP_ARGS(folio, mapping),

 	TP_STRUCT__entry (
 		__array(char, name, 32)
@@ -69,7 +69,7 @@ DECLARE_EVENT_CLASS(writeback_page_template,
 			    bdi_dev_name(mapping ? inode_to_bdi(mapping->host) :
 					 NULL), 32);
 		__entry->ino = mapping ? mapping->host->i_ino : 0;
-		__entry->index = page->index;
+		__entry->index = folio->index;
 	),

 	TP_printk("bdi %s: ino=%lu index=%lu",
@@ -79,18 +79,18 @@ DECLARE_EVENT_CLASS(writeback_page_template,
 	)
 );

-DEFINE_EVENT(writeback_page_template, writeback_dirty_page,
+DEFINE_EVENT(writeback_folio_template, writeback_dirty_folio,

-	TP_PROTO(struct page *page, struct address_space *mapping),
+	TP_PROTO(struct folio *folio, struct address_space *mapping),

-	TP_ARGS(page, mapping)
+	TP_ARGS(folio, mapping)
 );

-DEFINE_EVENT(writeback_page_template, wait_on_page_writeback,
+DEFINE_EVENT(writeback_folio_template, folio_wait_writeback,

-	TP_PROTO(struct page *page, struct address_space *mapping),
+	TP_PROTO(struct folio *folio, struct address_space *mapping),

-	TP_ARGS(page, mapping)
+	TP_ARGS(folio, mapping)
 );

 DECLARE_EVENT_CLASS(writeback_dirty_inode_template,
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3e02c86eb445..2dc410b110ff 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2426,7 +2426,7 @@ static void folio_account_dirtied(struct folio *folio,
 {
 	struct inode *inode = mapping->host;

-	trace_writeback_dirty_page(&folio->page, mapping);
+	trace_writeback_dirty_folio(folio, mapping);

 	if (mapping_can_writeback(mapping)) {
 		struct bdi_writeback *wb;
@@ -2852,7 +2852,7 @@ EXPORT_SYMBOL(__folio_start_writeback);
 void folio_wait_writeback(struct folio *folio)
 {
 	while (folio_test_writeback(folio)) {
-		trace_wait_on_page_writeback(&folio->page, folio_mapping(folio));
+		trace_folio_wait_writeback(folio, folio_mapping(folio));
 		folio_wait_bit(folio, PG_writeback);
 	}
 }
@@ -2874,7 +2874,7 @@ EXPORT_SYMBOL_GPL(folio_wait_writeback);
 int folio_wait_writeback_killable(struct folio *folio)
 {
 	while (folio_test_writeback(folio)) {
-		trace_wait_on_page_writeback(&folio->page, folio_mapping(folio));
+		trace_folio_wait_writeback(folio, folio_mapping(folio));
 		if (folio_wait_bit_killable(folio, PG_writeback))
 			return -EINTR;
 	}
From patchwork Thu Jul 15 03:35:57 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 071/138] mm/writeback: Add filemap_dirty_folio()
Date: Thu, 15 Jul 2021 04:35:57 +0100
Message-Id: <20210715033704.692967-72-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>

Reimplement __set_page_dirty_nobuffers() as a wrapper around
filemap_dirty_folio().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/writeback.h |  1 +
 mm/folio-compat.c         |  6 ++++
 mm/page-writeback.c       | 60 ++++++++++++++++++++-------------------
 3 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 667e86cfbdcf..eda9cc778ef6 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -398,6 +398,7 @@ void writeback_set_ratelimit(void);
 void tag_pages_for_writeback(struct address_space *mapping,
 			     pgoff_t start, pgoff_t end);

+bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio);
 void account_page_redirty(struct page *page);

 void sb_mark_inode_writeback(struct inode *inode);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 2c2b3917b5dc..dad962b920e5 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -83,3 +83,9 @@ bool set_page_dirty(struct page *page)
 	return folio_mark_dirty(page_folio(page));
 }
 EXPORT_SYMBOL(set_page_dirty);
+
+int __set_page_dirty_nobuffers(struct page *page)
+{
+	return filemap_dirty_folio(page_mapping(page), page_folio(page));
+}
+EXPORT_SYMBOL(__set_page_dirty_nobuffers);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2dc410b110ff..bd97c461d499 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2488,41 +2488,43 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 }

-/*
- * For address_spaces which do not use buffers. Just tag the page as dirty in
- * the xarray.
- *
- * This is also used when a single buffer is being dirtied: we want to set the
- * page dirty in that case, but not all the buffers. This is a "bottom-up"
- * dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying.
- *
- * The caller must ensure this doesn't race with truncation. Most will simply
- * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
- * the pte lock held, which also locks out truncation.
+/**
+ * filemap_dirty_folio - Mark a folio dirty for filesystems which do not use buffer_heads.
+ * @mapping: Address space this folio belongs to.
+ * @folio: Folio to be marked as dirty.
+ *
+ * Filesystems which do not use buffer heads should call this function
+ * from their set_page_dirty address space operation.  It ignores the
+ * contents of folio_get_private(), so if the filesystem marks individual
+ * blocks as dirty, the filesystem should handle that itself.
+ *
+ * This is also sometimes used by filesystems which use buffer_heads when
+ * a single buffer is being dirtied: we want to set the folio dirty in
+ * that case, but not all the buffers.
This is a "bottom-up" dirtying, + * whereas __set_page_dirty_buffers() is a "top-down" dirtying. + * + * The caller must ensure this doesn't race with truncation. Most will + * simply hold the folio lock, but e.g. zap_pte_range() calls with the + * folio mapped and the pte lock held, which also locks out truncation. */ -int __set_page_dirty_nobuffers(struct page *page) +bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio) { - lock_page_memcg(page); - if (!TestSetPageDirty(page)) { - struct address_space *mapping = page_mapping(page); + folio_memcg_lock(folio); + if (folio_test_set_dirty(folio)) { + folio_memcg_unlock(folio); + return false; + } - if (!mapping) { - unlock_page_memcg(page); - return 1; - } - __set_page_dirty(page, mapping, !PagePrivate(page)); - unlock_page_memcg(page); + __folio_mark_dirty(folio, mapping, !folio_test_private(folio)); + folio_memcg_unlock(folio); - if (mapping->host) { - /* !PageAnon && !swapper_space */ - __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); - } - return 1; + if (mapping->host) { + /* !PageAnon && !swapper_space */ + __mark_inode_dirty(mapping->host, I_DIRTY_PAGES); } - unlock_page_memcg(page); - return 0; + return true; } -EXPORT_SYMBOL(__set_page_dirty_nobuffers); +EXPORT_SYMBOL(filemap_dirty_folio); /* * Call this whenever redirtying a page, to de-account the dirty counters From patchwork Thu Jul 15 03:35:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378707 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3EEE6C07E96 for ; Thu, 15 Jul 2021 04:35:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D248661360 for ; Thu, 15 Jul 2021 04:35:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D248661360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 25A158D004E; Thu, 15 Jul 2021 00:35:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2307E8D004B; Thu, 15 Jul 2021 00:35:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0AAD68D004E; Thu, 15 Jul 2021 00:35:21 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0068.hostedemail.com [216.40.44.68]) by kanga.kvack.org (Postfix) with ESMTP id D380D8D004B for ; Thu, 15 Jul 2021 00:35:20 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id C8DD31F862 for ; Thu, 15 Jul 2021 04:35:19 +0000 (UTC) X-FDA: 78363557958.09.2785C1C Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf08.hostedemail.com (Postfix) with ESMTP id 8B09130000B0 for ; Thu, 15 Jul 2021 04:35:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; 
From patchwork Thu Jul 15 03:35:58 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 072/138] mm/writeback: Add folio_account_cleaned()
Date: Thu, 15 Jul 2021 04:35:58 +0100
Message-Id: <20210715033704.692967-73-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>

Get the statistics right; compound pages were being accounted as a
single page.  This didn't matter before now as no filesystem which
supported compound pages did writeback.  Also move the declaration to
pagemap.h since this is part of the page cache.  Add a wrapper for
account_page_cleaned().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/mm.h      |  3 ---
 include/linux/pagemap.h |  7 +++++++
 mm/page-writeback.c     | 11 ++++++-----
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 43c1b5731c7f..481019481d10 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -39,7 +39,6 @@ struct anon_vma_chain;
 struct file_ra_state;
 struct user_struct;
 struct writeback_control;
-struct bdi_writeback;
 struct pt_regs;

 extern int sysctl_page_lock_unfairness;
@@ -2003,8 +2002,6 @@ extern void do_invalidatepage(struct page *page, unsigned int offset,
 int redirty_page_for_writepage(struct writeback_control *wbc,
 				struct page *page);
-void account_page_cleaned(struct page *page, struct address_space *mapping,
-			  struct bdi_writeback *wb);
 bool folio_mark_dirty(struct folio *folio);
 bool set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 3d88c17fedc9..665ba6a67385 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -779,6 +779,13 @@ static inline void __set_page_dirty(struct page *page,
 {
 	__folio_mark_dirty(page_folio(page), mapping, warn);
 }
+void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
+			  struct bdi_writeback *wb);
+static inline void account_page_cleaned(struct page *page,
+		struct address_space *mapping, struct bdi_writeback *wb)
+{
+	return folio_account_cleaned(page_folio(page), mapping, wb);
+}

 int __set_page_dirty_nobuffers(struct page *page);
 int __set_page_dirty_no_writeback(struct page *page);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bd97c461d499..792a83bd3917 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2453,14 +2453,15 @@ static void folio_account_dirtied(struct folio *folio,
 *
 * Caller must hold lock_page_memcg().
 */
-void account_page_cleaned(struct page *page, struct address_space *mapping,
+void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
			  struct bdi_writeback *wb)
 {
 	if (mapping_can_writeback(mapping)) {
-		dec_lruvec_page_state(page, NR_FILE_DIRTY);
-		dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
-		dec_wb_stat(wb, WB_RECLAIMABLE);
-		task_io_account_cancelled_write(PAGE_SIZE);
+		long nr = folio_nr_pages(folio);
+		lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, -nr);
+		zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
+		wb_stat_mod(wb, WB_RECLAIMABLE, -nr);
+		task_io_account_cancelled_write(folio_size(folio));
 	}
 }
From patchwork Thu Jul 15 03:35:59 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 073/138] mm/writeback: Add folio_cancel_dirty()
Date: Thu, 15 Jul 2021 04:35:59 +0100
Message-Id: <20210715033704.692967-74-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>

Turn __cancel_dirty_page() into __folio_cancel_dirty() and add wrappers.
Move the prototypes into pagemap.h since this is page cache
functionality.  Saves 44 bytes of kernel text in total; 33 bytes from
__folio_cancel_dirty and 11 from two callers of cancel_dirty_page().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/mm.h      |  7 -------
 include/linux/pagemap.h | 11 +++++++++++
 mm/page-writeback.c     | 16 ++++++++--------
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 481019481d10..07ba22351d15 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2005,13 +2005,6 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 bool folio_mark_dirty(struct folio *folio);
 bool set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
-void __cancel_dirty_page(struct page *page);
-static inline void cancel_dirty_page(struct page *page)
-{
-	/* Avoid atomic ops, locking, etc. when not actually needed. */
-	if (PageDirty(page))
-		__cancel_dirty_page(page);
-}
 int clear_page_dirty_for_io(struct page *page);

 int get_cmdline(struct task_struct *task, char *buffer, int buflen);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 665ba6a67385..a4d0aeaf884d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -786,6 +786,17 @@ static inline void account_page_cleaned(struct page *page,
 {
 	return folio_account_cleaned(page_folio(page), mapping, wb);
 }
+void __folio_cancel_dirty(struct folio *folio);
+static inline void folio_cancel_dirty(struct folio *folio)
+{
+	/* Avoid atomic ops, locking, etc. when not actually needed. */
+	if (folio_test_dirty(folio))
+		__folio_cancel_dirty(folio);
+}
+static inline void cancel_dirty_page(struct page *page)
+{
+	folio_cancel_dirty(page_folio(page));
+}

 int __set_page_dirty_nobuffers(struct page *page);
 int __set_page_dirty_no_writeback(struct page *page);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 792a83bd3917..0854ef768d06 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2640,28 +2640,28 @@ EXPORT_SYMBOL(set_page_dirty_lock);
 * page without actually doing it through the VM. Can you say "ext3 is
 * horribly ugly"? Thought you could.
 */
-void __cancel_dirty_page(struct page *page)
+void __folio_cancel_dirty(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping(page);
+	struct address_space *mapping = folio_mapping(folio);

 	if (mapping_can_writeback(mapping)) {
 		struct inode *inode = mapping->host;
 		struct bdi_writeback *wb;
 		struct wb_lock_cookie cookie = {};

-		lock_page_memcg(page);
+		folio_memcg_lock(folio);
 		wb = unlocked_inode_to_wb_begin(inode, &cookie);

-		if (TestClearPageDirty(page))
-			account_page_cleaned(page, mapping, wb);
+		if (folio_test_clear_dirty(folio))
+			folio_account_cleaned(folio, mapping, wb);

 		unlocked_inode_to_wb_end(inode, &cookie);
-		unlock_page_memcg(page);
+		folio_memcg_unlock(folio);
 	} else {
-		ClearPageDirty(page);
+		folio_clear_dirty(folio);
 	}
 }
-EXPORT_SYMBOL(__cancel_dirty_page);
+EXPORT_SYMBOL(__folio_cancel_dirty);

 /*
  * Clear a page's dirty flag, while caring for dirty memory accounting.
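A hedged sketch (not from the patch) of the intended caller: a truncate-style path that throws a folio's contents away can cancel its dirtiness instead of writing it back, and the folio_test_dirty() fast path in folio_cancel_dirty() avoids the locked work when the folio is already clean.

#include <linux/pagemap.h>

/* Illustration only: discard a folio's dirty state without writeback,
 * de-accounting NR_FILE_DIRTY etc. if it was in fact dirty.
 */
static void example_invalidate_folio(struct folio *folio)
{
	folio_cancel_dirty(folio);
}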
From patchwork Thu Jul 15 03:36:00 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 074/138] mm/writeback: Add folio_clear_dirty_for_io()
Date: Thu, 15 Jul 2021 04:36:00 +0100
Message-Id: <20210715033704.692967-75-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>

Transform clear_page_dirty_for_io() into folio_clear_dirty_for_io()
and add a compatibility wrapper.  Also move the declaration to
pagemap.h as this is page cache functionality that doesn't need to
be used by the rest of the kernel.

Increases the size of the kernel by 79 bytes.  While we remove a few
calls to compound_head(), we add a call to folio_nr_pages() to get the
stats correct for the eventual support of multi-page folios.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Vlastimil Babka
---
 include/linux/mm.h      |  1 -
 include/linux/pagemap.h |  2 ++
 mm/folio-compat.c       |  6 ++++
 mm/page-writeback.c     | 63 +++++++++++++++++++++--------------------
 4 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 07ba22351d15..26883ea28349 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2005,7 +2005,6 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 bool folio_mark_dirty(struct folio *folio);
 bool set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
-int clear_page_dirty_for_io(struct page *page);

 int get_cmdline(struct task_struct *task, char *buffer, int buflen);

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a4d0aeaf884d..006de2d84d06 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -797,6 +797,8 @@ static inline void cancel_dirty_page(struct page *page)
 {
 	folio_cancel_dirty(page_folio(page));
 }
+bool folio_clear_dirty_for_io(struct folio *folio);
+bool clear_page_dirty_for_io(struct page *page);

 int __set_page_dirty_nobuffers(struct page *page);
 int __set_page_dirty_no_writeback(struct page *page);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index dad962b920e5..39f5a8d963b1 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -89,3 +89,9 @@ int __set_page_dirty_nobuffers(struct page *page)
 	return filemap_dirty_folio(page_mapping(page), page_folio(page));
 }
 EXPORT_SYMBOL(__set_page_dirty_nobuffers);
+
+bool clear_page_dirty_for_io(struct page *page)
+{
+	return folio_clear_dirty_for_io(page_folio(page));
+}
+EXPORT_SYMBOL(clear_page_dirty_for_io);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 0854ef768d06..66060bbf6aad 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2664,25 +2664,25 @@ void __folio_cancel_dirty(struct folio *folio)
 EXPORT_SYMBOL(__folio_cancel_dirty);

 /*
- * Clear a page's dirty flag, while caring for dirty memory accounting.
- * Returns true if the page was previously dirty.
- *
- * This is for preparing to put the page under writeout. We leave the page
- * tagged as dirty in the xarray so that a concurrent write-for-sync
- * can discover it via a PAGECACHE_TAG_DIRTY walk. The ->writepage
- * implementation will run either set_page_writeback() or set_page_dirty(),
- * at which stage we bring the page's dirty flag and xarray dirty tag
- * back into sync.
- *
- * This incoherency between the page's dirty flag and xarray tag is
- * unfortunate, but it only exists while the page is locked.
+ * Clear a folio's dirty flag, while caring for dirty memory accounting.
+ * Returns true if the folio was previously dirty.
+ *
+ * This is for preparing to put the folio under writeout. We leave
+ * the folio tagged as dirty in the xarray so that a concurrent
+ * write-for-sync can discover it via a PAGECACHE_TAG_DIRTY walk.
+ * The ->writepage implementation will run either folio_start_writeback()
+ * or folio_mark_dirty(), at which stage we bring the folio's dirty flag
+ * and xarray dirty tag back into sync.
+ *
+ * This incoherency between the folio's dirty flag and xarray tag is
+ * unfortunate, but it only exists while the folio is locked.
 */
-int clear_page_dirty_for_io(struct page *page)
+bool folio_clear_dirty_for_io(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping(page);
-	int ret = 0;
+	struct address_space *mapping = folio_mapping(folio);
+	bool ret = false;

-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

 	if (mapping && mapping_can_writeback(mapping)) {
 		struct inode *inode = mapping->host;
@@ -2695,48 +2695,49 @@ int clear_page_dirty_for_io(struct page *page)
 		 * We use this sequence to make sure that
 		 *  (a) we account for dirty stats properly
 		 *  (b) we tell the low-level filesystem to
-		 *      mark the whole page dirty if it was
+		 *      mark the whole folio dirty if it was
 		 *      dirty in a pagetable. Only to then
-		 *  (c) clean the page again and return 1 to
+		 *  (c) clean the folio again and return 1 to
 		 *      cause the writeback.
 		 *
 		 * This way we avoid all nasty races with the
 		 * dirty bit in multiple places and clearing
 		 * them concurrently from different threads.
 		 *
-		 * Note! Normally the "set_page_dirty(page)"
+		 * Note! Normally the "folio_mark_dirty(folio)"
 		 * has no effect on the actual dirty bit - since
 		 * that will already usually be set. But we
 		 * need the side effects, and it can help us
 		 * avoid races.
 		 *
-		 * We basically use the page "master dirty bit"
+		 * We basically use the folio "master dirty bit"
 		 * as a serialization point for all the different
 		 * threads doing their things.
 		 */
-		if (page_mkclean(page))
-			set_page_dirty(page);
+		if (folio_mkclean(folio))
+			folio_mark_dirty(folio);
 		/*
 		 * We carefully synchronise fault handlers against
-		 * installing a dirty pte and marking the page dirty
+		 * installing a dirty pte and marking the folio dirty
 		 * at this point.  We do this by having them hold the
-		 * page lock while dirtying the page, and pages are
+		 * page lock while dirtying the folio, and folios are
 		 * always locked coming in here, so we get the desired
 		 * exclusion.
 		 */
 		wb = unlocked_inode_to_wb_begin(inode, &cookie);
-		if (TestClearPageDirty(page)) {
-			dec_lruvec_page_state(page, NR_FILE_DIRTY);
-			dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
-			dec_wb_stat(wb, WB_RECLAIMABLE);
-			ret = 1;
+		if (folio_test_clear_dirty(folio)) {
+			long nr = folio_nr_pages(folio);
+			lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, -nr);
+			zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
+			wb_stat_mod(wb, WB_RECLAIMABLE, -nr);
+			ret = true;
 		}
 		unlocked_inode_to_wb_end(inode, &cookie);
 		return ret;
 	}
-	return TestClearPageDirty(page);
+	return folio_test_clear_dirty(folio);
 }
-EXPORT_SYMBOL(clear_page_dirty_for_io);
+EXPORT_SYMBOL(folio_clear_dirty_for_io);

 bool __folio_end_writeback(struct folio *folio)
 {
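To show the handshake the comment above describes, here is a minimal sketch (not part of the patch) of a writeout path: clear the dirty flag while the xarray tag stays set, then start writeback; folio_start_writeback() is assumed from an earlier patch in this series.

#include <linux/pagemap.h>

/* Illustration only: the folio must be locked, as the VM_BUG_ON in
 * folio_clear_dirty_for_io() asserts.
 */
static void example_writeout(struct folio *folio)
{
	if (folio_clear_dirty_for_io(folio)) {
		folio_start_writeback(folio);	/* re-syncs flag and xarray tag */
		/* ... submit I/O, then folio_end_writeback() on completion ... */
	}
}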
From patchwork Thu Jul 15 03:36:01 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 075/138] mm/writeback: Add folio_account_redirty()
Date: Thu, 15 Jul 2021 04:36:01 +0100
Message-Id: <20210715033704.692967-76-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>

Account the number of pages in the folio that we're redirtying.
Turn account_page_redirty() into a wrapper around it.  Also turn
the comment on folio_account_redirty() into kernel-doc and edit it
slightly so it makes sense to its potential callers.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/writeback.h |  6 +++++-
 mm/page-writeback.c       | 32 +++++++++++++++++++-------------
 2 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index eda9cc778ef6..50cb6e25ab9e 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -399,7 +399,11 @@ void tag_pages_for_writeback(struct address_space *mapping,
 			     pgoff_t start, pgoff_t end);

 bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio);
-void account_page_redirty(struct page *page);
+void folio_account_redirty(struct folio *folio);
+static inline void account_page_redirty(struct page *page)
+{
+	folio_account_redirty(page_folio(page));
+}

 void sb_mark_inode_writeback(struct inode *inode);
 void sb_clear_inode_writeback(struct inode *inode);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 66060bbf6aad..d7bd5580c91e 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1084,7 +1084,7 @@ static void wb_update_write_bandwidth(struct bdi_writeback *wb,
 	 * write_bandwidth = ---------------------------------------------------
 	 *                                          period
 	 *
-	 * @written may have decreased due to account_page_redirty().
+	 * @written may have decreased due to folio_account_redirty().
 	 * Avoid underflowing @bw calculation.
 	 */
 	bw = written - min(written, wb->written_stamp);
@@ -2527,30 +2527,36 @@ bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio)
 }
 EXPORT_SYMBOL(filemap_dirty_folio);

-/*
- * Call this whenever redirtying a page, to de-account the dirty counters
- * (NR_DIRTIED, WB_DIRTIED, tsk->nr_dirtied), so that they match the written
- * counters (NR_WRITTEN, WB_WRITTEN) in long term. The mismatches will lead to
- * systematic errors in balanced_dirty_ratelimit and the dirty pages position
- * control.
+/**
+ * folio_account_redirty - Manually account for redirtying a page.
+ * @folio: The folio which is being redirtied.
+ * + * Most filesystems should call folio_redirty_for_writepage() instead + * of this function. If your filesystem is doing writeback outside the + * context of a writeback_control(), it can call this when redirtying + * a folio, to de-account the dirty counters (NR_DIRTIED, WB_DIRTIED, + * tsk->nr_dirtied), so that they match the written counters (NR_WRITTEN, + * WB_WRITTEN) in long term. The mismatches will lead to systematic errors + * in balanced_dirty_ratelimit and the dirty pages position control. */ -void account_page_redirty(struct page *page) +void folio_account_redirty(struct folio *folio) { - struct address_space *mapping = page->mapping; + struct address_space *mapping = folio->mapping; if (mapping && mapping_can_writeback(mapping)) { struct inode *inode = mapping->host; struct bdi_writeback *wb; struct wb_lock_cookie cookie = {}; + unsigned nr = folio_nr_pages(folio); wb = unlocked_inode_to_wb_begin(inode, &cookie); - current->nr_dirtied--; - dec_node_page_state(page, NR_DIRTIED); - dec_wb_stat(wb, WB_DIRTIED); + current->nr_dirtied -= nr; + node_stat_mod_folio(folio, NR_DIRTIED, -nr); + wb_stat_mod(wb, WB_DIRTIED, -nr); unlocked_inode_to_wb_end(inode, &cookie); } } -EXPORT_SYMBOL(account_page_redirty); +EXPORT_SYMBOL(folio_account_redirty); /* * When a writepage implementation decides that it doesn't want to write this From patchwork Thu Jul 15 03:36:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378729 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4473C47E48 for ; Thu, 15 Jul 2021 04:39:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7EED261360 for ; Thu, 15 Jul 2021 04:39:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7EED261360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CB4928D0052; Thu, 15 Jul 2021 00:39:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C8B5C8D004B; Thu, 15 Jul 2021 00:39:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B2DA58D0052; Thu, 15 Jul 2021 00:39:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0061.hostedemail.com [216.40.44.61]) by kanga.kvack.org (Postfix) with ESMTP id 8F4428D004B for ; Thu, 15 Jul 2021 00:39:04 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 78AA81838A386 for ; Thu, 15 Jul 2021 04:39:03 +0000 (UTC) X-FDA: 78363567366.22.BF223D5 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id 1DC095012D5D for ; Thu, 15 Jul 2021 04:39:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt;
c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=xSYyCZy/TvQ3u8VOTIJK1lZXh/nrQMsTgRx2OIA2bZw=; b=b6QtwZG9tFujFqlget7DGrKzwT pXdWDhQEUqksMpi87+Smw4taRauOZ3CR8HwvXYKO+/JY5456gHUQhF9+Ck7qLebuaV4ajBmcisEYE XR1WpFszjoR2DOdh0cI7djcGxQOLeZrxEwOjOaMhRfgdyOgH7WVsQ63iHh6tlYuoykfrvEuSGcRUl 5Qcxp/GLLNlnZe4zfX9Wzh0Qoyf+4znFje7DhUePskW+BnC9lLCgP6I6lr0Bvh33Ytqv8fxN1UllE kASPkXj4QK0UBgjfXqHN6tnzM3iNvD400uxM/ZUxGW0F2ck7yxqx37nuUgjxEdkEgjxYaSuEqJrv7 pj4/725g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3t7U-002y0P-ST; Thu, 15 Jul 2021 04:37:28 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 076/138] mm/writeback: Add folio_redirty_for_writepage() Date: Thu, 15 Jul 2021 04:36:02 +0100 Message-Id: <20210715033704.692967-77-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=b6QtwZG9; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 1DC095012D5D X-Stat-Signature: 5o5p84djgkqdmpagku9apunhjbjgeq96 X-HE-Tag: 1626323942-506164 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Reimplement redirty_page_for_writepage() as a wrapper around folio_redirty_for_writepage(). Account the number of pages in the folio, add kernel-doc and move the prototype to writeback.h. 
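For illustration only (not part of the patch): a converted ->writepage might decline a folio as in the hedged sketch below, following the sequence the kernel-doc describes (redirty, unlock, return 0). example_can_write() and example_do_write() are hypothetical stand-ins for filesystem-specific logic.

static int example_writepage(struct page *page, struct writeback_control *wbc)
{
	struct folio *folio = page_folio(page);

	if (!example_can_write(folio)) {
		/* Decline to write; accounts for every page in the folio. */
		folio_redirty_for_writepage(wbc, folio);
		folio_unlock(folio);
		return 0;
	}
	return example_do_write(folio, wbc);
}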
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- fs/jfs/jfs_metapage.c | 1 + include/linux/mm.h | 4 ---- include/linux/writeback.h | 2 ++ mm/folio-compat.c | 7 +++++++ mm/page-writeback.c | 30 ++++++++++++++++++++---------- 5 files changed, 30 insertions(+), 14 deletions(-) diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c index 176580f54af9..104ae698443e 100644 --- a/fs/jfs/jfs_metapage.c +++ b/fs/jfs/jfs_metapage.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "jfs_incore.h" #include "jfs_superblock.h" #include "jfs_filsys.h" diff --git a/include/linux/mm.h b/include/linux/mm.h index 26883ea28349..4803f2c01367 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -36,9 +36,7 @@ struct mempolicy; struct anon_vma; struct anon_vma_chain; -struct file_ra_state; struct user_struct; -struct writeback_control; struct pt_regs; extern int sysctl_page_lock_unfairness; @@ -2000,8 +1998,6 @@ extern int try_to_release_page(struct page * page, gfp_t gfp_mask); extern void do_invalidatepage(struct page *page, unsigned int offset, unsigned int length); -int redirty_page_for_writepage(struct writeback_control *wbc, - struct page *page); bool folio_mark_dirty(struct folio *folio); bool set_page_dirty(struct page *page); int set_page_dirty_lock(struct page *page); diff --git a/include/linux/writeback.h b/include/linux/writeback.h index 50cb6e25ab9e..5383f7e39816 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -404,6 +404,8 @@ static inline void account_page_redirty(struct page *page) { folio_account_redirty(page_folio(page)); } +bool folio_redirty_for_writepage(struct writeback_control *, struct folio *); +bool redirty_page_for_writepage(struct writeback_control *, struct page *); void sb_mark_inode_writeback(struct inode *inode); void sb_clear_inode_writeback(struct inode *inode); diff --git a/mm/folio-compat.c b/mm/folio-compat.c index 39f5a8d963b1..c1e01bc36d32 100644 --- a/mm/folio-compat.c +++ b/mm/folio-compat.c @@ -95,3 +95,10 @@ bool clear_page_dirty_for_io(struct page *page) return folio_clear_dirty_for_io(page_folio(page)); } EXPORT_SYMBOL(clear_page_dirty_for_io); + +bool redirty_page_for_writepage(struct writeback_control *wbc, + struct page *page) +{ + return folio_redirty_for_writepage(wbc, page_folio(page)); +} +EXPORT_SYMBOL(redirty_page_for_writepage); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index d7bd5580c91e..c2987f05c944 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2558,21 +2558,31 @@ void folio_account_redirty(struct folio *folio) } EXPORT_SYMBOL(folio_account_redirty); -/* - * When a writepage implementation decides that it doesn't want to write this - * page for some reason, it should redirty the locked page via - * redirty_page_for_writepage() and it should then unlock the page and return 0 +/** + * folio_redirty_for_writepage - Decline to write a dirty folio. + * @wbc: The writeback control. + * @folio: The folio. + * + * When a writepage implementation decides that it doesn't want to write + * @folio for some reason, it should call this function, unlock @folio and + * return 0. + * + * Return: True if we redirtied the folio. False if someone else dirtied + * it first. 
*/ -int redirty_page_for_writepage(struct writeback_control *wbc, struct page *page) +bool folio_redirty_for_writepage(struct writeback_control *wbc, + struct folio *folio) { - int ret; + bool ret; + unsigned nr = folio_nr_pages(folio); + + wbc->pages_skipped += nr; + ret = filemap_dirty_folio(folio->mapping, folio); + folio_account_redirty(folio); - wbc->pages_skipped++; - ret = __set_page_dirty_nobuffers(page); - account_page_redirty(page); return ret; } -EXPORT_SYMBOL(redirty_page_for_writepage); +EXPORT_SYMBOL(folio_redirty_for_writepage); /** * folio_mark_dirty - Mark a folio as being modified. From patchwork Thu Jul 15 03:36:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63D46C1B08C for ; Thu, 15 Jul 2021 04:40:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id ECF2A61362 for ; Thu, 15 Jul 2021 04:40:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ECF2A61362 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 400118D0053; Thu, 15 Jul 2021 00:40:46 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3D6E68D004B; Thu, 15 Jul 2021 00:40:46 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 29EDE8D0053; Thu, 15 Jul 2021 00:40:46 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0004.hostedemail.com [216.40.44.4]) by kanga.kvack.org (Postfix) with ESMTP id 077158D004B for ; Thu, 15 Jul 2021 00:40:45 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id DB454184818D0 for ; Thu, 15 Jul 2021 04:40:44 +0000 (UTC) X-FDA: 78363571608.22.BC28147 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf12.hostedemail.com (Postfix) with ESMTP id 6683310000A5 for ; Thu, 15 Jul 2021 04:40:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=MiJJVUeAUUm2avWMl+iruNtjhb6DvEHcG3LyC3D+Sk0=; b=NVyzW/MQ0CjOGOpiSX8Wi7vfed ToKC8zpoWsOkjxghObCfKRRFZxQ48dyPYnb1mMAKlA6dLhEejWb3k0zr2/F0zp5XZ3XcDfNmKyRrL 3atBQ3VS7ZoNS+17NAF+Nuy7uDeZoD/eM0WYeOmqA45oF1X2nZoqhJgWzbTc3v54GgaWFo9UAngsm J89yxgFNN/3CloQtWZ7uK7JDtSsJdnkgvrTl/1Jnt0u/cWIItC8lCGuRS7kYSIYkZSurb3Vm/zsIZ axUQ50uQSfzod8/8RbWmwrPCW3Hl4Mi0Hq07KJ2tWSI5Xwcw45hnWnk9i1RC1lXSwB5Tvfl7q/cwG 5X+hnt5Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3t8o-002y7f-F1; Thu, 15 Jul 2021 04:38:51 +0000 
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 077/138] mm/filemap: Add i_blocks_per_folio() Date: Thu, 15 Jul 2021 04:36:03 +0100 Message-Id: <20210715033704.692967-78-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="NVyzW/MQ"; spf=none (imf12.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: 5ad4js7gc4mttb4twajzkfh7tgxwcn3j X-Rspamd-Queue-Id: 6683310000A5 X-HE-Tag: 1626324043-311018 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Reimplement i_blocks_per_page() as a wrapper around i_blocks_per_folio(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/pagemap.h | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 006de2d84d06..412db88b8d0c 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -1150,19 +1150,25 @@ static inline int page_mkwrite_check_truncate(struct page *page, } /** - * i_blocks_per_page - How many blocks fit in this page. + * i_blocks_per_folio - How many blocks fit in this folio. * @inode: The inode which contains the blocks. - * @page: The page (head page if the page is a THP). + * @folio: The folio. * - * If the block size is larger than the size of this page, return zero. + * If the block size is larger than the size of this folio, return zero. * - * Context: The caller should hold a refcount on the page to prevent it + * Context: The caller should hold a refcount on the folio to prevent it * from being split. - * Return: The number of filesystem blocks covered by this page. + * Return: The number of filesystem blocks covered by this folio. 
*/ +static inline +unsigned int i_blocks_per_folio(struct inode *inode, struct folio *folio) +{ + return folio_size(folio) >> inode->i_blkbits; +} + static inline unsigned int i_blocks_per_page(struct inode *inode, struct page *page) { - return thp_size(page) >> inode->i_blkbits; + return i_blocks_per_folio(inode, page_folio(page)); } #endif /* _LINUX_PAGEMAP_H */ From patchwork Thu Jul 15 03:36:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A515EC07E96 for ; Thu, 15 Jul 2021 04:41:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4EEC361360 for ; Thu, 15 Jul 2021 04:41:12 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4EEC361360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A40CD8D0054; Thu, 15 Jul 2021 00:41:12 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A175F8D004B; Thu, 15 Jul 2021 00:41:12 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8E0408D0054; Thu, 15 Jul 2021 00:41:12 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0205.hostedemail.com [216.40.44.205]) by kanga.kvack.org (Postfix) with ESMTP id 6B4468D004B for ; Thu, 15 Jul 2021 00:41:12 -0400 (EDT) Received: from smtpin38.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 583BE824556B for ; Thu, 15 Jul 2021 04:41:11 +0000 (UTC) X-FDA: 78363572742.38.9739EE0 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id BDCD350000A6 for ; Thu, 15 Jul 2021 04:41:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Y5UinLMttKqw/QQSFxAT646FpHEwMgiRMjMa0npizhw=; b=WFH6dlsELQVVjX+HV0NFGHdH0w eGSPPKkomxBY/it3OaWwIJtUH3cn/z61yOH2qmOC9bnKnvtnrdYsI7VaucKXIf0iN3o/fLuMXQqWN cXmR7nM2/96m5Ef5p1XljULiwp/SvpQpri892bhQWR8ThSYAz7lsfXONLYqW5wXFkuR69bXPRAMfl 2P3W7FLuQqA/pGWPvMQFTniqu9us+wmqZ0MMdu6hUTjcgzyl6k9Lrb7aU/AlJqNNf1a2RHSm995yd wDUtr9Z8BDkH1rsj1Vd0WdH6ECkWYO+ncm3Qj2MwyWhMwK7BLzFi2GeMsXg2MqnC6n0qVdhGFFuWK G0fBBm4w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3t9v-002yBw-SA; Thu, 15 Jul 2021 04:40:05 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 078/138] mm/filemap: Add folio_mkwrite_check_truncate() Date: Thu, 15 
Jul 2021 04:36:04 +0100 Message-Id: <20210715033704.692967-79-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=WFH6dlsE; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: uzd1a948oemxd4qxyzipnyc96dn7794o X-Rspamd-Queue-Id: BDCD350000A6 X-HE-Tag: 1626324070-797605 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is the folio equivalent of page_mkwrite_check_truncate(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: David Howells --- include/linux/pagemap.h | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 412db88b8d0c..18c06c3e42c3 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -1121,6 +1121,34 @@ static inline unsigned long dir_pages(struct inode *inode) PAGE_SHIFT; } +/** + * folio_mkwrite_check_truncate - check if folio was truncated + * @folio: the folio to check + * @inode: the inode to check the folio against + * + * Return: the number of bytes in the folio up to EOF, + * or -EFAULT if the folio was truncated. + */ +static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio, + struct inode *inode) +{ + loff_t size = i_size_read(inode); + pgoff_t index = size >> PAGE_SHIFT; + size_t offset = offset_in_folio(folio, size); + + if (!folio->mapping) + return -EFAULT; + + /* folio is wholly inside EOF */ + if (folio_next_index(folio) - 1 < index) + return folio_size(folio); + /* folio is wholly past EOF */ + if (folio->index > index || !offset) + return -EFAULT; + /* folio is partially inside EOF */ + return offset; +} + /** * page_mkwrite_check_truncate - check if page was truncated * @page: the page to check From patchwork Thu Jul 15 03:36:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B956C47E48 for ; Thu, 15 Jul 2021 04:41:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0F6F861362 for ; Thu, 15 Jul 2021 04:41:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0F6F861362 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 621188D0055; Thu, 15 Jul 2021 00:41:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5F82A8D004B; Thu, 15 Jul 2021 00:41:51 -0400 (EDT) 
X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4C0E58D0055; Thu, 15 Jul 2021 00:41:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0179.hostedemail.com [216.40.44.179]) by kanga.kvack.org (Postfix) with ESMTP id 29B3A8D004B for ; Thu, 15 Jul 2021 00:41:51 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id F35EA20F0F for ; Thu, 15 Jul 2021 04:41:49 +0000 (UTC) X-FDA: 78363574380.20.E494997 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf30.hostedemail.com (Postfix) with ESMTP id 85819E00181A for ; Thu, 15 Jul 2021 04:41:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=mOZR97vqhEKo31GvUk0y72ueq9pa4KcTUloIAzr8f6s=; b=vsjO4l71AveQPK3b7poPZDY+0m mc6AMqFx5wcad/fdRUbeup8u/oQV529KHDjd+GvuzVkNS6QyDXLH5Vj2KrkN1Zq6jOw3ejAvLgcQD BstOrfcoMcpILjKjR+wKjYHIkkquZ53qEiNvVvAG6jx1Ggvd4bqYDaEkpwVpqGIMMyNNaZtn2tjf1 PI4eOv+x9lu1zn1/Df62VOHlLzRXmwUy9lBQiV05Y60N32TNMJPjYfa7kxynMpKNC5gKr5InHb5Z9 beUL0wZha8sfgYJA2xhYpLaabVVLAHTnf1iXZxYOkLPCGCunxsqglV1XHs4zkU6zwL28SDFwr+eSp ZmQ4rYug==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tAo-002yEG-MX; Thu, 15 Jul 2021 04:40:47 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 079/138] mm/filemap: Add readahead_folio() Date: Thu, 15 Jul 2021 04:36:05 +0100 Message-Id: <20210715033704.692967-80-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=vsjO4l71; spf=none (imf30.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: zhqi8peksydiazn3udh9ocun7rpxjiie X-Rspamd-Queue-Id: 85819E00181A X-HE-Tag: 1626324109-761222 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The pointers stored in the page cache are folios, by definition. This change comes with a behaviour change -- callers of readahead_folio() are no longer required to put the page reference themselves. This matches how readpage works, rather than matching how readpages used to work. 
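For illustration only (not part of the patch): with this behaviour change, a ->readahead implementation might loop as in the hedged sketch below. example_submit_read() is a hypothetical helper that starts the I/O and unlocks the folio on completion; note there is no folio_put(), because readahead_folio() has already dropped the reference for us.

static void example_readahead(struct readahead_control *ractl)
{
	struct folio *folio;

	/* No folio_put() here: readahead_folio() already dropped the ref. */
	while ((folio = readahead_folio(ractl)) != NULL)
		example_submit_read(folio);	/* unlocks when I/O completes */
}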
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/pagemap.h | 53 +++++++++++++++++++++++++++++------------ 1 file changed, 38 insertions(+), 15 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 18c06c3e42c3..bd4daebaf70e 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -988,33 +988,56 @@ void page_cache_async_readahead(struct address_space *mapping, page_cache_async_ra(&ractl, page, req_count); } +static inline struct folio *__readahead_folio(struct readahead_control *ractl) +{ + struct folio *folio; + + BUG_ON(ractl->_batch_count > ractl->_nr_pages); + ractl->_nr_pages -= ractl->_batch_count; + ractl->_index += ractl->_batch_count; + + if (!ractl->_nr_pages) { + ractl->_batch_count = 0; + return NULL; + } + + folio = xa_load(&ractl->mapping->i_pages, ractl->_index); + VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + ractl->_batch_count = folio_nr_pages(folio); + + return folio; +} + /** * readahead_page - Get the next page to read. - * @rac: The current readahead request. + * @ractl: The current readahead request. * * Context: The page is locked and has an elevated refcount. The caller * should decrease the refcount once the page has been submitted for I/O * and unlock the page once all I/O to that page has completed. * Return: A pointer to the next page, or %NULL if we are done. */ -static inline struct page *readahead_page(struct readahead_control *rac) +static inline struct page *readahead_page(struct readahead_control *ractl) { - struct page *page; + struct folio *folio = __readahead_folio(ractl); - BUG_ON(rac->_batch_count > rac->_nr_pages); - rac->_nr_pages -= rac->_batch_count; - rac->_index += rac->_batch_count; - - if (!rac->_nr_pages) { - rac->_batch_count = 0; - return NULL; - } + return &folio->page; +} - page = xa_load(&rac->mapping->i_pages, rac->_index); - VM_BUG_ON_PAGE(!PageLocked(page), page); - rac->_batch_count = thp_nr_pages(page); +/** + * readahead_folio - Get the next folio to read. + * @ractl: The current readahead request. + * + * Context: The folio is locked. The caller should unlock the folio once + * all I/O to that folio has completed. + * Return: A pointer to the next folio, or %NULL if we are done.
+ */ +static inline struct folio *readahead_folio(struct readahead_control *ractl) +{ + struct folio *folio = __readahead_folio(ractl); - return page; + folio_put(folio); + return folio; } static inline unsigned int __readahead_batch(struct readahead_control *rac, From patchwork Thu Jul 15 03:36:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378737 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16895C07E96 for ; Thu, 15 Jul 2021 04:42:29 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B309D61360 for ; Thu, 15 Jul 2021 04:42:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B309D61360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 166048D0056; Thu, 15 Jul 2021 00:42:29 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 13D728D004B; Thu, 15 Jul 2021 00:42:29 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0050D8D0056; Thu, 15 Jul 2021 00:42:28 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0198.hostedemail.com [216.40.44.198]) by kanga.kvack.org (Postfix) with ESMTP id D371D8D004B for ; Thu, 15 Jul 2021 00:42:28 -0400 (EDT) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id C1FDC1801A0A4 for ; Thu, 15 Jul 2021 04:42:27 +0000 (UTC) X-FDA: 78363575934.24.E39BBE0 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf17.hostedemail.com (Postfix) with ESMTP id 4E526F00038F for ; Thu, 15 Jul 2021 04:42:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=zKOP8KwzfNUOpMbEVjQJMJQRfPiAH79Vbkb431wFw7E=; b=jFSWcscw2I8pAXyDcBZ0gxD/Xq UuSJfsk5ZftFXeM7sdje6zHZMzDNi/tGrW9JDQNEwfYgTaFTodpPjy5kuSfeK4vKSaiHPLEsMX/NW h9MCV96dPf8io2B+VaTwmgSqp8RNNaG2DE9vc49OWOQ4N8qdTVZtOw0gjcATvQDqQAuzrv8I9TAt3 J1C/b51AzS6x8UMPtHL8aq+4LRr3vXM7yrGgKWfx5t+NWb3AEdEpVP2XcO4KSZMQe5Gwl7EqG8Sb1 luyKKnXTyg36BDmKax9WqZmOAvx46lcqinKhbqwwxfqJnirBgnkDtIhmTEiKpeAOpvsruC/9empmk IZW3iXpg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tBW-002yG8-Tu; Thu, 15 Jul 2021 04:41:36 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 080/138] mm/workingset: Convert workingset_refault() to take a folio Date: Thu, 15 Jul 2021 04:36:06 +0100 Message-Id: 
<20210715033704.692967-81-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=jFSWcscw; spf=none (imf17.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: ripm8uax8yynr5wpagd9ztx6rqg9jjdq X-Rspamd-Queue-Id: 4E526F00038F X-HE-Tag: 1626324147-423867 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This nets us 178 bytes of savings from removing calls to compound_head. The three callers all grow a little, but each of them will be converted to use folios soon, so that's fine. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/linux/swap.h | 4 ++-- mm/filemap.c | 2 +- mm/memory.c | 3 ++- mm/swap.c | 7 +++---- mm/swap_state.c | 2 +- mm/workingset.c | 34 +++++++++++++++++----------------- 6 files changed, 26 insertions(+), 26 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index c7a4c0a5863d..5e01675af7ab 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -329,7 +329,7 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio) /* linux/mm/workingset.c */ void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages); void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg); -void workingset_refault(struct page *page, void *shadow); +void workingset_refault(struct folio *folio, void *shadow); void workingset_activation(struct folio *folio); /* Only track the nodes of mappings with shadow entries */ @@ -350,7 +350,7 @@ extern unsigned long nr_free_buffer_pages(void); /* linux/mm/swap.c */ extern void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages); -extern void lru_note_cost_page(struct page *); +extern void lru_note_cost_folio(struct folio *); extern void lru_cache_add(struct page *); void mark_page_accessed(struct page *); void folio_mark_accessed(struct folio *); diff --git a/mm/filemap.c b/mm/filemap.c index a74c69a938ab..6bec995e69bd 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -981,7 +981,7 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping, */ WARN_ON_ONCE(PageActive(page)); if (!(gfp_mask & __GFP_WRITE) && shadow) - workingset_refault(page, shadow); + workingset_refault(page_folio(page), shadow); lru_cache_add(page); } return ret; diff --git a/mm/memory.c b/mm/memory.c index 614418e26e2c..627e7836ade6 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3538,7 +3538,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) shadow = get_shadow_from_swap_cache(entry); if (shadow) - workingset_refault(page, shadow); + workingset_refault(page_folio(page), + shadow); lru_cache_add(page); diff --git a/mm/swap.c b/mm/swap.c index d32007fe23b3..6e80f30d2e5e 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -315,11 +315,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) } while ((lruvec = parent_lruvec(lruvec))); } -void lru_note_cost_page(struct page *page) +void lru_note_cost_folio(struct folio *folio) { - struct folio *folio = page_folio(page); - lru_note_cost(folio_lruvec(folio), - 
page_is_file_lru(page), thp_nr_pages(page)); + lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio), + folio_nr_pages(folio)); } static void __folio_activate(struct folio *folio, struct lruvec *lruvec) diff --git a/mm/swap_state.c b/mm/swap_state.c index c56aa9ac050d..1a29b4f98208 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -498,7 +498,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, mem_cgroup_swapin_uncharge_swap(entry); if (shadow) - workingset_refault(page, shadow); + workingset_refault(page_folio(page), shadow); /* Caller will initiate read into locked page */ lru_cache_add(page); diff --git a/mm/workingset.c b/mm/workingset.c index 39bb60d50217..10830211a187 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -273,17 +273,17 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg) } /** - * workingset_refault - evaluate the refault of a previously evicted page - * @page: the freshly allocated replacement page - * @shadow: shadow entry of the evicted page + * workingset_refault - evaluate the refault of a previously evicted folio + * @folio: the freshly allocated replacement folio + * @shadow: shadow entry of the evicted folio * * Calculates and evaluates the refault distance of the previously - * evicted page in the context of the node and the memcg whose memory + * evicted folio in the context of the node and the memcg whose memory * pressure caused the eviction. */ -void workingset_refault(struct page *page, void *shadow) +void workingset_refault(struct folio *folio, void *shadow) { - bool file = page_is_file_lru(page); + bool file = folio_is_file_lru(folio); struct mem_cgroup *eviction_memcg; struct lruvec *eviction_lruvec; unsigned long refault_distance; @@ -301,10 +301,10 @@ void workingset_refault(struct page *page, void *shadow) rcu_read_lock(); /* * Look up the memcg associated with the stored ID. It might - * have been deleted since the page's eviction. + * have been deleted since the folio's eviction. * * Note that in rare events the ID could have been recycled - * for a new cgroup that refaults a shared page. This is + * for a new cgroup that refaults a shared folio. This is * impossible to tell from the available data. However, this * should be a rare and limited disturbance, and activations * are always speculative anyway. Ultimately, it's the aging @@ -340,14 +340,14 @@ void workingset_refault(struct page *page, void *shadow) refault_distance = (refault - eviction) & EVICTION_MASK; /* - * The activation decision for this page is made at the level + * The activation decision for this folio is made at the level * where the eviction occurred, as that is where the LRU order - * during page reclaim is being determined. + * during folio reclaim is being determined. * - * However, the cgroup that will own the page is the one that + * However, the cgroup that will own the folio is the one that * is actually experiencing the refault event.
*/ - memcg = page_memcg(page); + memcg = folio_memcg(folio); lruvec = mem_cgroup_lruvec(memcg, pgdat); inc_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file); @@ -375,15 +375,15 @@ void workingset_refault(struct page *page, void *shadow) if (refault_distance > workingset_size) goto out; - SetPageActive(page); - workingset_age_nonresident(lruvec, thp_nr_pages(page)); + folio_set_active(folio); + workingset_age_nonresident(lruvec, folio_nr_pages(folio)); inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + file); - /* Page was active prior to eviction */ + /* Folio was active prior to eviction */ if (workingset) { - SetPageWorkingset(page); + folio_set_workingset(folio); /* XXX: Move to lru_cache_add() when it supports new vs putback */ - lru_note_cost_page(page); + lru_note_cost_folio(folio); inc_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file); } out: From patchwork Thu Jul 15 03:36:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27492C07E96 for ; Thu, 15 Jul 2021 04:43:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B673E61360 for ; Thu, 15 Jul 2021 04:43:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B673E61360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 13CFB8D0057; Thu, 15 Jul 2021 00:43:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0ED1A8D004B; Thu, 15 Jul 2021 00:43:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EA8AC8D0057; Thu, 15 Jul 2021 00:43:47 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0162.hostedemail.com [216.40.44.162]) by kanga.kvack.org (Postfix) with ESMTP id BD4828D004B for ; Thu, 15 Jul 2021 00:43:47 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id B4F10182AB213 for ; Thu, 15 Jul 2021 04:43:46 +0000 (UTC) X-FDA: 78363579252.16.AD2FB36 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id 71412900009F for ; Thu, 15 Jul 2021 04:43:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=iQqpnTwxqRXjD5hes7UseJQTEUIrkCuby3ucZ182rzs=; b=HSemhOs8qSh2odWZyzlZyc6k0t pdrnQ0jZetOK93lZIS4GqKdcF0ZyPZ79pZIygckXG0ejxbNu3ilkL6GXk/D6KAb56LkIFr+KyYNoF 1LaPgydRnMB93/n20oGLf1aAUp0WabmuZjyat32s/H15rz+Rf02BLC1xfuuWa78l5oXn248VONoyt 
u40VH2hARHWP6BvNlrSljfi9bJBgJ+0kWUeIk3noMALA0zI+xYe7I6jRJoUdf05OEoNw1M5Q6IVYF m+v+Zoqgg1XyoMb2beO+xwJ36dwGCEAweguvpn+XYktu8IjDKkUXCkjAdNMvbeG3sfLZxNuqoVLk8 0nSvwM/w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tCA-002yHx-OE; Thu, 15 Jul 2021 04:42:08 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 081/138] mm: Add folio_evictable() Date: Thu, 15 Jul 2021 04:36:07 +0100 Message-Id: <20210715033704.692967-82-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 71412900009F X-Stat-Signature: 1n6ogjfo9tguxywbkuimfiqk1hq44qwd Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=HSemhOs8; dmarc=none; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626324226-138210 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is the folio equivalent of page_evictable(). Unfortunately, it's different from !folio_test_unevictable(), but I think it's used in places where you have to be a VM expert and can reasonably be expected to know the difference. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Vlastimil Babka --- mm/internal.h | 27 +++++++++++++++++++-------- 1 file changed, 19 insertions(+), 8 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 08e8a28994d1..0910efec5821 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -72,17 +72,28 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start, pgoff_t end, struct pagevec *pvec, pgoff_t *indices); /** - * page_evictable - test whether a page is evictable - * @page: the page to test + * folio_evictable - Test whether a folio is evictable. + * @folio: The folio to test. * - * Test whether page is evictable--i.e., should be placed on active/inactive - * lists vs unevictable list. - * - * Reasons page might not be evictable: - * (1) page's mapping marked unevictable - * (2) page is part of an mlocked VMA + * Test whether @folio is evictable -- i.e., should be placed on + * active/inactive lists vs unevictable list. * + * Reasons folio might not be evictable: + * 1. folio's mapping marked unevictable + * 2. 
One of the pages in the folio is part of an mlocked VMA */ +static inline bool folio_evictable(struct folio *folio) +{ + bool ret; + + /* Prevent address_space of inode and swap cache from being freed */ + rcu_read_lock(); + ret = !mapping_unevictable(folio_mapping(folio)) && + !folio_test_mlocked(folio); + rcu_read_unlock(); + return ret; +} + static inline bool page_evictable(struct page *page) { bool ret; From patchwork Thu Jul 15 03:36:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A525C07E96 for ; Thu, 15 Jul 2021 04:44:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CE1A36128D for ; Thu, 15 Jul 2021 04:44:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CE1A36128D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2E9E28D0058; Thu, 15 Jul 2021 00:44:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2C0498D004B; Thu, 15 Jul 2021 00:44:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 188A48D0058; Thu, 15 Jul 2021 00:44:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0037.hostedemail.com [216.40.44.37]) by kanga.kvack.org (Postfix) with ESMTP id E06D98D004B for ; Thu, 15 Jul 2021 00:44:32 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C91A7183F798D for ; Thu, 15 Jul 2021 04:44:31 +0000 (UTC) X-FDA: 78363581142.19.D22FC9C Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf19.hostedemail.com (Postfix) with ESMTP id 91668B0001A8 for ; Thu, 15 Jul 2021 04:44:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=8eTHk8glw+F+ekL5Cv+Dll9+ELMBL3Xwu0ZsSBcDxyw=; b=XFayHgv6hXG/3Q/T74kaITkNiC grZ4R4wACJx5+dNn8E5Wl/Cvb6MJxkSOuFOFSzRQ/tF3hck7QK7LXNWoyzJ6cRdSMgir78+w2Snes uwjLVqhSo3wG+SkSkgW3BdkRx23YlXNeg/hdl97ccfkAiXvP0OXn7MUl6xT5MU7VaAkdO9x4xNR7O xF6zRCAOd/DeiYYVW78qbdRr+ImSCk8T53/JdZMMDbo1cV+Q+BJhCrAdV7elzzy8pB2WwSOGTRqhj dOdg0tXavHzTllRPWHAWvXSyfMHhioC8hZk5Q8TOOP58yaMp4myZCmfA1ekS5tlokSBTldZnHMBQP Q6jqTL3g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tDB-002yKj-Sa; Thu, 15 Jul 2021 04:43:17 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 
082/138] mm/lru: Convert __pagevec_lru_add_fn to take a folio Date: Thu, 15 Jul 2021 04:36:08 +0100 Message-Id: <20210715033704.692967-83-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=XFayHgv6; spf=none (imf19.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: ebkccx44bjom4sjkimnnyy34fe1acqk5 X-Rspamd-Queue-Id: 91668B0001A8 X-Rspamd-Server: rspam01 X-HE-Tag: 1626324271-854163 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This saves five calls to compound_head(), totalling 60 bytes of text. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: David Howells Acked-by: Vlastimil Babka --- include/trace/events/pagemap.h | 32 ++++++++++++++++---------------- mm/swap.c | 34 +++++++++++++++++----------------- 2 files changed, 33 insertions(+), 33 deletions(-) diff --git a/include/trace/events/pagemap.h b/include/trace/events/pagemap.h index 1fd0185d66e8..171524d3526d 100644 --- a/include/trace/events/pagemap.h +++ b/include/trace/events/pagemap.h @@ -16,38 +16,38 @@ #define PAGEMAP_MAPPEDDISK 0x0020u #define PAGEMAP_BUFFERS 0x0040u -#define trace_pagemap_flags(page) ( \ - (PageAnon(page) ? PAGEMAP_ANONYMOUS : PAGEMAP_FILE) | \ - (page_mapped(page) ? PAGEMAP_MAPPED : 0) | \ - (PageSwapCache(page) ? PAGEMAP_SWAPCACHE : 0) | \ - (PageSwapBacked(page) ? PAGEMAP_SWAPBACKED : 0) | \ - (PageMappedToDisk(page) ? PAGEMAP_MAPPEDDISK : 0) | \ - (page_has_private(page) ? PAGEMAP_BUFFERS : 0) \ +#define trace_pagemap_flags(folio) ( \ + (folio_test_anon(folio) ? PAGEMAP_ANONYMOUS : PAGEMAP_FILE) | \ + (folio_mapped(folio) ? PAGEMAP_MAPPED : 0) | \ + (folio_test_swapcache(folio) ? PAGEMAP_SWAPCACHE : 0) | \ + (folio_test_swapbacked(folio) ? PAGEMAP_SWAPBACKED : 0) | \ + (folio_test_mappedtodisk(folio) ? PAGEMAP_MAPPEDDISK : 0) | \ + (folio_test_private(folio) ? PAGEMAP_BUFFERS : 0) \ ) TRACE_EVENT(mm_lru_insertion, - TP_PROTO(struct page *page), + TP_PROTO(struct folio *folio), - TP_ARGS(page), + TP_ARGS(folio), TP_STRUCT__entry( - __field(struct page *, page ) + __field(struct folio *, folio ) __field(unsigned long, pfn ) __field(enum lru_list, lru ) __field(unsigned long, flags ) ), TP_fast_assign( - __entry->page = page; - __entry->pfn = page_to_pfn(page); - __entry->lru = folio_lru_list(page_folio(page)); - __entry->flags = trace_pagemap_flags(page); + __entry->folio = folio; + __entry->pfn = folio_pfn(folio); + __entry->lru = folio_lru_list(folio); + __entry->flags = trace_pagemap_flags(folio); ), /* Flag format is based on page-types.c formatting for pagemap */ - TP_printk("page=%p pfn=0x%lx lru=%d flags=%s%s%s%s%s%s", - __entry->page, + TP_printk("folio=%p pfn=0x%lx lru=%d flags=%s%s%s%s%s%s", + __entry->folio, __entry->pfn, __entry->lru, __entry->flags & PAGEMAP_MAPPED ? 
"M" : " ", diff --git a/mm/swap.c b/mm/swap.c index 6e80f30d2e5e..89d4471ceb80 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -1001,17 +1001,18 @@ void __pagevec_release(struct pagevec *pvec) } EXPORT_SYMBOL(__pagevec_release); -static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) +static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec) { - int was_unevictable = TestClearPageUnevictable(page); - int nr_pages = thp_nr_pages(page); + int was_unevictable = folio_test_clear_unevictable(folio); + int nr_pages = folio_nr_pages(folio); - VM_BUG_ON_PAGE(PageLRU(page), page); + VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); /* - * Page becomes evictable in two ways: + * Folio becomes evictable in two ways: * 1) Within LRU lock [munlock_vma_page() and __munlock_pagevec()]. - * 2) Before acquiring LRU lock to put the page to correct LRU and then + * 2) Before acquiring LRU lock to put the folio on the correct LRU + * and then * a) do PageLRU check with lock [check_move_unevictable_pages] * b) do PageLRU check before lock [clear_page_mlock] * @@ -1020,10 +1021,10 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) * * #0: __pagevec_lru_add_fn #1: clear_page_mlock * - * SetPageLRU() TestClearPageMlocked() + * folio_set_lru() folio_test_clear_mlocked() * smp_mb() // explicit ordering // above provides strict * // ordering - * PageMlocked() PageLRU() + * folio_test_mlocked() folio_test_lru() * * * if '#1' does not observe setting of PG_lru by '#0' and fails @@ -1034,21 +1035,21 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec) * looking at the same page) and the evictable page will be stranded * in an unevictable LRU. */ - SetPageLRU(page); + folio_set_lru(folio); smp_mb__after_atomic(); - if (page_evictable(page)) { + if (folio_evictable(folio)) { if (was_unevictable) __count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages); } else { - ClearPageActive(page); - SetPageUnevictable(page); + folio_clear_active(folio); + folio_set_unevictable(folio); if (!was_unevictable) __count_vm_events(UNEVICTABLE_PGCULLED, nr_pages); } - add_page_to_lru_list(page, lruvec); - trace_mm_lru_insertion(page); + lruvec_add_folio(lruvec, folio); + trace_mm_lru_insertion(folio); } /* @@ -1062,11 +1063,10 @@ void __pagevec_lru_add(struct pagevec *pvec) unsigned long flags = 0; for (i = 0; i < pagevec_count(pvec); i++) { - struct page *page = pvec->pages[i]; - struct folio *folio = page_folio(page); + struct folio *folio = page_folio(pvec->pages[i]); lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags); - __pagevec_lru_add_fn(page, lruvec); + __pagevec_lru_add_fn(folio, lruvec); } if (lruvec) unlock_page_lruvec_irqrestore(lruvec, flags); From patchwork Thu Jul 15 03:36:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378761 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 924AAC07E96 for ; Thu, 15 Jul 2021 04:44:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) 
by mail.kernel.org (Postfix) with ESMTP id 37211611AB for ; Thu, 15 Jul 2021 04:44:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 37211611AB Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8DCA88D0059; Thu, 15 Jul 2021 00:44:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 88BD28D004B; Thu, 15 Jul 2021 00:44:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 72D068D0059; Thu, 15 Jul 2021 00:44:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0186.hostedemail.com [216.40.44.186]) by kanga.kvack.org (Postfix) with ESMTP id 50BE28D004B for ; Thu, 15 Jul 2021 00:44:51 -0400 (EDT) Received: from smtpin32.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 4906C231A8 for ; Thu, 15 Jul 2021 04:44:50 +0000 (UTC) X-FDA: 78363581940.32.C8E02A2 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id 0683AF000095 for ; Thu, 15 Jul 2021 04:44:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=mn7q/52I4w3t9ii2jzCwCXbdd/SQ2Ju45FWtqep1dhc=; b=lXAsWjCocKB9Q5BX+yV68fOS3y KABL3fkYPal0NMHPWum8euUfcMt5FdZCVSx1gGVhxMlHcO2yIAWIiphPSJM3wMnQC10y8Gl46mJFk C/5Pyxx9Ap1Dalt+tErEvXQvsM6ZdL3EgSsIuR2P8ugJu1Ko16MJgUYxjvLIXgY3qGQAkNALnRffu LYW0FsGsN4uREMTK9cbZ/CcbmyFjvOgna8Cg4nKJjlLktwokx1qYOWP6VMHMr45ab0Vwdug50Hqa5 rgIxdqk/vPt009VF+LSPcgihUlUauDsIRzXQZhvcF7n++pWmSmh+bv/VnoaLXvkwMNM35nNBi05hF X1GynDNw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tDi-002yO4-AU; Thu, 15 Jul 2021 04:43:45 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig Subject: [PATCH v14 083/138] mm/lru: Add folio_add_lru() Date: Thu, 15 Jul 2021 04:36:09 +0100 Message-Id: <20210715033704.692967-84-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=lXAsWjCo; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: cp38iyrsii7gyfj714dgmi1sme6ctfuu X-Rspamd-Queue-Id: 0683AF000095 X-HE-Tag: 1626324289-203866 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Reimplement lru_cache_add() as a wrapper around folio_add_lru(). Saves 159 bytes of kernel text due to removing calls to compound_head(). 
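For illustration only (not part of the patch): a caller that has just set up a new folio might add it to the LRU as in the hedged sketch below. example_prepare_folio() is a hypothetical stand-in for charging the folio and inserting it into the page cache; folio_add_lru() takes its own reference and queues the folio on a per-CPU pagevec, while unconverted callers keep using the lru_cache_add() wrapper from mm/folio-compat.c.

static void example_add_new_folio(struct folio *folio)
{
	example_prepare_folio(folio);
	/* LRU placement is deferred until the pagevec is drained. */
	folio_add_lru(folio);
}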
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/swap.h |  1 +
 mm/folio-compat.c    |  6 ++++++
 mm/swap.c            | 22 +++++++++++-----------
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5e01675af7ab..81801ba78b1e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -351,6 +351,7 @@ extern unsigned long nr_free_buffer_pages(void);
 extern void lru_note_cost(struct lruvec *lruvec, bool file,
             unsigned int nr_pages);
 extern void lru_note_cost_folio(struct folio *);
+extern void folio_add_lru(struct folio *);
 extern void lru_cache_add(struct page *);
 void mark_page_accessed(struct page *);
 void folio_mark_accessed(struct folio *);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index c1e01bc36d32..6de3cd78a4ae 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -102,3 +102,9 @@ bool redirty_page_for_writepage(struct writeback_control *wbc,
     return folio_redirty_for_writepage(wbc, page_folio(page));
 }
 EXPORT_SYMBOL(redirty_page_for_writepage);
+
+void lru_cache_add(struct page *page)
+{
+    folio_add_lru(page_folio(page));
+}
+EXPORT_SYMBOL(lru_cache_add);
diff --git a/mm/swap.c b/mm/swap.c
index 89d4471ceb80..6f382abeccf9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -459,29 +459,29 @@ void folio_mark_accessed(struct folio *folio)
 EXPORT_SYMBOL(folio_mark_accessed);
 
 /**
- * lru_cache_add - add a page to a page list
- * @page: the page to be added to the LRU.
+ * folio_add_lru - Add a folio to an LRU list.
+ * @folio: The folio to be added to the LRU.
  *
- * Queue the page for addition to the LRU via pagevec. The decision on whether
+ * Queue the folio for addition to the LRU. The decision on whether
  * to add the page to the [in]active [file|anon] list is deferred until the
- * pagevec is drained. This gives a chance for the caller of lru_cache_add()
- * have the page added to the active list using mark_page_accessed().
+ * pagevec is drained. This gives a chance for the caller of folio_add_lru()
+ * have the folio added to the active list using folio_mark_accessed().
  */
-void lru_cache_add(struct page *page)
+void folio_add_lru(struct folio *folio)
 {
     struct pagevec *pvec;
 
-    VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
-    VM_BUG_ON_PAGE(PageLRU(page), page);
+    VM_BUG_ON_FOLIO(folio_test_active(folio) && folio_test_unevictable(folio), folio);
+    VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
-    get_page(page);
+    folio_get(folio);
     local_lock(&lru_pvecs.lock);
     pvec = this_cpu_ptr(&lru_pvecs.lru_add);
-    if (pagevec_add_and_need_flush(pvec, page))
+    if (pagevec_add_and_need_flush(pvec, &folio->page))
         __pagevec_lru_add(pvec);
     local_unlock(&lru_pvecs.lock);
 }
-EXPORT_SYMBOL(lru_cache_add);
+EXPORT_SYMBOL(folio_add_lru);
 
 /**
  * lru_cache_add_inactive_or_unevictable
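[A minimal caller sketch, not part of the patch; the wrapper function and
the freshly-created folio are hypothetical. folio_add_lru() takes its own
reference via folio_get(), and the folio must not already be on an LRU:]

    /* Illustration only: queue a new folio for LRU insertion. */
    static void example_lru_insert(struct folio *folio)
    {
        /* Held by the per-CPU lru_add pagevec until it drains. */
        folio_add_lru(folio);
    }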
From patchwork Thu Jul 15 03:36:10 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 084/138] mm/page_alloc: Add folio allocation functions
Date: Thu, 15 Jul 2021 04:36:10 +0100
Message-Id: <20210715033704.692967-85-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

The __folio_alloc(), __folio_alloc_node() and folio_alloc() functions
are mostly for type safety, but they also ensure that the page
allocator allocates a compound page and initialises the deferred list
if the page is large enough to have one.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Vlastimil Babka
---
 include/linux/gfp.h | 16 ++++++++++++++++
 mm/mempolicy.c      | 10 ++++++++++
 mm/page_alloc.c     | 12 ++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index dc5ff40608ce..3745efd21cf6 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -523,6 +523,8 @@ static inline void arch_alloc_page(struct page *page, int order) { }
 
 struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
         nodemask_t *nodemask);
+struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+        nodemask_t *nodemask);
 
 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
                 nodemask_t *nodemask, int nr_pages,
@@ -564,6 +566,15 @@ __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
     return __alloc_pages(gfp_mask, order, nid, NULL);
 }
 
+static inline
+struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
+{
+    VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+    VM_WARN_ON((gfp & __GFP_THISNODE) && !node_online(nid));
+
+    return __folio_alloc(gfp, order, nid, NULL);
+}
+
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
  * prefer the current CPU's closest node. Otherwise node must be valid and
@@ -580,6 +591,7 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 
 #ifdef CONFIG_NUMA
 struct page *alloc_pages(gfp_t gfp, unsigned int order);
+struct folio *folio_alloc(gfp_t gfp, unsigned order);
 extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
             struct vm_area_struct *vma, unsigned long addr,
             int node, bool hugepage);
@@ -590,6 +602,10 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
     return alloc_pages_node(numa_node_id(), gfp_mask, order);
 }
+static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
+{
+    return __folio_alloc_node(gfp, order, numa_node_id());
+}
 #define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
     alloc_pages(gfp_mask, order)
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e32360e90274..95d0cf05f7ca 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2249,6 +2249,16 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
 }
 EXPORT_SYMBOL(alloc_pages);
 
+struct folio *folio_alloc(gfp_t gfp, unsigned order)
+{
+    struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+    if (page && order > 1)
+        prep_transhuge_page(page);
+    return (struct folio *)page;
+}
+EXPORT_SYMBOL(folio_alloc);
+
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
 {
     struct mempolicy *pol = mpol_dup(vma_policy(src));
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d72a0d9d4184..d03145671934 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5399,6 +5399,18 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 }
 EXPORT_SYMBOL(__alloc_pages);
 
+struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+        nodemask_t *nodemask)
+{
+    struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
+            preferred_nid, nodemask);
+
+    if (page && order > 1)
+        prep_transhuge_page(page);
+    return (struct folio *)page;
+}
+EXPORT_SYMBOL(__folio_alloc);
+
 /*
  * Common helper functions. Never use with __GFP_HIGHMEM because the returned
  * address cannot represent highmem pages. Use alloc_pages and then kmap if
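[An illustrative sketch, not from the series: allocate a four-page folio
on the current node and free it again. folio_alloc() adds __GFP_COMP
itself, and for order > 1 prep_transhuge_page() sets up the deferred
split list:]

    /* Illustration only. */
    struct folio *folio = folio_alloc(GFP_KERNEL, 2);

    if (folio) {
        /* One compound page covering 1 << 2 base pages. */
        folio_put(folio);
    }

[Callers that want a specific node can use __folio_alloc_node(), and
callers with a nodemask can call __folio_alloc() directly.]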
From patchwork Thu Jul 15 03:36:11 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 085/138] mm/filemap: Add filemap_alloc_folio
Date: Thu, 15 Jul 2021 04:36:11 +0100
Message-Id: <20210715033704.692967-86-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Reimplement __page_cache_alloc as a wrapper around filemap_alloc_folio
to allow filesystems to be converted at our leisure. Increases
kernel text size by 133 bytes, mostly in cachefiles_read_backing_file().
pagecache_get_page() shrinks by 32 bytes, though.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/pagemap.h | 11 ++++++++---
 mm/filemap.c            | 14 +++++++-------
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index bd4daebaf70e..848acb44ac80 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -262,14 +262,19 @@ static inline void *detach_page_private(struct page *page)
 }
 
 #ifdef CONFIG_NUMA
-extern struct page *__page_cache_alloc(gfp_t gfp);
+struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
 #else
-static inline struct page *__page_cache_alloc(gfp_t gfp)
+static inline struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
 {
-    return alloc_pages(gfp, 0);
+    return folio_alloc(gfp, order);
 }
 #endif
 
+static inline struct page *__page_cache_alloc(gfp_t gfp)
+{
+    return &filemap_alloc_folio(gfp, 0)->page;
+}
+
 static inline struct page *page_cache_alloc(struct address_space *x)
 {
     return __page_cache_alloc(mapping_gfp_mask(x));
diff --git a/mm/filemap.c b/mm/filemap.c
index 6bec995e69bd..54989a32d6a8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -989,24 +989,24 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
 
 #ifdef CONFIG_NUMA
-struct page *__page_cache_alloc(gfp_t gfp)
+struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
 {
     int n;
-    struct page *page;
+    struct folio *folio;
 
     if (cpuset_do_page_mem_spread()) {
         unsigned int cpuset_mems_cookie;
         do {
             cpuset_mems_cookie = read_mems_allowed_begin();
             n = cpuset_mem_spread_node();
-            page = __alloc_pages_node(n, gfp, 0);
-        } while (!page && read_mems_allowed_retry(cpuset_mems_cookie));
+            folio = __folio_alloc_node(gfp, order, n);
+        } while (!folio && read_mems_allowed_retry(cpuset_mems_cookie));
 
-        return page;
+        return folio;
     }
-    return alloc_pages(gfp, 0);
+    return folio_alloc(gfp, order);
 }
-EXPORT_SYMBOL(__page_cache_alloc);
+EXPORT_SYMBOL(filemap_alloc_folio);
 #endif
 
 /*
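[A hedged sketch of the intended use; the `mapping' variable is assumed
from the surrounding filesystem code. Allocate an order-0 folio for the
page cache, respecting cpuset memory spreading on NUMA:]

    /* Illustration only. */
    struct folio *folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);

    if (!folio)
        return -ENOMEM;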
From patchwork Thu Jul 15 03:36:12 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 086/138] mm/filemap: Add filemap_add_folio()
Date: Thu, 15 Jul 2021 04:36:12 +0100
Message-Id: <20210715033704.692967-87-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Convert __add_to_page_cache_locked() into __filemap_add_folio().
Add an assertion to it that (for !hugetlbfs), the folio is naturally
aligned within the file. Move the prototype from mm.h to pagemap.h.
Convert add_to_page_cache_lru() into filemap_add_folio(). Add a
compatibility wrapper for unconverted callers.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/mm.h      |  7 -----
 include/linux/pagemap.h | 10 ++++--
 kernel/bpf/verifier.c   |  2 +-
 mm/filemap.c            | 70 ++++++++++++++++++++---------------------
 mm/folio-compat.c       |  7 +++++
 5 files changed, 50 insertions(+), 46 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4803f2c01367..99f5f736be64 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -213,13 +213,6 @@ int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
         loff_t *);
 int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
         loff_t *);
 
-/*
- * Any attempt to mark this function as static leads to build failure
- * when CONFIG_DEBUG_INFO_BTF is enabled because __add_to_page_cache_locked()
- * is referred to by BPF code. This must be visible for error injection.
- */
-int __add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-        pgoff_t index, gfp_t gfp, void **shadowp);
-
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 848acb44ac80..19b2e3bea14c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -877,9 +877,11 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 }
 
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-        pgoff_t index, gfp_t gfp_mask);
+        pgoff_t index, gfp_t gfp);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-        pgoff_t index, gfp_t gfp_mask);
+        pgoff_t index, gfp_t gfp);
+int filemap_add_folio(struct address_space *mapping, struct folio *folio,
+        pgoff_t index, gfp_t gfp);
 extern void delete_from_page_cache(struct page *page);
 extern void __delete_from_page_cache(struct page *page, void *shadow);
 void replace_page_cache_page(struct page *old, struct page *new);
@@ -904,6 +906,10 @@ static inline int add_to_page_cache(struct page *page,
     return error;
 }
 
+/* Must be non-static for BPF error injection */
+int __filemap_add_folio(struct address_space *mapping, struct folio *folio,
+        pgoff_t index, gfp_t gfp, void **shadowp);
+
 /**
  * struct readahead_control - Describes a readahead request.
  *
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 42a4063de7cd..f0a4f8b818e4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13015,7 +13015,7 @@ BTF_SET_START(btf_non_sleepable_error_inject)
 /* Three functions below can be called from sleepable and non-sleepable context.
  * Assume non-sleepable from bpf safety point of view.
  */
-BTF_ID(func, __add_to_page_cache_locked)
+BTF_ID(func, __filemap_add_folio)
 BTF_ID(func, should_fail_alloc_page)
 BTF_ID(func, should_failslab)
 BTF_SET_END(btf_non_sleepable_error_inject)
diff --git a/mm/filemap.c b/mm/filemap.c
index 54989a32d6a8..4e34383fd894 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -855,26 +855,25 @@ void replace_page_cache_page(struct page *old, struct page *new)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
-noinline int __add_to_page_cache_locked(struct page *page,
-                    struct address_space *mapping,
-                    pgoff_t offset, gfp_t gfp,
-                    void **shadowp)
+noinline int __filemap_add_folio(struct address_space *mapping,
+        struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
 {
-    XA_STATE(xas, &mapping->i_pages, offset);
-    int huge = PageHuge(page);
+    XA_STATE(xas, &mapping->i_pages, index);
+    int huge = folio_test_hugetlb(folio);
     int error;
     bool charged = false;
 
-    VM_BUG_ON_PAGE(!PageLocked(page), page);
-    VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+    VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+    VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
     mapping_set_update(&xas, mapping);
 
-    get_page(page);
-    page->mapping = mapping;
-    page->index = offset;
+    folio_get(folio);
+    folio->mapping = mapping;
+    folio->index = index;
 
     if (!huge) {
-        error = mem_cgroup_charge(page_folio(page), NULL, gfp);
+        error = mem_cgroup_charge(folio, NULL, gfp);
+        VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
         if (error)
             goto error;
         charged = true;
@@ -886,7 +885,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
         unsigned int order = xa_get_order(xas.xa, xas.xa_index);
         void *entry, *old = NULL;
 
-        if (order > thp_order(page))
+        if (order > folio_order(folio))
             xas_split_alloc(&xas, xa_load(xas.xa, xas.xa_index),
                     order, gfp);
         xas_lock_irq(&xas);
@@ -903,13 +902,13 @@ noinline int __add_to_page_cache_locked(struct page *page,
                 *shadowp = old;
             /* entry may have been split before we acquired lock */
             order = xa_get_order(xas.xa, xas.xa_index);
-            if (order > thp_order(page)) {
+            if (order > folio_order(folio)) {
                 xas_split(&xas, old, order);
                 xas_reset(&xas);
             }
         }
 
-        xas_store(&xas, page);
+        xas_store(&xas, folio);
         if (xas_error(&xas))
             goto unlock;
 
@@ -917,7 +916,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 
         /* hugetlb pages do not participate in page cache accounting */
         if (!huge)
-            __inc_lruvec_page_state(page, NR_FILE_PAGES);
+            __lruvec_stat_add_folio(folio, NR_FILE_PAGES);
 unlock:
         xas_unlock_irq(&xas);
     } while (xas_nomem(&xas, gfp));
@@ -925,19 +924,19 @@ noinline int __add_to_page_cache_locked(struct page *page,
     if (xas_error(&xas)) {
         error = xas_error(&xas);
         if (charged)
-            mem_cgroup_uncharge(page_folio(page));
+            mem_cgroup_uncharge(folio);
         goto error;
     }
 
-    trace_mm_filemap_add_to_page_cache(page);
+    trace_mm_filemap_add_to_page_cache(&folio->page);
     return 0;
 error:
-    page->mapping = NULL;
+    folio->mapping = NULL;
     /* Leave page->index set: truncation relies upon it */
-    put_page(page);
+    folio_put(folio);
     return error;
 }
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
+ALLOW_ERROR_INJECTION(__filemap_add_folio, ERRNO);
 
 /**
  * add_to_page_cache_locked - add a locked page to the pagecache
@@ -954,39 +953,38 @@ ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
         pgoff_t offset, gfp_t gfp_mask)
 {
-    return __add_to_page_cache_locked(page, mapping, offset,
+    return __filemap_add_folio(mapping, page_folio(page), offset,
                       gfp_mask, NULL);
 }
 EXPORT_SYMBOL(add_to_page_cache_locked);
 
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-                pgoff_t offset, gfp_t gfp_mask)
+int filemap_add_folio(struct address_space *mapping, struct folio *folio,
+                pgoff_t index, gfp_t gfp)
 {
     void *shadow = NULL;
     int ret;
 
-    __SetPageLocked(page);
-    ret = __add_to_page_cache_locked(page, mapping, offset,
-                     gfp_mask, &shadow);
+    __folio_set_locked(folio);
+    ret = __filemap_add_folio(mapping, folio, index, gfp, &shadow);
     if (unlikely(ret))
-        __ClearPageLocked(page);
+        __folio_clear_locked(folio);
     else {
         /*
-         * The page might have been evicted from cache only
+         * The folio might have been evicted from cache only
          * recently, in which case it should be activated like
-         * any other repeatedly accessed page.
-         * The exception is pages getting rewritten; evicting other
+         * any other repeatedly accessed folio.
+         * The exception is folios getting rewritten; evicting other
          * data from the working set, only to cache data that will
         * get overwritten with something else, is a waste of memory.
         */
-        WARN_ON_ONCE(PageActive(page));
-        if (!(gfp_mask & __GFP_WRITE) && shadow)
-            workingset_refault(page_folio(page), shadow);
-        lru_cache_add(page);
+        WARN_ON_ONCE(folio_test_active(folio));
+        if (!(gfp & __GFP_WRITE) && shadow)
+            workingset_refault(folio, shadow);
+        folio_add_lru(folio);
     }
     return ret;
 }
-EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
+EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 6de3cd78a4ae..6b19bc4ed6b0 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -108,3 +108,10 @@ void lru_cache_add(struct page *page)
     folio_add_lru(page_folio(page));
 }
 EXPORT_SYMBOL(lru_cache_add);
+
+int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
+        pgoff_t index, gfp_t gfp)
+{
+    return filemap_add_folio(mapping, page_folio(page), index, gfp);
+}
+EXPORT_SYMBOL(add_to_page_cache_lru);
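[Taken together with the previous patches, a converted filesystem might
create and insert a folio as in this sketch; `mapping', `index' and
`gfp' are assumed from the caller. On success the folio is locked, in
the page cache, and queued for the LRU:]

    struct folio *folio = filemap_alloc_folio(gfp, 0);
    int err;

    if (!folio)
        return -ENOMEM;
    err = filemap_add_folio(mapping, folio, index, gfp);
    if (err) {
        folio_put(folio);
        return err;
    }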
From patchwork Thu Jul 15 03:36:13 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v14 087/138] mm/filemap: Convert mapping_get_entry to return a folio
Date: Thu, 15 Jul 2021 04:36:13 +0100
Message-Id: <20210715033704.692967-88-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

The pagecache only contains folios, so indicate that this is definitely
not a tail page. Shrinks mapping_get_entry() by 56 bytes, but grows
pagecache_get_page() by 21 bytes as gcc makes slightly different hot/cold
code decisions. A net reduction of 35 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 mm/filemap.c | 35 ++++++++++++++---------------------
 1 file changed, 14 insertions(+), 21 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 4e34383fd894..85a457c7b7a7 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1755,49 +1755,42 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  * @mapping: the address_space to search
  * @index: The page cache index.
  *
- * Looks up the page cache slot at @mapping & @index. If there is a
- * page cache page, the head page is returned with an increased refcount.
+ * Looks up the page cache entry at @mapping & @index. If it is a folio,
+ * it is returned with an increased refcount. If it is a shadow entry
+ * of a previously evicted folio, or a swap entry from shmem/tmpfs,
+ * it is returned without further action.
  *
- * If the slot holds a shadow entry of a previously evicted page, or a
- * swap entry from shmem/tmpfs, it is returned.
- *
- * Return: The head page or shadow entry, %NULL if nothing is found.
+ * Return: The folio, swap or shadow entry, %NULL if nothing is found.
  */
-static struct page *mapping_get_entry(struct address_space *mapping,
-        pgoff_t index)
+static void *mapping_get_entry(struct address_space *mapping, pgoff_t index)
 {
     XA_STATE(xas, &mapping->i_pages, index);
-    struct page *page;
+    struct folio *folio;
 
     rcu_read_lock();
repeat:
     xas_reset(&xas);
-    page = xas_load(&xas);
-    if (xas_retry(&xas, page))
+    folio = xas_load(&xas);
+    if (xas_retry(&xas, folio))
         goto repeat;
     /*
      * A shadow entry of a recently evicted page, or a swap entry from
      * shmem/tmpfs. Return it without attempting to raise page count.
      */
-    if (!page || xa_is_value(page))
+    if (!folio || xa_is_value(folio))
         goto out;
 
-    if (!page_cache_get_speculative(page))
+    if (!folio_try_get_rcu(folio))
         goto repeat;
 
-    /*
-     * Has the page moved or been split?
-     * This is part of the lockless pagecache protocol. See
-     * include/linux/pagemap.h for details.
-     */
-    if (unlikely(page != xas_reload(&xas))) {
-        put_page(page);
+    if (unlikely(folio != xas_reload(&xas))) {
+        folio_put(folio);
         goto repeat;
     }
out:
     rcu_read_unlock();
 
-    return page;
+    return folio;
 }
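[Because the return type is now void *, callers inside mm/filemap.c
distinguish the three cases with xa_is_value(); a rough sketch of the
consuming pattern, illustration only:]

    void *entry = mapping_get_entry(mapping, index);

    if (xa_is_value(entry)) {
        /* Shadow, swap or DAX entry: no reference was taken. */
    } else if (entry) {
        struct folio *folio = entry;

        /* A real folio, returned with an elevated refcount. */
        folio_put(folio);
    }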
From patchwork Thu Jul 15 03:36:14 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 088/138] mm/filemap: Add filemap_get_folio
Date: Thu, 15 Jul 2021 04:36:14 +0100
Message-Id: <20210715033704.692967-89-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

filemap_get_folio() is a replacement for find_get_page().
Turn pagecache_get_page() into a wrapper around __filemap_get_folio().
Remove find_lock_head() as this use case is now covered by
filemap_get_folio().

Reduces overall kernel size by 209 bytes. __filemap_get_folio() is
316 bytes shorter than pagecache_get_page() was, but the new
pagecache_get_page() is 99 bytes.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/pagemap.h | 41 +++++++++---------
 mm/filemap.c            | 92 ++++++++++++++++++++---------------------
 mm/folio-compat.c       | 12 ++++++
 3 files changed, 76 insertions(+), 69 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 19b2e3bea14c..b24933eced18 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -302,8 +302,26 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_HEAD        0x00000080
 #define FGP_ENTRY       0x00000100
 
-struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
-        int fgp_flags, gfp_t cache_gfp_mask);
+struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
+        int fgp_flags, gfp_t gfp);
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
+        int fgp_flags, gfp_t gfp);
+
+/**
+ * filemap_get_folio - Find and get a folio.
+ * @mapping: The address_space to search.
+ * @index: The page index.
+ *
+ * Looks up the page cache entry at @mapping & @index. If a folio is
+ * present, it is returned with an increased refcount.
+ *
+ * Otherwise, %NULL is returned.
+ */
+static inline struct folio *filemap_get_folio(struct address_space *mapping,
+        pgoff_t index)
+{
+    return __filemap_get_folio(mapping, index, 0, 0);
+}
 
 /**
  * find_get_page - find and get a page reference
@@ -346,25 +364,6 @@ static inline struct page *find_lock_page(struct address_space *mapping,
     return pagecache_get_page(mapping, index, FGP_LOCK, 0);
 }
 
-/**
- * find_lock_head - Locate, pin and lock a pagecache page.
- * @mapping: The address_space to search.
- * @index: The page index.
- *
- * Looks up the page cache entry at @mapping & @index. If there is a
- * page cache page, its head page is returned locked and with an increased
- * refcount.
- *
- * Context: May sleep.
- * Return: A struct page which is !PageTail, or %NULL if there is no page
- * in the cache for this index.
- */
-static inline struct page *find_lock_head(struct address_space *mapping,
-        pgoff_t index)
-{
-    return pagecache_get_page(mapping, index, FGP_LOCK | FGP_HEAD, 0);
-}
-
 /**
  * find_or_create_page - locate or add a pagecache page
  * @mapping: the page's address_space
diff --git a/mm/filemap.c b/mm/filemap.c
index 85a457c7b7a7..061e285aae21 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1794,93 +1794,89 @@ static void *mapping_get_entry(struct address_space *mapping, pgoff_t index)
 }
 
 /**
- * pagecache_get_page - Find and get a reference to a page.
+ * __filemap_get_folio - Find and get a reference to a folio.
  * @mapping: The address_space to search.
  * @index: The page index.
- * @fgp_flags: %FGP flags modify how the page is returned.
- * @gfp_mask: Memory allocation flags to use if %FGP_CREAT is specified.
+ * @fgp_flags: %FGP flags modify how the folio is returned.
+ * @gfp: Memory allocation flags to use if %FGP_CREAT is specified.
  *
  * Looks up the page cache entry at @mapping & @index.
  *
  * @fgp_flags can be zero or more of these flags:
  *
- * * %FGP_ACCESSED - The page will be marked accessed.
- * * %FGP_LOCK - The page is returned locked.
- * * %FGP_HEAD - If the page is present and a THP, return the head page
- *   rather than the exact page specified by the index.
+ * * %FGP_ACCESSED - The folio will be marked accessed.
+ * * %FGP_LOCK - The folio is returned locked.
 * * %FGP_ENTRY - If there is a shadow / swap / DAX entry, return it
- *   instead of allocating a new page to replace it.
+ *   instead of allocating a new folio to replace it.
 * * %FGP_CREAT - If no page is present then a new page is allocated using
- *   @gfp_mask and added to the page cache and the VM's LRU list.
+ *   @gfp and added to the page cache and the VM's LRU list.
 *   The page is returned locked and with an increased refcount.
 * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
 *   page is already in cache. If the page was allocated, unlock it before
 *   returning so the caller can do the same dance.
- * * %FGP_WRITE - The page will be written
- * * %FGP_NOFS - __GFP_FS will get cleared in gfp mask
- * * %FGP_NOWAIT - Don't get blocked by page lock
+ * * %FGP_WRITE - The page will be written to by the caller.
+ * * %FGP_NOFS - __GFP_FS will get cleared in gfp.
+ * * %FGP_NOWAIT - Don't get blocked by page lock.
 *
 * If %FGP_LOCK or %FGP_CREAT are specified then the function may sleep even
 * if the %GFP flags specified for %FGP_CREAT are atomic.
 *
 * If there is a page cache page, it is returned with an increased refcount.
 *
- * Return: The found page or %NULL otherwise.
+ * Return: The found folio or %NULL otherwise.
 */
-struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
-        int fgp_flags, gfp_t gfp_mask)
+struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
+        int fgp_flags, gfp_t gfp)
 {
-    struct page *page;
+    struct folio *folio;
 
repeat:
-    page = mapping_get_entry(mapping, index);
-    if (xa_is_value(page)) {
+    folio = mapping_get_entry(mapping, index);
+    if (xa_is_value(folio)) {
         if (fgp_flags & FGP_ENTRY)
-            return page;
-        page = NULL;
+            return folio;
+        folio = NULL;
     }
-    if (!page)
+    if (!folio)
         goto no_page;
 
     if (fgp_flags & FGP_LOCK) {
         if (fgp_flags & FGP_NOWAIT) {
-            if (!trylock_page(page)) {
-                put_page(page);
+            if (!folio_trylock(folio)) {
+                folio_put(folio);
                 return NULL;
             }
         } else {
-            lock_page(page);
+            folio_lock(folio);
         }
 
         /* Has the page been truncated? */
-        if (unlikely(page->mapping != mapping)) {
-            unlock_page(page);
-            put_page(page);
+        if (unlikely(folio->mapping != mapping)) {
+            folio_unlock(folio);
+            folio_put(folio);
             goto repeat;
         }
-        VM_BUG_ON_PAGE(!thp_contains(page, index), page);
+        VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
     }
 
     if (fgp_flags & FGP_ACCESSED)
-        mark_page_accessed(page);
+        folio_mark_accessed(folio);
     else if (fgp_flags & FGP_WRITE) {
         /* Clear idle flag for buffer write */
-        if (page_is_idle(page))
-            clear_page_idle(page);
+        if (folio_test_idle(folio))
+            folio_clear_idle(folio);
     }
-    if (!(fgp_flags & FGP_HEAD))
-        page = find_subpage(page, index);
 
no_page:
-    if (!page && (fgp_flags & FGP_CREAT)) {
+    if (!folio && (fgp_flags & FGP_CREAT)) {
         int err;
         if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
-            gfp_mask |= __GFP_WRITE;
+            gfp |= __GFP_WRITE;
         if (fgp_flags & FGP_NOFS)
-            gfp_mask &= ~__GFP_FS;
+            gfp &= ~__GFP_FS;
 
-        page = __page_cache_alloc(gfp_mask);
-        if (!page)
+        folio = filemap_alloc_folio(gfp, 0);
+        if (!folio)
             return NULL;
 
         if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
@@ -1888,27 +1884,27 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 
         /* Init accessed so avoid atomic mark_page_accessed later */
         if (fgp_flags & FGP_ACCESSED)
-            __SetPageReferenced(page);
+            __folio_set_referenced(folio);
 
-        err = add_to_page_cache_lru(page, mapping, index, gfp_mask);
+        err = filemap_add_folio(mapping, folio, index, gfp);
         if (unlikely(err)) {
-            put_page(page);
-            page = NULL;
+            folio_put(folio);
+            folio = NULL;
             if (err == -EEXIST)
                 goto repeat;
         }
 
         /*
-         * add_to_page_cache_lru locks the page, and for mmap we expect
-         * an unlocked page.
+         * filemap_add_folio locks the page, and for mmap
+         * we expect an unlocked page.
         */
-        if (page && (fgp_flags & FGP_FOR_MMAP))
-            unlock_page(page);
+        if (folio && (fgp_flags & FGP_FOR_MMAP))
+            folio_unlock(folio);
     }
 
-    return page;
+    return folio;
 }
-EXPORT_SYMBOL(pagecache_get_page);
+EXPORT_SYMBOL(__filemap_get_folio);
 
 static inline struct page *find_get_entry(struct xa_state *xas, pgoff_t max,
         xa_mark_t mark)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 6b19bc4ed6b0..e833e680e944 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -115,3 +115,15 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
     return filemap_add_folio(mapping, page_folio(page), index, gfp);
 }
 EXPORT_SYMBOL(add_to_page_cache_lru);
+
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
+        int fgp_flags, gfp_t gfp)
+{
+    struct folio *folio;
+
+    folio = __filemap_get_folio(mapping, index, fgp_flags, gfp);
+    if ((fgp_flags & FGP_HEAD) || !folio || xa_is_value(folio))
+        return &folio->page;
+    return folio_file_page(folio, index);
+}
+EXPORT_SYMBOL(pagecache_get_page);
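[A sketch of a lookup that would previously have used find_get_page();
`mapping' and `index' are assumed from context. The reference is held on
the folio itself, never on a tail page:]

    struct folio *folio = filemap_get_folio(mapping, index);

    if (folio) {
        folio_mark_accessed(folio);
        folio_put(folio);
    }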
From patchwork Thu Jul 15 03:36:15 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 089/138] mm/filemap: Add FGP_STABLE
Date: Thu, 15 Jul 2021 04:36:15 +0100
Message-Id: <20210715033704.692967-90-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Allow filemap_get_folio() to wait for writeback to complete (if the
filesystem wants that behaviour). This is the folio equivalent of
grab_cache_page_write_begin(), which is moved into the folio-compat
file as a reminder to migrate all the code using it. This paves the
way for getting rid of AOP_FLAG_NOFS once grab_cache_page_write_begin()
is removed.

Kernel grows by 11 bytes. filemap_get_folio() grows by 33 bytes but
grab_cache_page_write_begin() shrinks by 22 bytes to make up for it.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: David Howells
Acked-by: Vlastimil Babka
---
 include/linux/pagemap.h |  1 +
 mm/filemap.c            | 25 +++----------------------
 mm/folio-compat.c       | 13 +++++++++++++
 3 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b24933eced18..83c1a798265f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -301,6 +301,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_FOR_MMAP        0x00000040
 #define FGP_HEAD            0x00000080
 #define FGP_ENTRY           0x00000100
+#define FGP_STABLE          0x00000200
 
 struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
         int fgp_flags, gfp_t gfp);
diff --git a/mm/filemap.c b/mm/filemap.c
index 061e285aae21..0434c5a55fec 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1817,6 +1817,7 @@ static void *mapping_get_entry(struct address_space *mapping, pgoff_t index)
 * * %FGP_WRITE - The page will be written to by the caller.
 * * %FGP_NOFS - __GFP_FS will get cleared in gfp.
 * * %FGP_NOWAIT - Don't get blocked by page lock.
+ * * %FGP_STABLE - Wait for the folio to be stable (finished writeback)
 *
 * If %FGP_LOCK or %FGP_CREAT are specified then the function may sleep even
 * if the %GFP flags specified for %FGP_CREAT are atomic.
@@ -1867,6 +1868,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
             folio_clear_idle(folio);
     }
 
+    if (fgp_flags & FGP_STABLE)
+        folio_wait_stable(folio);
no_page:
     if (!folio && (fgp_flags & FGP_CREAT)) {
         int err;
@@ -3590,28 +3593,6 @@ generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
 }
 EXPORT_SYMBOL(generic_file_direct_write);
 
-/*
- * Find or create a page at the given pagecache position. Return the locked
- * page. This function is specifically for buffered writes.
- */
-struct page *grab_cache_page_write_begin(struct address_space *mapping,
-        pgoff_t index, unsigned flags)
-{
-    struct page *page;
-    int fgp_flags = FGP_LOCK|FGP_WRITE|FGP_CREAT;
-
-    if (flags & AOP_FLAG_NOFS)
-        fgp_flags |= FGP_NOFS;
-
-    page = pagecache_get_page(mapping, index, fgp_flags,
-            mapping_gfp_mask(mapping));
-    if (page)
-        wait_for_stable_page(page);
-
-    return page;
-}
-EXPORT_SYMBOL(grab_cache_page_write_begin);
-
 ssize_t generic_perform_write(struct file *file,
                 struct iov_iter *i, loff_t pos)
 {
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index e833e680e944..5b6ae1da314e 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -116,6 +116,7 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 }
 EXPORT_SYMBOL(add_to_page_cache_lru);
 
+noinline
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
         int fgp_flags, gfp_t gfp)
 {
@@ -127,3 +128,15 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
     return folio_file_page(folio, index);
 }
 EXPORT_SYMBOL(pagecache_get_page);
+
+struct page *grab_cache_page_write_begin(struct address_space *mapping,
+        pgoff_t index, unsigned flags)
+{
+    unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
+
+    if (flags & AOP_FLAG_NOFS)
+        fgp_flags |= FGP_NOFS;
+    return pagecache_get_page(mapping, index, fgp_flags,
+            mapping_gfp_mask(mapping));
+}
+EXPORT_SYMBOL(grab_cache_page_write_begin);
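[With FGP_STABLE, the folio equivalent of grab_cache_page_write_begin()
becomes a single call; this sketch simply mirrors the flag combination
used by the compatibility wrapper above, illustration only:]

    struct folio *folio = __filemap_get_folio(mapping, index,
            FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE,
            mapping_gfp_mask(mapping));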
From patchwork Thu Jul 15 03:36:16 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378791
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 090/138] block: Add bio_add_folio
Date: Thu, 15 Jul 2021 04:36:16 +0100
Message-Id: <20210715033704.692967-91-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This is a thin wrapper around bio_add_page().  The main advantage here
is the documentation that the submitter can expect to see folios in the
completion handler, and that stupidly large folios are not supported.
It's not currently possible to allocate stupidly large folios, but if
it ever becomes possible, this function will fail gracefully instead of
doing I/O to the wrong bytes.

Signed-off-by: Matthew Wilcox (Oracle)
---
 block/bio.c         | 21 +++++++++++++++++++++
 include/linux/bio.h |  3 ++-
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..1b500611d25c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -933,6 +933,27 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_add_folio - Attempt to add part of a folio to a bio.
+ * @bio: Bio to add to.
+ * @folio: Folio to add.
+ * @len: How many bytes from the folio to add.
+ * @off: First byte in this folio to add.
+ *
+ * Always uses the head page of the folio in the bio.  If a submitter
+ * only uses bio_add_folio(), it can count on never seeing tail pages
+ * in the completion routine.  BIOs do not support folios larger than 2GiB.
+ *
+ * Return: The number of bytes from this folio added to the bio.
+ */
+size_t bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
+		size_t off)
+{
+	if (len > UINT_MAX || off > UINT_MAX)
+		return 0;
+	return bio_add_page(bio, &folio->page, len, off);
+}
+
 void bio_release_pages(struct bio *bio, bool mark_dirty)
 {
 	struct bvec_iter_all iter_all;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2203b686e1f0..ade93e2de6a1 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -462,7 +462,8 @@ extern void bio_uninit(struct bio *);
 extern void bio_reset(struct bio *);
 void bio_chain(struct bio *, struct bio *);
 
-extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
+int bio_add_page(struct bio *, struct page *, unsigned len, unsigned off);
+size_t bio_add_folio(struct bio *, struct folio *, size_t len, size_t off);
 extern int bio_add_pc_page(struct request_queue *, struct bio *,
 			   struct page *, unsigned int, unsigned int);
 int bio_add_zone_append_page(struct bio *bio, struct page *page,
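[A submission-side sketch, not from the series; the function name and the
bio_alloc()/submit_bio() plumbing around the new helper are assumptions.
A freshly allocated single-vec bio always has room for one folio, so a
short add is treated as a hard error here.]

/* Illustrative only: read one whole folio; the caller supplies end_io. */
static void example_read_folio(struct block_device *bdev,
		struct folio *folio, sector_t sector, bio_end_io_t *end_io)
{
	struct bio *bio = bio_alloc(GFP_NOFS, 1);

	bio_set_dev(bio, bdev);
	bio->bi_opf = REQ_OP_READ;
	bio->bi_iter.bi_sector = sector;
	bio->bi_end_io = end_io;
	if (bio_add_folio(bio, folio, folio_size(folio), 0) !=
	    folio_size(folio))
		bio_io_error(bio);	/* cannot happen for a one-vec bio */
	else
		submit_bio(bio);
}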
From patchwork Thu Jul 15 03:36:17 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378793
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 091/138] block: Add bio_for_each_folio_all()
Date: Thu, 15 Jul 2021 04:36:17 +0100
Message-Id: <20210715033704.692967-92-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Allow callers to iterate over each folio instead of each page.  The
bio need not have been constructed using folios originally.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/bio.h | 43 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 42 insertions(+), 1 deletion(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index ade93e2de6a1..d462bbc95c4b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -189,7 +189,7 @@ static inline void bio_advance_iter_single(const struct bio *bio,
  */
 #define bio_for_each_bvec_all(bvl, bio, i)		\
 	for (i = 0, bvl = bio_first_bvec_all(bio);	\
-	     i < (bio)->bi_vcnt; i++, bvl++)		\
+	     i < (bio)->bi_vcnt; i++, bvl++)
 
 #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len)
 
@@ -314,6 +314,47 @@ static inline struct bio_vec *bio_last_bvec_all(struct bio *bio)
 	return &bio->bi_io_vec[bio->bi_vcnt - 1];
 }
 
+struct folio_iter {
+	struct folio *folio;
+	size_t offset;
+	size_t length;
+	size_t _seg_count;
+	int _i;
+};
+
+static inline
+void bio_first_folio(struct folio_iter *fi, struct bio *bio, int i)
+{
+	struct bio_vec *bvec = bio_first_bvec_all(bio) + i;
+
+	fi->folio = page_folio(bvec->bv_page);
+	fi->offset = bvec->bv_offset +
+			PAGE_SIZE * (bvec->bv_page - &fi->folio->page);
+	fi->_seg_count = bvec->bv_len;
+	fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count);
+	fi->_i = i;
+}
+
+static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio)
+{
+	fi->_seg_count -= fi->length;
+	if (fi->_seg_count) {
+		fi->folio = folio_next(fi->folio);
+		fi->offset = 0;
+		fi->length = min(folio_size(fi->folio), fi->_seg_count);
+	} else if (fi->_i + 1 < bio->bi_vcnt) {
+		bio_first_folio(fi, bio, fi->_i + 1);
+	} else {
+		fi->folio = NULL;
+	}
+}
+
+/*
+ * Iterate over each folio in a bio.
+ */
+#define bio_for_each_folio_all(fi, bio)	\
+	for (bio_first_folio(&fi, bio, 0); fi.folio; bio_next_folio(&fi, bio))
+
 enum bip_flags {
 	BIP_BLOCK_INTEGRITY	= 1 << 0, /* block layer owns integrity data */
 	BIP_MAPPED_INTEGRITY	= 1 << 1, /* ref tag has been remapped */
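[Paired with bio_add_folio() from the previous patch, a completion handler
might look like this hypothetical sketch; it is not part of the series and
the function name is illustrative.]

/* Illustrative only: complete a read bio folio-by-folio. */
static void example_read_end_io(struct bio *bio)
{
	struct folio_iter fi;

	bio_for_each_folio_all(fi, bio) {
		if (bio->bi_status)
			folio_set_error(fi.folio);
		else
			folio_mark_uptodate(fi.folio);
		folio_unlock(fi.folio);
	}
	bio_put(bio);
}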
From patchwork Thu Jul 15 03:36:18 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378795
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 092/138] iomap: Convert to_iomap_page to take a folio
Date: Thu, 15 Jul 2021 04:36:18 +0100
Message-Id: <20210715033704.692967-93-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

The big comment about only using a head page can go away now that
it takes a folio argument.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 41da4f14c00b..cd5c2f24cb7e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -22,8 +22,8 @@
 #include "../internal.h"
 
 /*
- * Structure allocated for each page or THP when block size < page size
- * to track sub-page uptodate status and I/O completions.
+ * Structure allocated for each folio when block size < folio size
+ * to track sub-folio uptodate status and I/O completions.
  */
 struct iomap_page {
 	atomic_t		read_bytes_pending;
@@ -32,17 +32,10 @@ struct iomap_page {
 	unsigned long		uptodate[];
 };
 
-static inline struct iomap_page *to_iomap_page(struct page *page)
+static inline struct iomap_page *to_iomap_page(struct folio *folio)
 {
-	/*
-	 * per-block data is stored in the head page.  Callers should
-	 * not be dealing with tail pages (and if they are, they can
-	 * call thp_head() first.
-	 */
-	VM_BUG_ON_PGFLAGS(PageTail(page), page);
-
-	if (page_has_private(page))
-		return (struct iomap_page *)page_private(page);
+	if (folio_test_private(folio))
+		return folio_get_private(folio);
 	return NULL;
 }
 
@@ -51,7 +44,8 @@ static struct bio_set iomap_ioend_bioset;
 static struct iomap_page *
 iomap_page_create(struct inode *inode, struct page *page)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	unsigned int nr_blocks = i_blocks_per_page(inode, page);
 
 	if (iop || nr_blocks <= 1)
@@ -144,7 +138,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 static void
 iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
@@ -173,7 +168,8 @@ static void
 iomap_read_page_end_io(struct bio_vec *bvec, int error)
 {
 	struct page *page = bvec->bv_page;
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (unlikely(error)) {
 		ClearPageUptodate(page);
@@ -433,7 +429,8 @@ int
 iomap_is_partially_uptodate(struct page *page, unsigned long from,
 		unsigned long count)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned len, first, last;
 	unsigned i;
@@ -1011,7 +1008,8 @@ static void
 iomap_finish_page_writeback(struct inode *inode, struct page *page,
 		int error, unsigned int len)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (error) {
 		SetPageError(page);
@@ -1304,7 +1302,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct page *page, u64 end_offset)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	u64 file_offset;	/* file offset of page */
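[The conversion leans on the folio private API; this round-trip sketch of
the invariant behind folio_test_private() is illustrative, not part of
the patch.]

/* Illustrative only: attach/get/detach keep the private flag in sync. */
static void example_private_roundtrip(struct folio *folio,
		struct iomap_page *iop)
{
	folio_attach_private(folio, iop);
	WARN_ON(!folio_test_private(folio));
	WARN_ON(folio_get_private(folio) != iop);
	WARN_ON(folio_detach_private(folio) != iop);
	WARN_ON(folio_test_private(folio));
}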
From patchwork Thu Jul 15 03:36:19 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378797
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 093/138] iomap: Convert iomap_page_create to take a folio
Date: Thu, 15 Jul 2021 04:36:19 +0100
Message-Id: <20210715033704.692967-94-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This function already assumed it was being passed a head page, so
just formalise that.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index cd5c2f24cb7e..c15a0ac52a32 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -42,11 +42,10 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
 static struct bio_set iomap_ioend_bioset;
 
 static struct iomap_page *
-iomap_page_create(struct inode *inode, struct page *page)
+iomap_page_create(struct inode *inode, struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
-	unsigned int nr_blocks = i_blocks_per_page(inode, page);
+	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 
 	if (iop || nr_blocks <= 1)
 		return iop;
@@ -54,9 +53,9 @@ iomap_page_create(struct inode *inode, struct page *page)
 	iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
 			GFP_NOFS | __GFP_NOFAIL);
 	spin_lock_init(&iop->uptodate_lock);
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		bitmap_fill(iop->uptodate, nr_blocks);
-	attach_page_private(page, iop);
+	folio_attach_private(folio, iop);
 	return iop;
 }
 
@@ -235,7 +234,8 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 {
 	struct iomap_readpage_ctx *ctx = data;
 	struct page *page = ctx->cur_page;
-	struct iomap_page *iop = iomap_page_create(inode, page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = iomap_page_create(inode, folio);
 	bool same_page = false, is_contig = false;
 	loff_t orig_pos = pos;
 	unsigned poff, plen;
@@ -547,7 +547,8 @@ static int
 __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 		struct page *page, struct iomap *srcmap)
 {
-	struct iomap_page *iop = iomap_page_create(inode, page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = iomap_page_create(inode, folio);
 	loff_t block_size = i_blocksize(inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
@@ -955,6 +956,7 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
 		void *data, struct iomap *iomap, struct iomap *srcmap)
 {
 	struct page *page = data;
+	struct folio *folio = page_folio(page);
 	int ret;
 
 	if (iomap->flags & IOMAP_F_BUFFER_HEAD) {
@@ -964,7 +966,7 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
 		block_commit_write(page, 0, length);
 	} else {
 		WARN_ON_ONCE(!PageUptodate(page));
-		iomap_page_create(inode, page);
+		iomap_page_create(inode, folio);
 		set_page_dirty(page);
 	}
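[To put numbers on the allocation above (assumed sizes, not from the
patch): i_blocks_per_folio() is folio_size() >> i_blkbits, so a 16KiB
folio over 1KiB blocks needs 16 uptodate bits, while a folio no bigger
than one block gives nr_blocks == 1 and iomap_page_create() returns
without allocating anything.]

/* Illustrative only: the sizing arithmetic behind the kzalloc() above. */
static unsigned int example_nr_blocks(struct inode *inode,
		struct folio *folio)
{
	/* e.g. 16KiB folio, i_blkbits == 10: 16384 >> 10 == 16 bits */
	return folio_size(folio) >> inode->i_blkbits;
}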
From patchwork Thu Jul 15 03:36:20 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378799
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 094/138] iomap: Convert iomap_page_release to take a folio
Date: Thu, 15 Jul 2021 04:36:20 +0100
Message-Id: <20210715033704.692967-95-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

iomap_page_release() was also assuming that it was being passed a
head page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index c15a0ac52a32..251ec45426aa 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -59,18 +59,18 @@ iomap_page_create(struct inode *inode, struct folio *folio)
 	return iop;
 }
 
-static void
-iomap_page_release(struct page *page)
+static void iomap_page_release(struct folio *folio)
 {
-	struct iomap_page *iop = detach_page_private(page);
-	unsigned int nr_blocks = i_blocks_per_page(page->mapping->host, page);
+	struct iomap_page *iop = folio_detach_private(folio);
+	unsigned int nr_blocks = i_blocks_per_folio(folio->mapping->host,
+							folio);
 
 	if (!iop)
 		return;
 	WARN_ON_ONCE(atomic_read(&iop->read_bytes_pending));
 	WARN_ON_ONCE(atomic_read(&iop->write_bytes_pending));
 	WARN_ON_ONCE(bitmap_full(iop->uptodate, nr_blocks) !=
-			PageUptodate(page));
+			folio_test_uptodate(folio));
 	kfree(iop);
 }
 
@@ -456,6 +456,8 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
 int
 iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
+	struct folio *folio = page_folio(page);
+
 	trace_iomap_releasepage(page->mapping->host, page_offset(page),
 			PAGE_SIZE);
 
@@ -466,7 +468,7 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 	 */
 	if (PageDirty(page) || PageWriteback(page))
 		return 0;
-	iomap_page_release(page);
+	iomap_page_release(folio);
 	return 1;
 }
 EXPORT_SYMBOL_GPL(iomap_releasepage);
@@ -474,6 +476,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage);
 void
 iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 {
+	struct folio *folio = page_folio(page);
+
 	trace_iomap_invalidatepage(page->mapping->host, offset, len);
 
 	/*
@@ -483,7 +487,7 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 	if (offset == 0 && len == PAGE_SIZE) {
 		WARN_ON_ONCE(PageWriteback(page));
 		cancel_dirty_page(page);
-		iomap_page_release(page);
+		iomap_page_release(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_invalidatepage);
From patchwork Thu Jul 15 03:36:21 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378801
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 095/138] iomap: Convert iomap_releasepage to use a folio
Date: Thu, 15 Jul 2021 04:36:21 +0100
Message-Id: <20210715033704.692967-96-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This is an address_space operation, so its argument must remain as a
struct page, but we can use a folio internally.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 251ec45426aa..abb065b50e38 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -458,15 +458,15 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
 	struct folio *folio = page_folio(page);
 
-	trace_iomap_releasepage(page->mapping->host, page_offset(page),
-			PAGE_SIZE);
+	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
+			folio_size(folio));
 
 	/*
 	 * mm accommodates an old ext3 case where clean pages might not have had
 	 * the dirty bit cleared. Thus, it can send actual dirty pages to
 	 * ->releasepage() via shrink_active_list(), skip those here.
 	 */
-	if (PageDirty(page) || PageWriteback(page))
+	if (folio_test_dirty(folio) || folio_test_writeback(folio))
 		return 0;
 	iomap_page_release(folio);
 	return 1;
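[The changed trace arguments show the general conversion rule:
page_offset()/PAGE_SIZE describe a single page, while
folio_pos()/folio_size() cover the whole, possibly multi-page, folio.
A worked example with assumed values, not from the patch:]

/*
 * Illustrative only: an order-2 folio whose head page sits at
 * index 16, with 4KiB pages assumed.
 */
static void example_folio_extent(struct folio *folio)
{
	loff_t pos = folio_pos(folio);		/* 16 * 4096 == 65536 */
	size_t size = folio_size(folio);	/*  4 * 4096 == 16384 */

	pr_debug("folio spans [%lld, %lld)\n", pos, (long long)(pos + size));
}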
From patchwork Thu Jul 15 03:36:22 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378825
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 096/138] iomap: Convert iomap_invalidatepage to use a folio
Date: Thu, 15 Jul 2021 04:36:22 +0100
Message-Id: <20210715033704.692967-97-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This is an address_space operation, so its argument must remain as a
struct page, but we can use a folio internally.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index abb065b50e38..6b41019a51a3 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -478,15 +478,15 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 {
 	struct folio *folio = page_folio(page);
 
-	trace_iomap_invalidatepage(page->mapping->host, offset, len);
+	trace_iomap_invalidatepage(folio->mapping->host, offset, len);
 
 	/*
 	 * If we are invalidating the entire page, clear the dirty state from it
 	 * and release it to avoid unnecessary buildup of the LRU.
 	 */
-	if (offset == 0 && len == PAGE_SIZE) {
-		WARN_ON_ONCE(PageWriteback(page));
-		cancel_dirty_page(page);
+	if (offset == 0 && len == folio_size(folio)) {
+		WARN_ON_ONCE(folio_test_writeback(folio));
+		folio_cancel_dirty(folio);
 		iomap_page_release(folio);
 	}
 }
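[With multi-page folios the full-invalidation test now keys off
folio_size() rather than PAGE_SIZE; schematically, for an assumed 16KiB
folio (this sketch is not from the patch):]

/* Illustrative only: does this invalidation free the iomap_page? */
static bool example_releases_iop(struct folio *folio,
		unsigned int offset, unsigned int len)
{
	/* truncate to 0:        offset == 0,     len == 16384 -> true  */
	/* punch the last 4KiB:  offset == 12288, len == 4096  -> false */
	return offset == 0 && len == folio_size(folio);
}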
From patchwork Thu Jul 15 03:36:23 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378827
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 097/138] iomap: Pass the iomap_page into iomap_set_range_uptodate
Date: Thu, 15 Jul 2021 04:36:23 +0100
Message-Id: <20210715033704.692967-98-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

All but one caller already has the iomap_page, and we can avoid
getting it again.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 6b41019a51a3..fbe4ebc074ce 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -134,11 +134,9 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	*lenp = plen;
 }
 
-static void
-iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static void iomap_iop_set_range_uptodate(struct page *page,
+		struct iomap_page *iop, unsigned off, unsigned len)
 {
-	struct folio *folio = page_folio(page);
-	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
@@ -151,14 +149,14 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
 }
 
-static void
-iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static void iomap_set_range_uptodate(struct page *page,
+		struct iomap_page *iop, unsigned off, unsigned len)
 {
 	if (PageError(page))
 		return;
 
-	if (page_has_private(page))
-		iomap_iop_set_range_uptodate(page, off, len);
+	if (iop)
+		iomap_iop_set_range_uptodate(page, iop, off, len);
 	else
 		SetPageUptodate(page);
 }
@@ -174,7 +172,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error)
 		ClearPageUptodate(page);
 		SetPageError(page);
 	} else {
-		iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
+		iomap_set_range_uptodate(page, iop, bvec->bv_offset,
+				bvec->bv_len);
 	}
 
 	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending))
@@ -254,7 +253,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 	if (iomap_block_needs_zeroing(inode, iomap, pos)) {
 		zero_user(page, poff, plen);
-		iomap_set_range_uptodate(page, poff, plen);
+		iomap_set_range_uptodate(page, iop, poff, plen);
 		goto done;
 	}
 
@@ -583,7 +582,7 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(page, poff, plen);
+		iomap_set_range_uptodate(page, iop, poff, plen);
 	} while ((block_start += plen) < block_end);
 
 	return 0;
@@ -645,6 +644,8 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		size_t copied, struct page *page)
 {
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	flush_dcache_page(page);
 
 	/*
@@ -660,7 +661,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, offset_in_page(pos), len);
+	iomap_set_range_uptodate(page, iop, offset_in_page(pos), len);
 	__set_page_dirty_nobuffers(page);
 	return copied;
 }
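[The shape of the change, schematically (illustrative, not from the
patch): a caller derives the iomap_page once and threads it through,
instead of each helper re-deriving it from the page.]

/* Illustrative only: look the iop up once, then pass it down. */
static void example_mark_uptodate(struct page *page, unsigned off,
		unsigned len)
{
	struct iomap_page *iop = to_iomap_page(page_folio(page));

	iomap_set_range_uptodate(page, iop, off, len);
}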
mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6358C47E48 for ; Thu, 15 Jul 2021 04:56:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 65AE161360 for ; Thu, 15 Jul 2021 04:56:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 65AE161360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B65218D0068; Thu, 15 Jul 2021 00:56:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AEE998D0065; Thu, 15 Jul 2021 00:56:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 98F0A8D0068; Thu, 15 Jul 2021 00:56:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0106.hostedemail.com [216.40.44.106]) by kanga.kvack.org (Postfix) with ESMTP id 775D88D0065 for ; Thu, 15 Jul 2021 00:56:30 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 55B48824C43A for ; Thu, 15 Jul 2021 04:56:29 +0000 (UTC) X-FDA: 78363611298.26.3C5AF85 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf29.hostedemail.com (Postfix) with ESMTP id D6D239000249 for ; Thu, 15 Jul 2021 04:56:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=xPc9OujU3mUkmvx4wD1weWi/JYmZYWKpBhkDI8xiDm8=; b=VUELMpx5+4JQ2Tb7CmubOtdTmG FAjp+4ZSSQYbKwXdysOfkHbAsytSuxqiqFWC//oPdlQ2nmacx+m7DlgySXqxDXetFJDBcwZlSHNoC JxAYYs5zbPmgGcvUvOYihY9sV/lA9zI8wJG8UDK0B7O2aRsG/7ExYAEwFQzUHrx6RihNtobG7e7Lt jtQ982VoetdBhOvRI73gd+QhE3xAVilB2dO3u75XxfpzJu7oaLVVu3wOKjZdgnx3rAWZyvxybIM7I b08cHchid5S9lqrMSSl2wsj6Ba0HuWF01JE38ZMfcsNsV5+y/F2yFTMWcJ60Ts3UpGvO2cNQC6r9h 8r3s2W3g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tOD-002z61-5X; Thu, 15 Jul 2021 04:54:43 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 098/138] iomap: Use folio offsets instead of page offsets Date: Thu, 15 Jul 2021 04:36:24 +0100 Message-Id: <20210715033704.692967-99-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=VUELMpx5; spf=none (imf29.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: qbu4wwfamrbppbw68iy71ntei7xtwazs X-Rspamd-Queue-Id: D6D239000249 X-HE-Tag: 1626324988-953565 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass a folio around instead of the page, and make sure the offset is relative to the start of the folio instead of the start of a page. 
Also use size_t for offset & length to make it clear that these are byte counts, and to support >2GB folios in the future. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 85 ++++++++++++++++++++++-------------------- 1 file changed, 44 insertions(+), 41 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index fbe4ebc074ce..707a96e36651 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -75,18 +75,18 @@ static void iomap_page_release(struct folio *folio) } /* - * Calculate the range inside the page that we actually need to read. + * Calculate the range inside the folio that we actually need to read. */ -static void -iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, - loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp) +static void iomap_adjust_read_range(struct inode *inode, struct folio *folio, + loff_t *pos, loff_t length, size_t *offp, size_t *lenp) { + struct iomap_page *iop = to_iomap_page(folio); loff_t orig_pos = *pos; loff_t isize = i_size_read(inode); unsigned block_bits = inode->i_blkbits; unsigned block_size = (1 << block_bits); - unsigned poff = offset_in_page(*pos); - unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length); + size_t poff = offset_in_folio(folio, *pos); + size_t plen = min_t(loff_t, folio_size(folio) - poff, length); unsigned first = poff >> block_bits; unsigned last = (poff + plen - 1) >> block_bits; @@ -124,7 +124,7 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, * page cache for blocks that are entirely outside of i_size. */ if (orig_pos <= isize && orig_pos + length > isize) { - unsigned end = offset_in_page(isize - 1) >> block_bits; + unsigned end = offset_in_folio(folio, isize - 1) >> block_bits; if (first <= end && last > end) plen -= (last - end) * block_size; @@ -134,31 +134,31 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, *lenp = plen; } -static void iomap_iop_set_range_uptodate(struct page *page, - struct iomap_page *iop, unsigned off, unsigned len) +static void iomap_iop_set_range_uptodate(struct folio *folio, + struct iomap_page *iop, size_t off, size_t len) { - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; unsigned first = off >> inode->i_blkbits; unsigned last = (off + len - 1) >> inode->i_blkbits; unsigned long flags; spin_lock_irqsave(&iop->uptodate_lock, flags); bitmap_set(iop->uptodate, first, last - first + 1); - if (bitmap_full(iop->uptodate, i_blocks_per_page(inode, page))) - SetPageUptodate(page); + if (bitmap_full(iop->uptodate, i_blocks_per_folio(inode, folio))) + folio_mark_uptodate(folio); spin_unlock_irqrestore(&iop->uptodate_lock, flags); } -static void iomap_set_range_uptodate(struct page *page, - struct iomap_page *iop, unsigned off, unsigned len) +static void iomap_set_range_uptodate(struct folio *folio, + struct iomap_page *iop, size_t off, size_t len) { - if (PageError(page)) + if (folio_test_error(folio)) return; if (iop) - iomap_iop_set_range_uptodate(page, iop, off, len); + iomap_iop_set_range_uptodate(folio, iop, off, len); else - SetPageUptodate(page); + folio_mark_uptodate(folio); } static void @@ -169,15 +169,17 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error) struct iomap_page *iop = to_iomap_page(folio); if (unlikely(error)) { - ClearPageUptodate(page); - SetPageError(page); + folio_clear_uptodate(folio); + folio_set_error(folio); } else { - iomap_set_range_uptodate(page, iop, bvec->bv_offset, - 
bvec->bv_len); + size_t off = (page - &folio->page) * PAGE_SIZE + + bvec->bv_offset; + + iomap_set_range_uptodate(folio, iop, off, bvec->bv_len); } if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending)) - unlock_page(page); + folio_unlock(folio); } static void @@ -237,7 +239,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, struct iomap_page *iop = iomap_page_create(inode, folio); bool same_page = false, is_contig = false; loff_t orig_pos = pos; - unsigned poff, plen; + size_t poff, plen; sector_t sector; if (iomap->type == IOMAP_INLINE) { @@ -246,14 +248,14 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, return PAGE_SIZE; } - /* zero post-eof blocks as the page may be mapped */ - iomap_adjust_read_range(inode, iop, &pos, length, &poff, &plen); + /* zero post-eof blocks as the folio may be mapped */ + iomap_adjust_read_range(inode, folio, &pos, length, &poff, &plen); if (plen == 0) goto done; if (iomap_block_needs_zeroing(inode, iomap, pos)) { - zero_user(page, poff, plen); - iomap_set_range_uptodate(page, iop, poff, plen); + zero_user(&folio->page, poff, plen); + iomap_set_range_uptodate(folio, iop, poff, plen); goto done; } @@ -264,7 +266,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, /* Try to merge into a previous segment if we can */ sector = iomap_sector(iomap, pos); if (ctx->bio && bio_end_sector(ctx->bio) == sector) { - if (__bio_try_merge_page(ctx->bio, page, plen, poff, + if (__bio_try_merge_page(ctx->bio, &folio->page, plen, poff, &same_page)) goto done; is_contig = true; @@ -296,7 +298,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, ctx->bio->bi_end_io = iomap_read_end_io; } - bio_add_page(ctx->bio, page, plen, poff); + bio_add_folio(ctx->bio, folio, plen, poff); done: /* * Move the caller beyond our range so that it keeps making progress. 
@@ -531,9 +533,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
 	truncate_pagecache_range(inode, max(pos, i_size), pos + len);
 }
 
-static int
-iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
-		unsigned plen, struct iomap *iomap)
+static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
+		size_t poff, size_t plen, struct iomap *iomap)
 {
 	struct bio_vec bvec;
 	struct bio bio;
@@ -542,7 +543,7 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
 	bio.bi_opf = REQ_OP_READ;
 	bio.bi_iter.bi_sector = iomap_sector(iomap, block_start);
 	bio_set_dev(&bio, iomap->bdev);
-	__bio_add_page(&bio, page, plen, poff);
+	bio_add_folio(&bio, folio, plen, poff);
 	return submit_bio_wait(&bio);
 }
 
@@ -555,14 +556,15 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 	loff_t block_size = i_blocksize(inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
-	unsigned from = offset_in_page(pos), to = from + len, poff, plen;
+	size_t from = offset_in_folio(folio, pos), to = from + len;
+	size_t poff, plen;
 
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		return 0;
-	ClearPageError(page);
+	folio_clear_error(folio);
 
 	do {
-		iomap_adjust_read_range(inode, iop, &block_start,
+		iomap_adjust_read_range(inode, folio, &block_start,
 				block_end - block_start, &poff, &plen);
 		if (plen == 0)
 			break;
@@ -575,14 +577,15 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 		if (iomap_block_needs_zeroing(inode, srcmap, block_start)) {
 			if (WARN_ON_ONCE(flags & IOMAP_WRITE_F_UNSHARE))
 				return -EIO;
-			zero_user_segments(page, poff, from, to, poff + plen);
+			zero_user_segments(&folio->page, poff, from, to,
+					poff + plen);
 		} else {
-			int status = iomap_read_page_sync(block_start, page,
+			int status = iomap_read_folio_sync(block_start, folio,
 					poff, plen, srcmap);
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(page, iop, poff, plen);
+		iomap_set_range_uptodate(folio, iop, poff, plen);
 	} while ((block_start += plen) < block_end);
 
 	return 0;
@@ -661,7 +664,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, iop, offset_in_page(pos), len);
+	iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len);
 	__set_page_dirty_nobuffers(page);
 	return copied;
 }
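To make the type change above concrete, here is a small, hedged
userspace model of the block-range arithmetic the converted
iomap_adjust_read_range() performs. blocks_for_range() and the sizes
are invented for illustration; only the first/last shift arithmetic
mirrors the kernel code. With an unsigned int, poff + plen could
overflow for a folio larger than 4GB; size_t cannot.

#include <stddef.h>
#include <stdio.h>

/* Which block-bitmap bits cover bytes [poff, poff + plen) of a folio
 * of fsize bytes, given blocks of (1 << block_bits) bytes? */
static void blocks_for_range(size_t fsize, unsigned block_bits,
			     size_t poff, size_t plen)
{
	size_t first = poff >> block_bits;
	size_t last = (poff + plen - 1) >> block_bits;

	printf("folio %zu bytes: bytes [%zu, %zu) -> blocks %zu..%zu\n",
	       fsize, poff, poff + plen, first, last);
}

int main(void)
{
	/* A 64KiB folio with 4KiB blocks, reading bytes 5000..19999:
	 * prints blocks 1..4. */
	blocks_for_range(65536, 12, 5000, 15000);
	return 0;
}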
From patchwork Thu Jul 15 03:36:25 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378831
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 099/138] iomap: Convert bio completions to use folios
Date: Thu, 15 Jul 2021 04:36:25 +0100
Message-Id: <20210715033704.692967-100-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Use bio_for_each_folio() to iterate over each folio in the bio
instead of iterating over each page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 46 +++++++++++++++++-------------------------
 1 file changed, 18 insertions(+), 28 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 707a96e36651..4732298f74e1 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -161,36 +161,29 @@ static void iomap_set_range_uptodate(struct folio *folio,
 		folio_mark_uptodate(folio);
 }
 
-static void
-iomap_read_page_end_io(struct bio_vec *bvec, int error)
+static void iomap_finish_folio_read(struct folio *folio, size_t offset,
+		size_t len, int error)
 {
-	struct page *page = bvec->bv_page;
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (unlikely(error)) {
 		folio_clear_uptodate(folio);
 		folio_set_error(folio);
 	} else {
-		size_t off = (page - &folio->page) * PAGE_SIZE +
-				bvec->bv_offset;
-
-		iomap_set_range_uptodate(folio, iop, off, bvec->bv_len);
+		iomap_set_range_uptodate(folio, iop, offset, len);
 	}
 
-	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending))
+	if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending))
 		folio_unlock(folio);
 }
 
-static void
-iomap_read_end_io(struct bio *bio)
+static void iomap_read_end_io(struct bio *bio)
 {
 	int error = blk_status_to_errno(bio->bi_status);
-	struct bio_vec *bvec;
-	struct bvec_iter_all iter_all;
+	struct folio_iter fi;
 
-	bio_for_each_segment_all(bvec, bio, iter_all)
-		iomap_read_page_end_io(bvec, error);
+	bio_for_each_folio_all(fi, bio)
+		iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error);
 	bio_put(bio);
 }
 
@@ -1014,23 +1007,21 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
 }
 EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
 
-static void
-iomap_finish_page_writeback(struct inode *inode, struct page *page,
-		int error, unsigned int len)
+static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
+		size_t len, int error)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
 
 	if (error) {
-		SetPageError(page);
+		folio_set_error(folio);
 		mapping_set_error(inode->i_mapping, -EIO);
 	}
 
-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+	WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0);
 
 	if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending))
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 }
 
 /*
@@ -1049,8 +1040,7 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 	bool quiet = bio_flagged(bio, BIO_QUIET);
 
 	for (bio = &ioend->io_inline_bio; bio; bio = next) {
-		struct bio_vec *bv;
-		struct bvec_iter_all iter_all;
+		struct folio_iter fi;
 
 		/*
		 * For the last bio, bi_private points to the ioend, so we
@@ -1061,10 +1051,10 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 		else
 			next = bio->bi_private;
 
-		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bv, bio, iter_all)
-			iomap_finish_page_writeback(inode, bv->bv_page, error,
-					bv->bv_len);
+		/* walk all folios in bio, ending page IO on them */
+		bio_for_each_folio_all(fi, bio)
+			iomap_finish_folio_write(inode, fi.folio, fi.length,
+					error);
 		bio_put(bio);
 	}
 	/* The ioend has been freed by bio_put() */
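The payoff of this conversion is easiest to see in isolation. Below is
a hedged sketch, not code from the patch: my_finish_read() is an
invented stand-in for iomap_finish_folio_read(), but the iterator and
the fi.folio/fi.offset/fi.length fields are exactly the ones the diff
above adopts.

/* Invented helper; it would mirror iomap_finish_folio_read(). */
static void my_finish_read(struct folio *folio, size_t offset,
		size_t len, int error);

/* Hedged sketch: one completion callback per folio, not per page. */
static void my_read_end_io(struct bio *bio)
{
	int error = blk_status_to_errno(bio->bi_status);
	struct folio_iter fi;

	/*
	 * Each iteration covers one folio's part of the I/O.  A bio
	 * carrying a single 16-page folio now triggers one call here
	 * instead of sixteen per-page calls.
	 */
	bio_for_each_folio_all(fi, bio)
		my_finish_read(fi.folio, fi.offset, fi.length, error);

	bio_put(bio);
}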
From patchwork Thu Jul 15 03:36:26 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378845
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 100/138] iomap: Convert readahead and readpage to use a folio
Date: Thu, 15 Jul 2021 04:36:26 +0100
Message-Id: <20210715033704.692967-101-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Handle folios of arbitrary size instead of working in PAGE_SIZE units.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 61 +++++++++++++++++++++---------------------
 1 file changed, 30 insertions(+), 31 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4732298f74e1..7c702d6c2f64 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -188,8 +188,8 @@ static void iomap_read_end_io(struct bio *bio)
 }
 
 struct iomap_readpage_ctx {
-	struct page		*cur_page;
-	bool			cur_page_in_bio;
+	struct folio		*cur_folio;
+	bool			cur_folio_in_bio;
 	struct bio		*bio;
 	struct readahead_control *rac;
 };
@@ -227,8 +227,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap, struct iomap *srcmap)
 {
 	struct iomap_readpage_ctx *ctx = data;
-	struct page *page = ctx->cur_page;
-	struct folio *folio = page_folio(page);
+	struct folio *folio = ctx->cur_folio;
 	struct iomap_page *iop = iomap_page_create(inode, folio);
 	bool same_page = false, is_contig = false;
 	loff_t orig_pos = pos;
@@ -237,7 +236,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 	if (iomap->type == IOMAP_INLINE) {
 		WARN_ON_ONCE(pos);
-		iomap_read_inline_data(inode, page, iomap);
+		iomap_read_inline_data(inode, &folio->page, iomap);
 		return PAGE_SIZE;
 	}
 
@@ -252,7 +251,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		goto done;
 	}
 
-	ctx->cur_page_in_bio = true;
+	ctx->cur_folio_in_bio = true;
 	if (iop)
 		atomic_add(plen, &iop->read_bytes_pending);
 
@@ -266,7 +265,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	}
 
 	if (!is_contig || bio_full(ctx->bio, plen)) {
-		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
+		gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL);
 		gfp_t orig_gfp = gfp;
 		unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
 
@@ -305,30 +304,31 @@
 int
 iomap_readpage(struct page *page, const struct iomap_ops *ops)
 {
-	struct iomap_readpage_ctx ctx = { .cur_page = page };
-	struct inode *inode = page->mapping->host;
-	unsigned poff;
+	struct folio *folio = page_folio(page);
+	struct iomap_readpage_ctx ctx = { .cur_folio = folio };
+	struct inode *inode = folio->mapping->host;
+	size_t poff;
 	loff_t ret;
+	size_t len = folio_size(folio);
 
-	trace_iomap_readpage(page->mapping->host, 1);
+	trace_iomap_readpage(inode, 1);
 
-	for (poff = 0; poff < PAGE_SIZE; poff += ret) {
-		ret = iomap_apply(inode, page_offset(page) + poff,
-				PAGE_SIZE - poff, 0, ops, &ctx,
-				iomap_readpage_actor);
+	for (poff = 0; poff < len; poff += ret) {
+		ret = iomap_apply(inode, folio_pos(folio) + poff, len - poff,
+				0, ops, &ctx, iomap_readpage_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
-			SetPageError(page);
+			folio_set_error(folio);
 			break;
 		}
 	}
 
 	if (ctx.bio) {
 		submit_bio(ctx.bio);
-		WARN_ON_ONCE(!ctx.cur_page_in_bio);
+		WARN_ON_ONCE(!ctx.cur_folio_in_bio);
 	} else {
-		WARN_ON_ONCE(ctx.cur_page_in_bio);
-		unlock_page(page);
+		WARN_ON_ONCE(ctx.cur_folio_in_bio);
+		folio_unlock(folio);
 	}
 
 	/*
@@ -348,15 +348,15 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 	loff_t done, ret;
 
 	for (done = 0; done < length; done += ret) {
-		if (ctx->cur_page && offset_in_page(pos + done) == 0) {
-			if (!ctx->cur_page_in_bio)
-				unlock_page(ctx->cur_page);
-			put_page(ctx->cur_page);
-			ctx->cur_page = NULL;
+		if (ctx->cur_folio &&
+		    offset_in_folio(ctx->cur_folio, pos + done) == 0) {
+			if (!ctx->cur_folio_in_bio)
+				folio_unlock(ctx->cur_folio);
+			ctx->cur_folio = NULL;
 		}
-		if (!ctx->cur_page) {
-			ctx->cur_page = readahead_page(ctx->rac);
-			ctx->cur_page_in_bio = false;
+		if (!ctx->cur_folio) {
+			ctx->cur_folio = readahead_folio(ctx->rac);
+			ctx->cur_folio_in_bio = false;
 		}
 		ret = iomap_readpage_actor(inode, pos + done, length - done,
 				ctx, iomap, srcmap);
@@ -404,10 +404,9 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)
 
 	if (ctx.bio)
 		submit_bio(ctx.bio);
-	if (ctx.cur_page) {
-		if (!ctx.cur_page_in_bio)
-			unlock_page(ctx.cur_page);
-		put_page(ctx.cur_page);
+	if (ctx.cur_folio) {
+		if (!ctx.cur_folio_in_bio)
+			folio_unlock(ctx.cur_folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_readahead);
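The shape of the converted readpage loop is worth pausing on: the
upper bound is now folio_size(), not PAGE_SIZE, and each iteration
advances by however much one mapping covered. Here is a hedged
userspace toy of that loop; apply() is an invented stand-in for
iomap_apply() and the extent boundary at 7000 bytes is made up.

#include <stdio.h>

/* Fake mapper: returns how many bytes the "extent" at pos covers. */
static long apply(long pos, long length)
{
	long extent = 7000 - (pos % 7000);

	return extent < length ? extent : length;
}

int main(void)
{
	long folio_size = 16384;	/* a four-page folio, for example */
	long poff, ret;

	/* Same loop shape as the converted iomap_readpage(). */
	for (poff = 0; poff < folio_size; poff += ret) {
		ret = apply(poff, folio_size - poff);
		if (ret <= 0)
			break;		/* error: stop, mark the folio */
		printf("mapped [%ld, %ld)\n", poff, poff + ret);
	}
	return 0;
}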
From patchwork Thu Jul 15 03:36:27 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378847
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 101/138] iomap: Convert iomap_page_mkwrite to use a folio
Date: Thu, 15 Jul 2021 04:36:27 +0100
Message-Id: <20210715033704.692967-102-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

If we write to any page in a folio, we have to mark the entire folio
as dirty, and potentially COW the entire folio, because it'll all get
written back as one unit.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 7c702d6c2f64..a3fe0d36c739 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -951,23 +951,23 @@ iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 }
 EXPORT_SYMBOL_GPL(iomap_truncate_page);
 
-static loff_t
-iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
-		void *data, struct iomap *iomap, struct iomap *srcmap)
+static loff_t iomap_folio_mkwrite_actor(struct inode *inode, loff_t pos,
+		loff_t length, void *data, struct iomap *iomap,
+		struct iomap *srcmap)
 {
-	struct page *page = data;
-	struct folio *folio = page_folio(page);
+	struct folio *folio = data;
 	int ret;
 
 	if (iomap->flags & IOMAP_F_BUFFER_HEAD) {
-		ret = __block_write_begin_int(page, pos, length, NULL, iomap);
+		ret = __block_write_begin_int(&folio->page, pos, length, NULL,
+						iomap);
 		if (ret)
 			return ret;
-		block_commit_write(page, 0, length);
+		block_commit_write(&folio->page, 0, length);
 	} else {
-		WARN_ON_ONCE(!PageUptodate(page));
+		WARN_ON_ONCE(!folio_test_uptodate(folio));
 		iomap_page_create(inode, folio);
-		set_page_dirty(page);
+		folio_mark_dirty(folio);
 	}
 
 	return length;
@@ -975,33 +975,33 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
 
 vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
 {
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
 	struct inode *inode = file_inode(vmf->vma->vm_file);
-	unsigned long length;
-	loff_t offset;
+	size_t length;
+	loff_t pos;
 	ssize_t ret;
 
-	lock_page(page);
-	ret = page_mkwrite_check_truncate(page, inode);
+	folio_lock(folio);
+	ret = folio_mkwrite_check_truncate(folio, inode);
 	if (ret < 0)
 		goto out_unlock;
 	length = ret;
 
-	offset = page_offset(page);
+	pos = folio_pos(folio);
 	while (length > 0) {
-		ret = iomap_apply(inode, offset, length,
-				IOMAP_WRITE | IOMAP_FAULT, ops, page,
-				iomap_page_mkwrite_actor);
+		ret = iomap_apply(inode, pos, length,
+				IOMAP_WRITE | IOMAP_FAULT, ops, folio,
+				iomap_folio_mkwrite_actor);
 		if (unlikely(ret <= 0))
 			goto out_unlock;
-		offset += ret;
+		pos += ret;
 		length -= ret;
 	}
 
-	wait_for_stable_page(page);
+	folio_wait_stable(folio);
 	return VM_FAULT_LOCKED;
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 	return block_page_mkwrite_return(ret);
 }
 EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
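To restate the commit message in code: the unit of work in the fault
path is now the folio containing the faulting page. The following is a
hedged sketch only; my_map_and_dirty() is invented, and the real code
additionally bounds 'length' with folio_mkwrite_check_truncate() and
handles errors, which this sketch elides.

static void my_map_and_dirty(struct folio *folio, loff_t pos,
		size_t length);	/* invented: map blocks, COW, mark dirty */

static vm_fault_t my_page_mkwrite(struct vm_fault *vmf)
{
	/* The whole folio, even though only one page faulted. */
	struct folio *folio = page_folio(vmf->page);
	loff_t pos = folio_pos(folio);
	size_t length = folio_size(folio);

	folio_lock(folio);
	my_map_and_dirty(folio, pos, length);
	/* Writeback sees the folio as one unit, so wait on all of it. */
	folio_wait_stable(folio);
	return VM_FAULT_LOCKED;
}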
From patchwork Thu Jul 15 03:36:28 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378849
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 102/138] iomap: Convert iomap_write_begin and iomap_write_end to folios
Date: Thu, 15 Jul 2021 04:36:28 +0100
Message-Id: <20210715033704.692967-103-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

These functions still only work in PAGE_SIZE chunks, but there are
fewer conversions from head to tail pages as a result of this patch.

Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
---
 fs/iomap/buffered-io.c | 68 ++++++++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 32 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index a3fe0d36c739..5e0aa23d4693 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -541,9 +541,8 @@ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
 
 static int
 __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
-		struct page *page, struct iomap *srcmap)
+		struct folio *folio, struct iomap *srcmap)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = iomap_page_create(inode, folio);
 	loff_t block_size = i_blocksize(inode);
 	loff_t block_start = round_down(pos, block_size);
@@ -583,12 +582,14 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 	return 0;
 }
 
-static int
-iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
-		struct page **pagep, struct iomap *iomap, struct iomap *srcmap)
+static int iomap_write_begin(struct inode *inode, loff_t pos, size_t len,
+		unsigned flags, struct folio **foliop, struct iomap *iomap,
+		struct iomap *srcmap)
 {
 	const struct iomap_page_ops *page_ops = iomap->page_ops;
+	struct folio *folio;
 	struct page *page;
+	unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
 	int status = 0;
 
 	BUG_ON(pos + len > iomap->offset + iomap->length);
@@ -604,30 +605,31 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 		return status;
 	}
 
-	page = grab_cache_page_write_begin(inode->i_mapping, pos >> PAGE_SHIFT,
-			AOP_FLAG_NOFS);
-	if (!page) {
+	folio = __filemap_get_folio(inode->i_mapping, pos >> PAGE_SHIFT, fgp,
+			mapping_gfp_mask(inode->i_mapping));
+	if (!folio) {
 		status = -ENOMEM;
 		goto out_no_page;
 	}
 
+	page = folio_file_page(folio, pos >> PAGE_SHIFT);
 	if (srcmap->type == IOMAP_INLINE)
 		iomap_read_inline_data(inode, page, srcmap);
 	else if (iomap->flags & IOMAP_F_BUFFER_HEAD)
 		status = __block_write_begin_int(page, pos, len, NULL, srcmap);
 	else
-		status = __iomap_write_begin(inode, pos, len, flags, page,
+		status = __iomap_write_begin(inode, pos, len, flags, folio,
 				srcmap);
 
 	if (unlikely(status))
 		goto out_unlock;
 
-	*pagep = page;
+	*foliop = folio;
 	return 0;
 
 out_unlock:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 	iomap_write_failed(inode, pos, len);
 
 out_no_page:
@@ -637,11 +639,10 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 }
 
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
-		size_t copied, struct page *page)
+		size_t copied, struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
-	flush_dcache_page(page);
+	flush_dcache_folio(folio);
 
 	/*
	 * The blocks that were entirely written will now be uptodate, so we
@@ -654,10 +655,10 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 * uptodate page as a zero-length write, and force the caller to redo
 	 * the whole thing.
 	 */
-	if (unlikely(copied < len && !PageUptodate(page)))
+	if (unlikely(copied < len && !folio_test_uptodate(folio)))
 		return 0;
 	iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len);
-	__set_page_dirty_nobuffers(page);
+	filemap_dirty_folio(inode->i_mapping, folio);
 	return copied;
 }
 
@@ -680,9 +681,10 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
 
 /* Returns the number of bytes copied.  May be 0.  Cannot be an errno. */
 static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len,
-		size_t copied, struct page *page, struct iomap *iomap,
+		size_t copied, struct folio *folio, struct iomap *iomap,
 		struct iomap *srcmap)
 {
+	struct page *page = folio_file_page(folio, pos / PAGE_SIZE);
 	const struct iomap_page_ops *page_ops = iomap->page_ops;
 	loff_t old_size = inode->i_size;
 	size_t ret;
@@ -693,7 +695,7 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		ret = block_write_end(NULL, inode->i_mapping, pos, len, copied,
 				page, NULL);
 	} else {
-		ret = __iomap_write_end(inode, pos, len, copied, page);
+		ret = __iomap_write_end(inode, pos, len, copied, folio);
 	}
 
 	/*
@@ -705,13 +707,13 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		i_size_write(inode, pos + ret);
 		iomap->flags |= IOMAP_F_SIZE_CHANGED;
 	}
-	unlock_page(page);
+	folio_unlock(folio);
 
 	if (old_size < pos)
 		pagecache_isize_extended(inode, old_size, pos);
 	if (page_ops && page_ops->page_done)
 		page_ops->page_done(inode, pos, ret, page, iomap);
-	put_page(page);
+	folio_put(folio);
 
 	if (ret < len)
 		iomap_write_failed(inode, pos, len);
@@ -727,6 +729,7 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	ssize_t written = 0;
 
 	do {
+		struct folio *folio;
 		struct page *page;
 		unsigned long offset;	/* Offset into pagecache page */
 		unsigned long bytes;	/* Bytes to write to page */
@@ -750,18 +753,19 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 			break;
 		}
 
-		status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap,
+		status = iomap_write_begin(inode, pos, bytes, 0, &folio, iomap,
 				srcmap);
 		if (unlikely(status))
 			break;
 
+		page = folio_file_page(folio, pos / PAGE_SIZE);
 		if (mapping_writably_mapped(inode->i_mapping))
 			flush_dcache_page(page);
 
 		copied = copy_page_from_iter_atomic(page, offset, bytes, i);
 
-		status = iomap_write_end(inode, pos, bytes, copied, page, iomap,
-				srcmap);
+		status = iomap_write_end(inode, pos, bytes, copied, folio,
+				iomap, srcmap);
 
 		if (unlikely(copied != status))
 			iov_iter_revert(i, copied - status);
@@ -825,14 +829,14 @@ iomap_unshare_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	do {
 		unsigned long offset = offset_in_page(pos);
 		unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length);
-		struct page *page;
+		struct folio *folio;
 
 		status = iomap_write_begin(inode, pos, bytes,
-				IOMAP_WRITE_F_UNSHARE, &page, iomap, srcmap);
+				IOMAP_WRITE_F_UNSHARE, &folio, iomap, srcmap);
 		if (unlikely(status))
 			return status;
 
-		status = iomap_write_end(inode, pos, bytes, bytes, page, iomap,
+		status = iomap_write_end(inode, pos, bytes, bytes, folio, iomap,
 				srcmap);
 		if (WARN_ON_ONCE(status == 0))
 			return -EIO;
@@ -871,19 +875,19 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare);
 static s64 iomap_zero(struct inode *inode, loff_t pos, u64 length,
 		struct iomap *iomap, struct iomap *srcmap)
 {
-	struct page *page;
+	struct folio *folio;
 	int status;
 	unsigned offset = offset_in_page(pos);
 	unsigned bytes = min_t(u64, PAGE_SIZE - offset, length);
 
-	status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap, srcmap);
+	status = iomap_write_begin(inode, pos, bytes, 0, &folio, iomap, srcmap);
 	if (status)
 		return status;
 
-	zero_user(page, offset, bytes);
-	mark_page_accessed(page);
+	zero_user(folio_file_page(folio, pos / PAGE_SIZE), offset, bytes);
+	folio_mark_accessed(folio);
 
-	return iomap_write_end(inode, pos, bytes, bytes, page, iomap, srcmap);
+	return iomap_write_end(inode, pos, bytes, bytes, folio, iomap, srcmap);
 }
 
 static loff_t iomap_zero_range_actor(struct inode *inode, loff_t pos,
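The folio lookup idiom the patch introduces is worth isolating. Below
is a hedged sketch with the surrounding iomap machinery stripped away;
my_get_write_folio() is invented, but the FGP flags and the two library
calls are exactly the ones the diff above uses. FGP_STABLE makes the
lookup wait for writeback to finish, which the old
grab_cache_page_write_begin() path did not fold into the lookup.

/* Hedged sketch: get a locked pagecache folio for a buffered write. */
static struct folio *my_get_write_folio(struct address_space *mapping,
		loff_t pos)
{
	unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE |
			FGP_NOFS;
	struct folio *folio;

	folio = __filemap_get_folio(mapping, pos >> PAGE_SHIFT, fgp,
			mapping_gfp_mask(mapping));
	if (!folio)
		return ERR_PTR(-ENOMEM);
	/*
	 * Callers that still need a struct page, e.g. for the
	 * buffer_head helpers, can ask for the page containing 'pos':
	 * page = folio_file_page(folio, pos >> PAGE_SHIFT);
	 */
	return folio;
}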
From patchwork Thu Jul 15 03:36:29 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378851
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 103/138] iomap: Convert iomap_read_inline_data to take a folio
Date: Thu, 15 Jul 2021 04:36:29 +0100
Message-Id: <20210715033704.692967-104-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Inline data is restricted to being less than a page in size, so we
don't need to handle multi-page folios.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 5e0aa23d4693..c616ef1feb21 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -194,24 +194,24 @@ struct iomap_readpage_ctx {
 	struct readahead_control *rac;
 };
 
-static void
-iomap_read_inline_data(struct inode *inode, struct page *page,
+static void iomap_read_inline_data(struct inode *inode, struct folio *folio,
 		struct iomap *iomap)
 {
 	size_t size = i_size_read(inode);
 	void *addr;
 
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		return;
 
-	BUG_ON(page->index);
+	BUG_ON(folio->index);
+	BUG_ON(folio_multi(folio));
 	BUG_ON(size > PAGE_SIZE - offset_in_page(iomap->inline_data));
 
-	addr = kmap_atomic(page);
+	addr = kmap_local_folio(folio, 0);
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - size);
-	kunmap_atomic(addr);
-	SetPageUptodate(page);
+	kunmap_local(addr);
+	folio_mark_uptodate(folio);
 }
 
 static inline bool iomap_block_needs_zeroing(struct inode *inode,
@@ -236,7 +236,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 	if (iomap->type == IOMAP_INLINE) {
 		WARN_ON_ONCE(pos);
-		iomap_read_inline_data(inode, &folio->page, iomap);
+		iomap_read_inline_data(inode, folio, iomap);
 		return PAGE_SIZE;
 	}
 
@@ -614,7 +614,7 @@ static int iomap_write_begin(struct inode *inode, loff_t pos, size_t len,
 
 	page = folio_file_page(folio, pos >> PAGE_SHIFT);
 	if (srcmap->type == IOMAP_INLINE)
-		iomap_read_inline_data(inode, page, srcmap);
+		iomap_read_inline_data(inode, folio, srcmap);
 	else if (iomap->flags & IOMAP_F_BUFFER_HEAD)
 		status = __block_write_begin_int(page, pos, len, NULL, srcmap);
 	else
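The kmap_local_folio()/kunmap_local() pairing the patch adopts is a
general idiom, sketched below under stated assumptions: my_copy_inline()
is invented, 'data' and 'size' stand in for iomap->inline_data and
i_size, and size must be at most PAGE_SIZE, matching the BUG_ON in the
diff above.

static void my_copy_inline(struct folio *folio, const void *data,
		size_t size)
{
	/* Map the folio starting at byte offset 0. */
	void *addr = kmap_local_folio(folio, 0);

	memcpy(addr, data, size);
	memset(addr + size, 0, PAGE_SIZE - size);	/* zero the tail */
	/* kunmap_local() takes the address returned by the map call. */
	kunmap_local(addr);
	folio_mark_uptodate(folio);
}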
From patchwork Thu Jul 15 03:36:30 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378853
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 104/138] iomap: Convert iomap_write_end_inline to take a folio
Date: Thu, 15 Jul 2021 04:36:30 +0100
Message-Id: <20210715033704.692967-105-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Inline data only occupies a single page, but using a folio means that
we don't need to call compound_head() in PageUptodate().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index c616ef1feb21..ac33f19325ab 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -662,18 +662,18 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	return copied;
 }
 
-static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
+static size_t iomap_write_end_inline(struct inode *inode, struct folio *folio,
 		struct iomap *iomap, loff_t pos, size_t copied)
 {
 	void *addr;
 
-	WARN_ON_ONCE(!PageUptodate(page));
+	WARN_ON_ONCE(!folio_test_uptodate(folio));
 	BUG_ON(pos + copied > PAGE_SIZE - offset_in_page(iomap->inline_data));
 
-	flush_dcache_page(page);
-	addr = kmap_atomic(page);
+	flush_dcache_folio(folio);
+	addr = kmap_local_folio(folio, 0);
 	memcpy(iomap->inline_data + pos, addr + pos, copied);
-	kunmap_atomic(addr);
+	kunmap_local(addr);
 
 	mark_inode_dirty(inode);
 	return copied;
@@ -690,7 +690,7 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	size_t ret;
 
 	if (srcmap->type == IOMAP_INLINE) {
-		ret = iomap_write_end_inline(inode, page, iomap, pos, copied);
+		ret = iomap_write_end_inline(inode, folio, iomap, pos, copied);
 	} else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) {
 		ret = block_write_end(NULL, inode->i_mapping, pos, len, copied,
 				page, NULL);
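The compound_head() remark in the commit message can be illustrated in
two lines. This is a hedged illustration, not code from the patch: the
page-based test must first resolve a possible tail page to its head
before reading flags, while the folio-based test cannot be handed a
tail page, so the lookup disappears entirely.

static bool my_page_uptodate(struct page *page)
{
	return PageUptodate(page);	/* implies compound_head(page) */
}

static bool my_folio_uptodate(struct folio *folio)
{
	return folio_test_uptodate(folio);	/* no head lookup needed */
}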
From patchwork Thu Jul 15 03:36:31 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378855
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 105/138] iomap: Convert iomap_add_to_ioend to take a folio
Date: Thu, 15 Jul 2021 04:36:31 +0100
Message-Id: <20210715033704.692967-106-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

We still iterate one block at a time, but now we call compound_head()
less often.  Rename file_offset to pos to fit the rest of the file.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 66 +++++++++++++++++++-----------------------
 1 file changed, 30 insertions(+), 36 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ac33f19325ab..8e767aec8d07 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1252,36 +1252,29 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
  * first, otherwise finish off the current ioend and start another.
  */
 static void
-iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
+iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
 		struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct list_head *iolist)
 {
-	sector_t sector = iomap_sector(&wpc->iomap, offset);
+	sector_t sector = iomap_sector(&wpc->iomap, pos);
 	unsigned len = i_blocksize(inode);
-	unsigned poff = offset & (PAGE_SIZE - 1);
-	bool merged, same_page = false;
+	size_t poff = offset_in_folio(folio, pos);
 
-	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
+	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
+		wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc);
 	}
 
-	merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
-			&same_page);
 	if (iop)
 		atomic_add(len, &iop->write_bytes_pending);
-
-	if (!merged) {
-		if (bio_full(wpc->ioend->io_bio, len)) {
-			wpc->ioend->io_bio =
-				iomap_chain_bio(wpc->ioend->io_bio);
-		}
-		bio_add_page(wpc->ioend->io_bio, page, len, poff);
+	if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
+		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
+		bio_add_folio(wpc->ioend->io_bio, folio, len, poff);
 	}
 
 	wpc->ioend->io_size += len;
-	wbc_account_cgroup_owner(wbc, page, len);
+	wbc_account_cgroup_owner(wbc, &folio->page, len);
 }
 
 /*
@@ -1309,40 +1302,41 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	struct iomap_page *iop = to_iomap_page(folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
-	u64 file_offset; /* file offset of page */
+	unsigned nblocks = i_blocks_per_folio(inode, folio);
+	loff_t pos = folio_pos(folio);
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+	WARN_ON_ONCE(nblocks > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
 
 	/*
-	 * Walk through the page to find areas to write back. If we run off the
-	 * end of the current map or find the current map invalid, grab a new
-	 * one.
+	 * Walk through the folio to find areas to write back. If we
+	 * run off the end of the current map or find the current map
+	 * invalid, grab a new one.
 	 */
-	for (i = 0, file_offset = page_offset(page);
-	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
-	     i++, file_offset += len) {
+	for (i = 0; i < nblocks; i++, pos += len) {
+		if (pos >= end_offset)
+			break;
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
 
-		error = wpc->ops->map_blocks(wpc, inode, file_offset);
+		error = wpc->ops->map_blocks(wpc, inode, pos);
 		if (error)
 			break;
 		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc,
 				&submit_list);
 		count++;
 	}
 
 	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
-	WARN_ON_ONCE(!PageLocked(page));
-	WARN_ON_ONCE(PageWriteback(page));
-	WARN_ON_ONCE(PageDirty(page));
+	WARN_ON_ONCE(!folio_test_locked(folio));
+	WARN_ON_ONCE(folio_test_writeback(folio));
+	WARN_ON_ONCE(folio_test_dirty(folio));
 
 	/*
	 * We cannot cancel the ioend directly here on error.  We may have
@@ -1358,16 +1352,16 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		 * now.
 		 */
 		if (wpc->ops->discard_page)
-			wpc->ops->discard_page(page, file_offset);
+			wpc->ops->discard_page(&folio->page, pos);
 		if (!count) {
-			ClearPageUptodate(page);
-			unlock_page(page);
+			folio_clear_uptodate(folio);
+			folio_unlock(folio);
 			goto done;
 		}
 	}
 
-	set_page_writeback(page);
-	unlock_page(page);
+	folio_start_writeback(folio);
+	folio_unlock(folio);
 
 	/*
	 * Preserve the original error if there was one, otherwise catch
@@ -1388,9 +1382,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * with a partial page truncate on a sub-page block sized filesystem.
 	 */
 	if (!count)
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 done:
-	mapping_set_error(page->mapping, error);
+	mapping_set_error(folio->mapping, error);
 	return error;
 }
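The add-or-chain idiom introduced above generalizes beyond iomap, so a
hedged sketch of it in isolation may help: bio_add_folio() returns
false when the bio is full, in which case a fresh bio is chained and
the add retried; the retry cannot fail for a single block-sized range
on an empty bio. my_chain_bio() is an invented stand-in for
iomap_chain_bio().

static struct bio *my_chain_bio(struct bio *prev);	/* invented */

static void my_add_block(struct bio **biop, struct folio *folio,
		size_t poff, unsigned len)
{
	if (!bio_add_folio(*biop, folio, len, poff)) {
		/* Full: chain a new bio and retry the same range. */
		*biop = my_chain_bio(*biop);
		bio_add_folio(*biop, folio, len, poff);
	}
}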
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 106/138] iomap: Convert iomap_do_writepage to use a folio
Date: Thu, 15 Jul 2021 04:36:32 +0100
Message-Id: <20210715033704.692967-107-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Writeback an entire folio at a time, and adjust some of the variables
to have more familiar names.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 49 +++++++++++++++++++-----------------------
 1 file changed, 22 insertions(+), 27 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8e767aec8d07..0731e2c3f44b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1296,9 +1296,8 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
 static int
 iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
-		struct page *page, u64 end_offset)
+		struct folio *folio, loff_t end_pos)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
@@ -1316,7 +1315,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * invalid, grab a new one.
 	 */
 	for (i = 0; i < nblocks; i++, pos += len) {
-		if (pos >= end_offset)
+		if (pos >= end_pos)
 			break;
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
@@ -1398,16 +1397,15 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 static int
 iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
+	struct folio *folio = page_folio(page);
 	struct iomap_writepage_ctx *wpc = data;
-	struct inode *inode = page->mapping->host;
-	pgoff_t end_index;
-	u64 end_offset;
-	loff_t offset;
+	struct inode *inode = folio->mapping->host;
+	loff_t end_pos, isize;
 
-	trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE);
+	trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));
 
 	/*
-	 * Refuse to write the page out if we are called from reclaim context.
+	 * Refuse to write the folio out if we are called from reclaim context.
 	 *
 	 * This avoids stack overflows when called from deeply used stacks in
 	 * random callers for direct reclaim or memcg reclaim. We explicitly
@@ -1421,10 +1419,10 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		goto redirty;
 
 	/*
-	 * Is this page beyond the end of the file?
+	 * Is this folio beyond the end of the file?
 	 *
-	 * The page index is less than the end_index, adjust the end_offset
-	 * to the highest offset that this page should represent.
+	 * The folio index is less than the end_index, adjust the end_pos
+	 * to the highest offset that this folio should represent.
 	 * -----------------------------------------------------
 	 * |			file mapping		| <EOF> |
 	 * -----------------------------------------------------
@@ -1433,11 +1431,9 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * |	desired writeback range	|	see else	|
 	 * ---------------------------------^------------------|
 	 */
-	offset = i_size_read(inode);
-	end_index = offset >> PAGE_SHIFT;
-	if (page->index < end_index)
-		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
-	else {
+	isize = i_size_read(inode);
+	end_pos = folio_pos(folio) + folio_size(folio);
+	if (end_pos - 1 >= isize) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1449,7 +1445,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |			|	Straddles	|
 		 * ---------------------------------^-----------|--------|
 		 */
-		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+		size_t poff = offset_in_folio(folio, isize);
+		pgoff_t end_index = isize >> PAGE_SHIFT;
 
 		/*
 		 * Skip the page if it is fully outside i_size, e.g. due to a
@@ -1468,8 +1465,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * if the page to write is totally beyond the i_size or if it's
 		 * offset is just equal to the EOF.
 		 */
-		if (page->index > end_index ||
-		    (page->index == end_index && offset_into_page == 0))
+		if (folio->index > end_index ||
+		    (folio->index == end_index && poff == 0))
 			goto redirty;
 
 		/*
@@ -1480,17 +1477,15 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
 		 */
-		zero_user_segment(page, offset_into_page, PAGE_SIZE);
-
-		/* Adjust the end_offset to the end of file */
-		end_offset = offset;
+		zero_user_segment(&folio->page, poff, folio_size(folio));
+		end_pos = isize;
 	}
 
-	return iomap_writepage_map(wpc, wbc, inode, page, end_offset);
+	return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);
 
 redirty:
-	redirty_page_for_writepage(wbc, page);
-	unlock_page(page);
+	folio_redirty_for_writepage(wbc, folio);
+	folio_unlock(folio);
 	return 0;
 }

From patchwork Thu Jul 15 03:36:33 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378877
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 107/138] iomap: Convert iomap_migrate_page to use folios
Date: Thu, 15 Jul 2021 04:36:33 +0100
Message-Id: <20210715033704.692967-108-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

The arguments are still pages for now, but we can use folios internally
and cut out a lot of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 0731e2c3f44b..48de198c5603 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -490,19 +490,21 @@ int
 iomap_migrate_page(struct address_space *mapping, struct page *newpage,
 		struct page *page, enum migrate_mode mode)
 {
+	struct folio *folio = page_folio(page);
+	struct folio *newfolio = page_folio(newpage);
 	int ret;
 
-	ret = migrate_page_move_mapping(mapping, newpage, page, 0);
+	ret = folio_migrate_mapping(mapping, newfolio, folio, 0);
 	if (ret != MIGRATEPAGE_SUCCESS)
 		return ret;
 
-	if (page_has_private(page))
-		attach_page_private(newpage, detach_page_private(page));
+	if (folio_test_private(folio))
+		folio_attach_private(newfolio, folio_detach_private(folio));
 
 	if (mode != MIGRATE_SYNC_NO_COPY)
-		migrate_page_copy(newpage, page);
+		folio_migrate_copy(newfolio, folio);
 	else
-		migrate_page_states(newpage, page);
+		folio_migrate_flags(newfolio, folio);
 	return MIGRATEPAGE_SUCCESS;
 }
 EXPORT_SYMBOL_GPL(iomap_migrate_page);

From patchwork Thu Jul 15 03:36:34 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378879
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 108/138] mm/filemap: Convert page_cache_delete to take a folio
Date: Thu, 15 Jul 2021 04:36:34 +0100
Message-Id: <20210715033704.692967-109-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

It was already assuming a head page, so this is a straightforward
conversion.  Convert the one caller to call page_folio(), even though
it must currently be passing in a head page.
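A minimal sketch of the caller-side pattern this conversion relies on
(illustrative only, not part of the patch: the helper function below is
hypothetical, while page_folio() and the folio lock functions are the
real APIs introduced earlier in this series):

	/* Hoist from a possibly-tail struct page to its folio once,
	 * then use folio operations throughout; this is what removes
	 * the repeated hidden compound_head() calls. */
	static void example_with_folio(struct page *page)
	{
		struct folio *folio = page_folio(page);

		folio_lock(folio);
		/* ... operate on the whole folio ... */
		folio_unlock(folio);
	}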
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 0434c5a55fec..c96febad32fc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -120,27 +120,26 @@
 */
 static void page_cache_delete(struct address_space *mapping,
-				   struct page *page, void *shadow)
+				   struct folio *folio, void *shadow)
 {
-	XA_STATE(xas, &mapping->i_pages, page->index);
+	XA_STATE(xas, &mapping->i_pages, folio->index);
 	unsigned int nr = 1;
 
 	mapping_set_update(&xas, mapping);
 
 	/* hugetlb pages are represented by a single entry in the xarray */
-	if (!PageHuge(page)) {
-		xas_set_order(&xas, page->index, compound_order(page));
-		nr = compound_nr(page);
+	if (!folio_test_hugetlb(folio)) {
+		xas_set_order(&xas, folio->index, folio_order(folio));
+		nr = folio_nr_pages(folio);
 	}
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(nr != 1 && shadow, page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(nr != 1 && shadow, folio);
 
 	xas_store(&xas, shadow);
 	xas_init_marks(&xas);
 
-	page->mapping = NULL;
+	folio->mapping = NULL;
 	/* Leave page->index set: truncation lookup relies upon it */
 	mapping->nrpages -= nr;
 }
@@ -222,12 +221,13 @@ static void unaccount_page_cache_page(struct address_space *mapping,
 */
 void __delete_from_page_cache(struct page *page, void *shadow)
 {
+	struct folio *folio = page_folio(page);
 	struct address_space *mapping = page->mapping;
 
 	trace_mm_filemap_delete_from_page_cache(page);
 
 	unaccount_page_cache_page(mapping, page);
-	page_cache_delete(mapping, page, shadow);
+	page_cache_delete(mapping, folio, shadow);
 }
 
 static void page_cache_free_page(struct address_space *mapping,

From patchwork Thu Jul 15 03:36:35 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378881
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 109/138] mm/filemap: Convert unaccount_page_cache_page to filemap_unaccount_folio
Date: Thu, 15 Jul 2021 04:36:35 +0100
Message-Id: <20210715033704.692967-110-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Use folios throughout filemap_unaccount_folio(), except for the bug
handling path which would need to use total_mapcount(), which is
currently only defined for builds with THP enabled.
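For readers new to the folio API, the accounting idiom the patch moves
to looks roughly like this (a sketch only, not part of the patch: the
helper name is made up, the functions it calls are the real ones used
in the diff below):

	/* A folio covering N pages must adjust the node/memcg counters
	 * by N, not by 1; folio_nr_pages() makes that multiplier
	 * explicit instead of hiding it behind thp_nr_pages(). */
	static void example_unaccount(struct folio *folio)
	{
		long nr = folio_nr_pages(folio);

		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
		if (folio_test_swapbacked(folio))
			__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
	}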
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  5 ---
 mm/filemap.c            | 68 ++++++++++++++++++++---------------------
 2 files changed, 34 insertions(+), 39 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 83c1a798265f..f6a2a2589009 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -786,11 +786,6 @@ static inline void __set_page_dirty(struct page *page,
 }
 void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
			  struct bdi_writeback *wb);
-static inline void account_page_cleaned(struct page *page,
-		struct address_space *mapping, struct bdi_writeback *wb)
-{
-	return folio_account_cleaned(page_folio(page), mapping, wb);
-}
 void __folio_cancel_dirty(struct folio *folio);
 static inline void folio_cancel_dirty(struct folio *folio)
 {
diff --git a/mm/filemap.c b/mm/filemap.c
index c96febad32fc..6e8b195edf19 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -144,8 +144,8 @@ static void page_cache_delete(struct address_space *mapping,
 	mapping->nrpages -= nr;
 }
 
-static void unaccount_page_cache_page(struct address_space *mapping,
-				struct page *page)
+static void filemap_unaccount_folio(struct address_space *mapping,
+		struct folio *folio)
 {
 	int nr;
 
@@ -154,64 +154,64 @@ static void unaccount_page_cache_page(struct address_space *mapping,
 	 * invalidate any existing cleancache entries.  We can't leave
 	 * stale data around in the cleancache once our page is gone
 	 */
-	if (PageUptodate(page) && PageMappedToDisk(page))
-		cleancache_put_page(page);
+	if (folio_test_uptodate(folio) && folio_test_mappedtodisk(folio))
+		cleancache_put_page(&folio->page);
 	else
-		cleancache_invalidate_page(mapping, page);
+		cleancache_invalidate_page(mapping, &folio->page);
 
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_mapped(page), page);
-	if (!IS_ENABLED(CONFIG_DEBUG_VM) && unlikely(page_mapped(page))) {
+	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
+	if (!IS_ENABLED(CONFIG_DEBUG_VM) && unlikely(folio_mapped(folio))) {
 		int mapcount;
 
 		pr_alert("BUG: Bad page cache in process %s  pfn:%05lx\n",
-			 current->comm, page_to_pfn(page));
-		dump_page(page, "still mapped when deleted");
+			 current->comm, folio_pfn(folio));
+		dump_page(&folio->page, "still mapped when deleted");
 		dump_stack();
 		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 
-		mapcount = page_mapcount(page);
+		mapcount = page_mapcount(&folio->page);
 		if (mapping_exiting(mapping) &&
-		    page_count(page) >= mapcount + 2) {
+		    folio_ref_count(folio) >= mapcount + 2) {
 			/*
 			 * All vmas have already been torn down, so it's
-			 * a good bet that actually the page is unmapped,
+			 * a good bet that actually the folio is unmapped,
 			 * and we'd prefer not to leak it: if we're wrong,
 			 * some other bad page check should catch it later.
 			 */
-			page_mapcount_reset(page);
-			page_ref_sub(page, mapcount);
+			page_mapcount_reset(&folio->page);
+			folio_ref_sub(folio, mapcount);
 		}
 	}
 
-	/* hugetlb pages do not participate in page cache accounting. */
-	if (PageHuge(page))
+	/* hugetlb folios do not participate in page cache accounting. */
+	if (folio_test_hugetlb(folio))
 		return;
 
-	nr = thp_nr_pages(page);
+	nr = folio_nr_pages(folio);
 
-	__mod_lruvec_page_state(page, NR_FILE_PAGES, -nr);
-	if (PageSwapBacked(page)) {
-		__mod_lruvec_page_state(page, NR_SHMEM, -nr);
-		if (PageTransHuge(page))
-			__mod_lruvec_page_state(page, NR_SHMEM_THPS, -nr);
-	} else if (PageTransHuge(page)) {
-		__mod_lruvec_page_state(page, NR_FILE_THPS, -nr);
+	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
+	if (folio_test_swapbacked(folio)) {
+		__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
+		if (folio_multi(folio))
+			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
+	} else if (folio_multi(folio)) {
+		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
 		filemap_nr_thps_dec(mapping);
 	}
 
 	/*
-	 * At this point page must be either written or cleaned by
-	 * truncate.  Dirty page here signals a bug and loss of
+	 * At this point folio must be either written or cleaned by
+	 * truncate.  Dirty folio here signals a bug and loss of
 	 * unwritten data.
 	 *
-	 * This fixes dirty accounting after removing the page entirely
-	 * but leaves PageDirty set: it has no effect for truncated
-	 * page and anyway will be cleared before returning page into
+	 * This fixes dirty accounting after removing the folio entirely
+	 * but leaves the dirty flag set: it has no effect for truncated
+	 * folio and anyway will be cleared before returning folio to
 	 * buddy allocator.
 	 */
-	if (WARN_ON_ONCE(PageDirty(page)))
-		account_page_cleaned(page, mapping, inode_to_wb(mapping->host));
+	if (WARN_ON_ONCE(folio_test_dirty(folio)))
+		folio_account_cleaned(folio, mapping,
+				inode_to_wb(mapping->host));
 }
 
 /*
@@ -226,7 +226,7 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 
 	trace_mm_filemap_delete_from_page_cache(page);
 
-	unaccount_page_cache_page(mapping, page);
+	filemap_unaccount_folio(mapping, folio);
 	page_cache_delete(mapping, folio, shadow);
 }
 
@@ -344,7 +344,7 @@ void delete_from_page_cache_batch(struct address_space *mapping,
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		trace_mm_filemap_delete_from_page_cache(pvec->pages[i]);
 
-		unaccount_page_cache_page(mapping, pvec->pages[i]);
+		filemap_unaccount_folio(mapping, page_folio(pvec->pages[i]));
 	}
 	page_cache_delete_batch(mapping, pvec);
 	xa_unlock_irqrestore(&mapping->i_pages, flags);

From patchwork Thu Jul 15 03:36:36 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378883
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 110/138] mm/filemap: Add filemap_remove_folio and __filemap_remove_folio
Date: Thu, 15 Jul 2021 04:36:36 +0100
Message-Id: <20210715033704.692967-111-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Reimplement __delete_from_page_cache() as a wrapper around
__filemap_remove_folio() and delete_from_page_cache() as a wrapper
around filemap_remove_folio().  Remove the EXPORT_SYMBOL as
delete_from_page_cache() was not used by any in-tree modules.
Convert page_cache_free_page() into filemap_free_folio().
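The split leaves two entry points.  A sketch of how a converted caller
would use the new one (illustrative only: example_evict() is made up;
the locking requirement comes from the kernel-doc in the diff below):

	/* The folio must be locked and known to be in the page cache.
	 * filemap_remove_folio() drops the mapping's reference; the
	 * caller still holds its own, so the unlock is safe. */
	static void example_evict(struct folio *folio)
	{
		folio_lock(folio);
		filemap_remove_folio(folio);
		folio_unlock(folio);
	}

Unconverted code keeps calling delete_from_page_cache(page), which
after this patch is just the folio-compat shim shown in the diff.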
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  9 +++++++--
 mm/filemap.c            | 44 ++++++++++++++++++++---------------------
 mm/folio-compat.c       |  5 +++++
 3 files changed, 33 insertions(+), 25 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index f6a2a2589009..245554ce6b12 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -877,8 +877,13 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 		pgoff_t index, gfp_t gfp);
 int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 		pgoff_t index, gfp_t gfp);
-extern void delete_from_page_cache(struct page *page);
-extern void __delete_from_page_cache(struct page *page, void *shadow);
+void filemap_remove_folio(struct folio *folio);
+void delete_from_page_cache(struct page *page);
+void __filemap_remove_folio(struct folio *folio, void *shadow);
+static inline void __delete_from_page_cache(struct page *page, void *shadow)
+{
+	__filemap_remove_folio(page_folio(page), shadow);
+}
 void replace_page_cache_page(struct page *old, struct page *new);
 void delete_from_page_cache_batch(struct address_space *mapping,
				  struct pagevec *pvec);
diff --git a/mm/filemap.c b/mm/filemap.c
index 6e8b195edf19..4a81eaff363e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -219,55 +219,53 @@ static void filemap_unaccount_folio(struct address_space *mapping,
 * sure the page is locked and that nobody else uses it - or that usage
 * is safe.  The caller must hold the i_pages lock.
 */
-void __delete_from_page_cache(struct page *page, void *shadow)
+void __filemap_remove_folio(struct folio *folio, void *shadow)
 {
-	struct folio *folio = page_folio(page);
-	struct address_space *mapping = page->mapping;
+	struct address_space *mapping = folio->mapping;
 
-	trace_mm_filemap_delete_from_page_cache(page);
+	trace_mm_filemap_delete_from_page_cache(&folio->page);
 
 	filemap_unaccount_folio(mapping, folio);
 	page_cache_delete(mapping, folio, shadow);
 }
 
-static void page_cache_free_page(struct address_space *mapping,
-				struct page *page)
+static void filemap_free_folio(struct address_space *mapping,
+		struct folio *folio)
 {
 	void (*freepage)(struct page *);
 
 	freepage = mapping->a_ops->freepage;
 	if (freepage)
-		freepage(page);
+		freepage(&folio->page);
 
-	if (PageTransHuge(page) && !PageHuge(page)) {
-		page_ref_sub(page, thp_nr_pages(page));
-		VM_BUG_ON_PAGE(page_count(page) <= 0, page);
+	if (folio_multi(folio) && !folio_test_hugetlb(folio)) {
+		folio_ref_sub(folio, folio_nr_pages(folio));
+		VM_BUG_ON_FOLIO(folio_ref_count(folio) <= 0, folio);
 	} else {
-		put_page(page);
+		folio_put(folio);
 	}
 }
 
 /**
- * delete_from_page_cache - delete page from page cache
- * @page: the page which the kernel is trying to remove from page cache
+ * filemap_remove_folio - Remove folio from page cache.
+ * @folio: The folio.
 *
- * This must be called only on pages that have been verified to be in the page
- * cache and locked.  It will never put the page into the free list, the caller
- * has a reference on the page.
+ * This must be called only on folios that are locked and have been
+ * verified to be in the page cache.  It will never put the folio into
+ * the free list because the caller has a reference on the page.
 */
-void delete_from_page_cache(struct page *page)
+void filemap_remove_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping(page);
+	struct address_space *mapping = folio->mapping;
 	unsigned long flags;
 
-	BUG_ON(!PageLocked(page));
+	BUG_ON(!folio_test_locked(folio));
 	xa_lock_irqsave(&mapping->i_pages, flags);
-	__delete_from_page_cache(page, NULL);
+	__filemap_remove_folio(folio, NULL);
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 
-	page_cache_free_page(mapping, page);
+	filemap_free_folio(mapping, folio);
 }
-EXPORT_SYMBOL(delete_from_page_cache);
 
 /*
 * page_cache_delete_batch - delete several pages from page cache
@@ -350,7 +348,7 @@ void delete_from_page_cache_batch(struct address_space *mapping,
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 
 	for (i = 0; i < pagevec_count(pvec); i++)
-		page_cache_free_page(mapping, pvec->pages[i]);
+		filemap_free_folio(mapping, page_folio(pvec->pages[i]));
 }
 
 int filemap_check_errors(struct address_space *mapping)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 5b6ae1da314e..749a695b4217 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -140,3 +140,8 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
			mapping_gfp_mask(mapping));
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
+
+void delete_from_page_cache(struct page *page)
+{
+	return filemap_remove_folio(page_folio(page));
+}

From patchwork Thu Jul 15 03:36:37 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378897
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 111/138] mm/filemap: Convert find_get_entry to return a folio
Date: Thu, 15 Jul 2021 04:36:37 +0100
Message-Id: <20210715033704.692967-112-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Convert callers to cope.  Saves 580 bytes of kernel text; all five
callers are reduced in size.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 129 +++++++++++++++++++++++++--------------------------
 1 file changed, 64 insertions(+), 65 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 4a81eaff363e..c4190c0a6d86 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1907,37 +1907,36 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 }
 EXPORT_SYMBOL(__filemap_get_folio);
 
-static inline struct page *find_get_entry(struct xa_state *xas, pgoff_t max,
+static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
 		xa_mark_t mark)
 {
-	struct page *page;
+	struct folio *folio;
 
retry:
 	if (mark == XA_PRESENT)
-		page = xas_find(xas, max);
+		folio = xas_find(xas, max);
 	else
-		page = xas_find_marked(xas, max, mark);
+		folio = xas_find_marked(xas, max, mark);
 
-	if (xas_retry(xas, page))
+	if (xas_retry(xas, folio))
 		goto retry;
 
 	/*
 	 * A shadow entry of a recently evicted page, a swap
 	 * entry from shmem/tmpfs or a DAX entry.  Return it
 	 * without attempting to raise page count.
 	 */
-	if (!page || xa_is_value(page))
-		return page;
+	if (!folio || xa_is_value(folio))
+		return folio;
 
-	if (!page_cache_get_speculative(page))
+	if (!folio_try_get_rcu(folio))
 		goto reset;
 
-	/* Has the page moved or been split? */
-	if (unlikely(page != xas_reload(xas))) {
-		put_page(page);
+	if (unlikely(folio != xas_reload(xas))) {
+		folio_put(folio);
 		goto reset;
 	}
 
-	return page;
+	return folio;
reset:
 	xas_reset(xas);
 	goto retry;
@@ -1978,7 +1977,7 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 	unsigned nr_entries = PAGEVEC_SIZE;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
+	while ((page = &find_get_entry(&xas, end, XA_PRESENT)->page)) {
 		/*
 		 * Terminate early on finding a THP, to allow the caller to
 		 * handle it all at once; but continue if this is hugetlbfs.
@@ -2025,38 +2024,38 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct pagevec *pvec, pgoff_t *indices)
 {
 	XA_STATE(xas, &mapping->i_pages, start);
-	struct page *page;
+	struct folio *folio;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
-		if (!xa_is_value(page)) {
-			if (page->index < start)
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
+		if (!xa_is_value(folio)) {
+			if (folio->index < start)
 				goto put;
-			VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
-			if (page->index + thp_nr_pages(page) - 1 > end)
+			VM_BUG_ON_FOLIO(folio->index != xas.xa_index,
+					folio);
+			if (folio->index + folio_nr_pages(folio) - 1 > end)
 				goto put;
-			if (!trylock_page(page))
+			if (!folio_trylock(folio))
 				goto put;
-			if (page->mapping != mapping || PageWriteback(page))
+			if (folio->mapping != mapping ||
+			    folio_test_writeback(folio))
 				goto unlock;
-			VM_BUG_ON_PAGE(!thp_contains(page, xas.xa_index),
-					page);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
+					folio);
 		}
 		indices[pvec->nr] = xas.xa_index;
-		if (!pagevec_add(pvec, page))
+		if (!pagevec_add(pvec, &folio->page))
 			break;
 		goto next;
unlock:
-		unlock_page(page);
+		folio_unlock(folio);
put:
-		put_page(page);
+		folio_put(folio);
next:
-		if (!xa_is_value(page) && PageTransHuge(page)) {
-			unsigned int nr_pages = thp_nr_pages(page);
-
-			/* Final THP may cross MAX_LFS_FILESIZE on 32-bit */
-			xas_set(&xas, page->index + nr_pages);
-			if (xas.xa_index < nr_pages)
+		if (!xa_is_value(folio) && folio_multi(folio)) {
+			xas_set(&xas, folio->index + folio_nr_pages(folio));
+			/* Did we wrap on 32-bit? */
+			if (!xas.xa_index)
 				break;
 		}
 	}
@@ -2091,19 +2090,19 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 		struct page **pages)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
-	struct page *page;
+	struct folio *folio;
 	unsigned ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
 		/* Skip over shadow, swap and DAX entries */
-		if (xa_is_value(page))
+		if (xa_is_value(folio))
 			continue;
 
-		pages[ret] = find_subpage(page, xas.xa_index);
+		pages[ret] = folio_file_page(folio, xas.xa_index);
 		if (++ret == nr_pages) {
 			*start = xas.xa_index + 1;
 			goto out;
 		}
 	}
@@ -2200,25 +2199,25 @@ unsigned find_get_pages_range_tag(struct address_space *mapping,
 		pgoff_t *index, struct page **pages)
 {
 	XA_STATE(xas, &mapping->i_pages, *index);
-	struct page *page;
+	struct folio *folio;
 	unsigned ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, tag))) {
+	while ((folio = find_get_entry(&xas, end, tag))) {
 		/*
 		 * Shadow entries should never be tagged, but this iteration
 		 * is lockless so there is a window for page reclaim to evict
 		 * a page we saw tagged.  Skip over it.
 		 */
-		if (xa_is_value(page))
+		if (xa_is_value(folio))
 			continue;
 
-		pages[ret] = page;
+		pages[ret] = &folio->page;
 		if (++ret == nr_pages) {
-			*index = page->index + thp_nr_pages(page);
+			*index = folio->index + folio_nr_pages(folio);
 			goto out;
 		}
 	}
@@ -2697,44 +2696,44 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 }
 EXPORT_SYMBOL(generic_file_read_iter);
 
-static inline loff_t page_seek_hole_data(struct xa_state *xas,
-		struct address_space *mapping, struct page *page,
+static inline loff_t folio_seek_hole_data(struct xa_state *xas,
+		struct address_space *mapping, struct folio *folio,
 		loff_t start, loff_t end, bool seek_data)
 {
 	const struct address_space_operations *ops = mapping->a_ops;
 	size_t offset, bsz = i_blocksize(mapping->host);
 
-	if (xa_is_value(page) || PageUptodate(page))
+	if (xa_is_value(folio) || folio_test_uptodate(folio))
 		return seek_data ? start : end;
 	if (!ops->is_partially_uptodate)
 		return seek_data ? end : start;
 
 	xas_pause(xas);
 	rcu_read_unlock();
-	lock_page(page);
-	if (unlikely(page->mapping != mapping))
+	folio_lock(folio);
+	if (unlikely(folio->mapping != mapping))
 		goto unlock;
 
-	offset = offset_in_thp(page, start) & ~(bsz - 1);
+	offset = offset_in_folio(folio, start) & ~(bsz - 1);
 
 	do {
-		if (ops->is_partially_uptodate(page, offset, bsz) == seek_data)
+		if (ops->is_partially_uptodate(&folio->page, offset, bsz) ==
+				seek_data)
 			break;
 		start = (start + bsz) & ~(bsz - 1);
 		offset += bsz;
-	} while (offset < thp_size(page));
+	} while (offset < folio_size(folio));
unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 	rcu_read_lock();
 	return start;
 }
 
-static inline
-unsigned int seek_page_size(struct xa_state *xas, struct page *page)
+static inline size_t seek_folio_size(struct xa_state *xas, struct folio *folio)
 {
-	if (xa_is_value(page))
+	if (xa_is_value(folio))
 		return PAGE_SIZE << xa_get_order(xas->xa, xas->xa_index);
-	return thp_size(page);
+	return folio_size(folio);
 }
 
 /**
@@ -2761,15 +2760,15 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 	XA_STATE(xas, &mapping->i_pages, start >> PAGE_SHIFT);
 	pgoff_t max = (end - 1) >> PAGE_SHIFT;
 	bool seek_data = (whence == SEEK_DATA);
-	struct page *page;
+	struct folio *folio;
 
 	if (end <= start)
 		return -ENXIO;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, max, XA_PRESENT))) {
-		loff_t pos = (u64)xas.xa_index << PAGE_SHIFT;
-		unsigned int seek_size;
+	while ((folio = find_get_entry(&xas, max, XA_PRESENT))) {
+		loff_t pos = xas.xa_index * PAGE_SIZE;
+		size_t seek_size;
 
 		if (start < pos) {
 			if (!seek_data)
@@ -2777,9 +2776,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 			start = pos;
 		}
 
-		seek_size = seek_page_size(&xas, page);
-		pos = round_up(pos + 1, seek_size);
-		start = page_seek_hole_data(&xas, mapping, page, start, pos,
+		seek_size = seek_folio_size(&xas, folio);
+		pos = round_up((u64)pos + 1, seek_size);
+		start = folio_seek_hole_data(&xas, mapping, folio, start, pos,
 				seek_data);
 		if (start < pos)
 			goto unlock;
@@ -2787,15 +2786,15 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 			break;
 		if (seek_size > PAGE_SIZE)
 			xas_set(&xas, pos >> PAGE_SHIFT);
-		if (!xa_is_value(page))
-			put_page(page);
+		if (!xa_is_value(folio))
+			folio_put(folio);
 	}
 	if (seek_data)
 		start = -ENXIO;
unlock:
 	rcu_read_unlock();
-	if (page && !xa_is_value(page))
-		put_page(page);
+	if (folio && !xa_is_value(folio))
+		folio_put(folio);
 	if (start > end)
 		return end;
 	return start;
}

From patchwork Thu Jul 15 03:36:38 2021
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378899 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44A30C07E96 for ; Thu, 15 Jul 2021 05:09:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E7AAC61362 for ; Thu, 15 Jul 2021 05:09:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E7AAC61362 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 428258D0076; Thu, 15 Jul 2021 01:09:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3FF2D8D0065; Thu, 15 Jul 2021 01:09:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2C8418D0076; Thu, 15 Jul 2021 01:09:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0116.hostedemail.com [216.40.44.116]) by kanga.kvack.org (Postfix) with ESMTP id 093EF8D0065 for ; Thu, 15 Jul 2021 01:09:18 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id F1B6618022CFD for ; Thu, 15 Jul 2021 05:09:16 +0000 (UTC) X-FDA: 78363643512.10.7DB3AEE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id B0AA65000097 for ; Thu, 15 Jul 2021 05:09:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=YqRnP/8a/779PEXECZaJ/RwhwSMj3E4PlE13UIsVZRY=; b=DCrOOoW08pAgZsmPiZpMkbts8P kgrCJi35u5FsXVXQauh0DHyp0lN1fxW0PAF52MVLeXfiux3cFFf01ksA+5BRcUwodT23tDcfA9eK8 OBP3+y6QJ7HpE3vdigyHC3Xa2z7/jdhBaqD/m1QSf++2B2TH9BKVpBBwL7E1yHMUlTiEbH4+Hyn+t T35XIkkul1kql/e7o2CRrw+Yu9ja75HGkXFSLNvD/mLeDZYeGsyWLYYkquA/gebxhcZpW/TqmfrJD n8pwbiSIvtkIOUfOisIiMKAgdjskcSXHuvZV2eEfJBgBMT9YBxc0m6biY0CZQfWbo4JdGlax39L70 HGm1Rd8w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tbP-00301t-O5; Thu, 15 Jul 2021 05:08:16 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 112/138] mm/filemap: Convert filemap_get_read_batch to use folios Date: Thu, 15 Jul 2021 04:36:38 +0100 Message-Id: <20210715033704.692967-113-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DCrOOoW0; spf=none (imf04.hostedemail.com: domain of 
The page cache only stores folios, never tail pages.  Saves 29 bytes
due to removing calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index c4190c0a6d86..04501bf50448 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2272,32 +2272,31 @@ static void filemap_get_read_batch(struct address_space *mapping,
 		pgoff_t index, pgoff_t max, struct pagevec *pvec)
 {
 	XA_STATE(xas, &mapping->i_pages, index);
-	struct page *head;
+	struct folio *folio;
 
 	rcu_read_lock();
-	for (head = xas_load(&xas); head; head = xas_next(&xas)) {
-		if (xas_retry(&xas, head))
+	for (folio = xas_load(&xas); folio; folio = xas_next(&xas)) {
+		if (xas_retry(&xas, folio))
 			continue;
-		if (xas.xa_index > max || xa_is_value(head))
+		if (xas.xa_index > max || xa_is_value(folio))
 			break;
-		if (!page_cache_get_speculative(head))
+		if (!folio_try_get_rcu(folio))
 			goto retry;
 
-		/* Has the page moved or been split? */
-		if (unlikely(head != xas_reload(&xas)))
+		if (unlikely(folio != xas_reload(&xas)))
 			goto put_page;
 
-		if (!pagevec_add(pvec, head))
+		if (!pagevec_add(pvec, &folio->page))
 			break;
-		if (!PageUptodate(head))
+		if (!folio_test_uptodate(folio))
 			break;
-		if (PageReadahead(head))
+		if (folio_test_readahead(folio))
 			break;
-		xas.xa_index = head->index + thp_nr_pages(head) - 1;
+		xas.xa_index = folio->index + folio_nr_pages(folio) - 1;
 		xas.xa_offset = (xas.xa_index >> xas.xa_shift) & XA_CHUNK_MASK;
 		continue;
put_page:
-		put_page(head);
+		folio_put(folio);
retry:
 		xas_reset(&xas);
 	}

From patchwork Thu Jul 15 03:36:39 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378901
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 113/138] mm/filemap: Convert find_get_pages_contig to folios
Date: Thu, 15 Jul 2021 04:36:39 +0100
Message-Id: <20210715033704.692967-114-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

None of the callers of find_get_pages_contig() want tail pages.  They
all use order-0 pages today, but if they were converted, they'd want
folios.  So just remove the call to find_subpage() instead of replacing
it with folio_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 04501bf50448..5a273d07eae6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2141,36 +2141,35 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
 		unsigned int nr_pages, struct page **pages)
 {
 	XA_STATE(xas, &mapping->i_pages, index);
-	struct page *page;
+	struct folio *folio;
 	unsigned int ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
-		if (xas_retry(&xas, page))
+	for (folio = xas_load(&xas); folio; folio = xas_next(&xas)) {
+		if (xas_retry(&xas, folio))
 			continue;
 		/*
 		 * If the entry has been swapped out, we can stop looking.
* No current caller is looking for DAX entries. */ - if (xa_is_value(page)) + if (xa_is_value(folio)) break; - if (!page_cache_get_speculative(page)) + if (!folio_try_get_rcu(folio)) goto retry; - /* Has the page moved or been split? */ - if (unlikely(page != xas_reload(&xas))) + if (unlikely(folio != xas_reload(&xas))) goto put_page; - pages[ret] = find_subpage(page, xas.xa_index); + pages[ret] = &folio->page; if (++ret == nr_pages) break; continue; put_page: - put_page(page); + folio_put(folio); retry: xas_reset(&xas); } From patchwork Thu Jul 15 03:36:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378903 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0763BC07E96 for ; Thu, 15 Jul 2021 05:11:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8AC9761360 for ; Thu, 15 Jul 2021 05:11:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8AC9761360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id DE98E8D0078; Thu, 15 Jul 2021 01:11:20 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DC0768D0065; Thu, 15 Jul 2021 01:11:20 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C888F8D0078; Thu, 15 Jul 2021 01:11:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0136.hostedemail.com [216.40.44.136]) by kanga.kvack.org (Postfix) with ESMTP id A7D228D0065 for ; Thu, 15 Jul 2021 01:11:20 -0400 (EDT) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id A086D14509 for ; Thu, 15 Jul 2021 05:11:19 +0000 (UTC) X-FDA: 78363648678.13.56F9B06 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf29.hostedemail.com (Postfix) with ESMTP id 4B82D9000253 for ; Thu, 15 Jul 2021 05:11:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=d3v7ENPKJDaWtjDqS2P9Smk/DURpXOWjWSUWHrOg3iA=; b=f6ycUXKhFnoFfq7Y74deRLZWHu 9dBoRIjg2Eq0RYBWV/q+4/Yu+jPmrLfuP4m+PRTmT3CciTrrrAOoqPcX4XCiKwQ2E7iz4Tmt5p9e2 /EbQ1yEE+5IhEP56vicLCUaAr95IX/LE4lmzDLtbPF3fIGEcdXG0ijdxkV6idcAJBZi2AAUwFsinw iFlwopLKh5pRK/a/kF5UDPrQcEdmGYUuzPiRnlrTx6/3kwTM+BFJ2qd0tU18RMezMgxyuqmMHgOVF pNKa2FT8fT0RAYLho3M7IgmINGk4E+o+e7CKTe6xtd9mKIGAbGf47BLhXXHUzLsuCxYvJizvHW+/b y/Vf9E3w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3td1-0030Aj-Bo; Thu, 15 Jul 2021 05:09:54 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew 
Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 114/138] mm/filemap: Convert filemap_read_page to take a folio Date: Thu, 15 Jul 2021 04:36:40 +0100 Message-Id: <20210715033704.692967-115-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=f6ycUXKh; spf=none (imf29.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: ggog3c8gmkhizn4g4nygea1tae6ra1yc X-Rspamd-Queue-Id: 4B82D9000253 X-Rspamd-Server: rspam01 X-HE-Tag: 1626325879-59037 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: One of the callers already had a folio; the other two grow by a few bytes, but filemap_read_page() shrinks by 50 bytes for a net reduction of 27 bytes. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 5a273d07eae6..5e2a2db1c715 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2302,8 +2302,8 @@ static void filemap_get_read_batch(struct address_space *mapping, rcu_read_unlock(); } -static int filemap_read_page(struct file *file, struct address_space *mapping, - struct page *page) +static int filemap_read_folio(struct file *file, struct address_space *mapping, + struct folio *folio) { int error; @@ -2312,16 +2312,16 @@ static int filemap_read_page(struct file *file, struct address_space *mapping, * eg. multipath errors. PG_error will be set again if readpage * fails. */ - ClearPageError(page); + folio_clear_error(folio); /* Start the actual read. The read will unlock the page. */ - error = mapping->a_ops->readpage(file, page); + error = mapping->a_ops->readpage(file, &folio->page); if (error) return error; - error = wait_on_page_locked_killable(page); + error = folio_wait_locked_killable(folio); if (error) return error; - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) return 0; shrink_readahead_size_eio(&file->f_ra); return -EIO; @@ -2383,7 +2383,7 @@ static int filemap_update_page(struct kiocb *iocb, if (iocb->ki_flags & (IOCB_NOIO | IOCB_NOWAIT | IOCB_WAITQ)) goto unlock; - error = filemap_read_page(iocb->ki_filp, mapping, &folio->page); + error = filemap_read_folio(iocb->ki_filp, mapping, folio); if (error == AOP_TRUNCATED_PAGE) folio_put(folio); return error; @@ -2414,7 +2414,7 @@ static int filemap_create_page(struct file *file, if (error) goto error; - error = filemap_read_page(file, mapping, page); + error = filemap_read_folio(file, mapping, page_folio(page)); if (error) goto error; @@ -3043,7 +3043,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) * and we need to check for errors. 
*/ fpin = maybe_unlock_mmap_for_io(vmf, fpin); - error = filemap_read_page(file, mapping, page); + error = filemap_read_folio(file, mapping, page_folio(page)); if (fpin) goto out_retry; put_page(page); From patchwork Thu Jul 15 03:36:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378905 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7107FC07E96 for ; Thu, 15 Jul 2021 05:12:11 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 010B061278 for ; Thu, 15 Jul 2021 05:12:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 010B061278 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5AAEE8D0079; Thu, 15 Jul 2021 01:12:11 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 55AAD8D0065; Thu, 15 Jul 2021 01:12:11 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 422E48D0079; Thu, 15 Jul 2021 01:12:11 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0161.hostedemail.com [216.40.44.161]) by kanga.kvack.org (Postfix) with ESMTP id 2077B8D0065 for ; Thu, 15 Jul 2021 01:12:11 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 092DB2311F for ; Thu, 15 Jul 2021 05:12:10 +0000 (UTC) X-FDA: 78363650820.05.C5B6752 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf16.hostedemail.com (Postfix) with ESMTP id B9762F000095 for ; Thu, 15 Jul 2021 05:12:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=9RHYJbjPvnqicnRJcqR3syd4A7W3DXyDdvohNQa07JI=; b=Zj8UrZa7kSb9ZwkzDckQztJeFe R4tDSJ8RkEAG1IPNgOo8dEsqucfYgosHHX6oGjHL2aHTMwidm6W6Habvk1966Gz3QJXSE+lNfFKDg wq7lMx/m8maoTG3TzNBAbEf1ooCAxFRk6BooKtLto9oqR/ppDlR8JRnPylYlwzV9xZQyHQzLpWoPp i/v87a/rqNdTD/TDp1QsJsXztDv88/I0NnpnKgX8wCNm77Xn0VQx3bsQ30LI6ghO53pNoGzUpmfRk +2RTV7VdZbJ6hx6ggCR4PT6RpXWff9FfYis4+Jr/2jXsePOfjkUtzSrrpx0VtUPFSScpHHZ4iPbtL Wdw2zBzg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tdc-0030Dm-Nl; Thu, 15 Jul 2021 05:10:31 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 115/138] mm/filemap: Convert filemap_create_page to folio Date: Thu, 15 Jul 2021 04:36:41 +0100 Message-Id: <20210715033704.692967-116-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: 
<20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Zj8UrZa7; spf=none (imf16.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: ktkwap1ctwrxzxjqfkebmi49cam8xqfc X-Rspamd-Queue-Id: B9762F000095 X-HE-Tag: 1626325929-621672 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is all internal to filemap and saves 100 bytes of text. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 5e2a2db1c715..7eda9afb0600 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2396,32 +2396,32 @@ static int filemap_update_page(struct kiocb *iocb, return error; } -static int filemap_create_page(struct file *file, +static int filemap_create_folio(struct file *file, struct address_space *mapping, pgoff_t index, struct pagevec *pvec) { - struct page *page; + struct folio *folio; int error; - page = page_cache_alloc(mapping); - if (!page) + folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0); + if (!folio) return -ENOMEM; - error = add_to_page_cache_lru(page, mapping, index, + error = filemap_add_folio(mapping, folio, index, mapping_gfp_constraint(mapping, GFP_KERNEL)); if (error == -EEXIST) error = AOP_TRUNCATED_PAGE; if (error) goto error; - error = filemap_read_folio(file, mapping, page_folio(page)); + error = filemap_read_folio(file, mapping, folio); if (error) goto error; - pagevec_add(pvec, page); + pagevec_add(pvec, &folio->page); return 0; error: - put_page(page); + folio_put(folio); return error; } @@ -2463,7 +2463,7 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter, if (!pagevec_count(pvec)) { if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ)) return -EAGAIN; - err = filemap_create_page(filp, mapping, + err = filemap_create_folio(filp, mapping, iocb->ki_pos >> PAGE_SHIFT, pvec); if (err == AOP_TRUNCATED_PAGE) goto retry; From patchwork Thu Jul 15 03:36:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378907 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8ABD2C47E48 for ; Thu, 15 Jul 2021 05:12:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3F0CA61362 for ; Thu, 15 Jul 2021 05:12:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3F0CA61362 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 97C418D007A; Thu, 15 Jul 2021 
01:12:37 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9520C8D0065; Thu, 15 Jul 2021 01:12:37 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 81ABB8D007A; Thu, 15 Jul 2021 01:12:37 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0023.hostedemail.com [216.40.44.23]) by kanga.kvack.org (Postfix) with ESMTP id 604818D0065 for ; Thu, 15 Jul 2021 01:12:37 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 56CC11F229 for ; Thu, 15 Jul 2021 05:12:36 +0000 (UTC) X-FDA: 78363651912.05.4C0AF23 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf22.hostedemail.com (Postfix) with ESMTP id 186F41910 for ; Thu, 15 Jul 2021 05:12:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=BzlU35lU3qSym1scpXVdTvYuYkJXSOHQJYSVBLZky0w=; b=PLI9hB/9pIA7VPcXi/xFVWEV67 x2V+RSGP4ZERq8bvKO1ZlPVgTfPgL4/BqiVqzOUzr0g771DkEm/JdrKwoNGrG3r++gEt7BFlcwbvB qO09ISougSNBNxkUPncXDvnzsM2HZjJ0JgsveRcXsEQ6bgjDgPWLl+wtv1/3qR0Yge35Zx5v7mQ8T IuEF984wxw7itozvaqo6ZNkT7L01WtKErO1zqY2dJOLtHnQnHkqksqEDaXULp9ypfFu2Nw6yRr9fZ HUOurRK0NMxJio+7+CRZQ1DijYwgx+s0ovE5ozkgvRt0z3RbbKp87C84XWAqHQNDzhFd3KjnW3crx Tf5sDHtg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tef-0030Gi-7R; Thu, 15 Jul 2021 05:11:34 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 116/138] mm/filemap: Convert filemap_range_uptodate to folios Date: Thu, 15 Jul 2021 04:36:42 +0100 Message-Id: <20210715033704.692967-117-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 186F41910 X-Stat-Signature: 1i3coe9q5iiwfm34f54b3r748zkmznj6 Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="PLI9hB/9"; dmarc=none; spf=none (imf22.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-HE-Tag: 1626325955-42749 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The only caller was already passing a head page, so this simply avoids a call to compound_head(). 
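To make the pos/count clamping concrete, a small userspace model of the arithmetic follows. It is an illustrative sketch only, not kernel code: struct folio_model and range_uptodate() are invented names, and the terminal ->is_partially_uptodate() call is stubbed out with a printf.

#include <stdbool.h>
#include <stdio.h>

/* Toy model: a folio occupying bytes [pos, pos + size) of the file. */
struct folio_model {
	long long pos;
	long long size;
};

/*
 * Mirrors the adjustment in filemap_range_uptodate(): clamp the
 * requested byte range [pos, pos + count) to this folio and express
 * it as an (offset-within-folio, length) pair.
 */
static bool range_uptodate(const struct folio_model *folio,
			   long long pos, long long count)
{
	if (folio->pos > pos) {
		/* The read starts before this folio: drop the gap. */
		count -= folio->pos - pos;
		pos = 0;
	} else {
		/* The read starts inside this folio. */
		pos -= folio->pos;
	}
	/* Stand-in for ->is_partially_uptodate(&folio->page, pos, count). */
	printf("check offset %lld, length %lld\n", pos, count);
	return true;
}

int main(void)
{
	/* An order-2 folio (16KiB) at file offset 16KiB. */
	struct folio_model folio = { .pos = 16384, .size = 16384 };

	/* 12KiB read starting 4KiB before the folio: offset 0, length 8192. */
	range_uptodate(&folio, 12288, 12288);
	/* 4KiB read starting 4KiB into the folio: offset 4096, length 4096. */
	range_uptodate(&folio, 20480, 4096);
	return 0;
}

Both calls print the (offset, length) pair that the real function would hand to ->is_partially_uptodate(), which is why the direction of the folio_pos() comparison matters.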
Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 7eda9afb0600..078c318e2f16 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2328,29 +2328,29 @@ static int filemap_read_folio(struct file *file, struct address_space *mapping, } static bool filemap_range_uptodate(struct address_space *mapping, - loff_t pos, struct iov_iter *iter, struct page *page) + loff_t pos, struct iov_iter *iter, struct folio *folio) { int count; - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) return true; /* pipes can't handle partially uptodate pages */ if (iov_iter_is_pipe(iter)) return false; if (!mapping->a_ops->is_partially_uptodate) return false; - if (mapping->host->i_blkbits >= (PAGE_SHIFT + thp_order(page))) + if (mapping->host->i_blkbits >= (folio_shift(folio))) return false; count = iter->count; - if (page_offset(page) > pos) { - count -= page_offset(page) - pos; + if (folio_pos(folio) > pos) { + count -= folio_pos(folio) - pos; pos = 0; } else { - pos -= page_offset(page); + pos -= folio_pos(folio); } - return mapping->a_ops->is_partially_uptodate(page, pos, count); + return mapping->a_ops->is_partially_uptodate(&folio->page, pos, count); } static int filemap_update_page(struct kiocb *iocb, @@ -2376,7 +2376,7 @@ static int filemap_update_page(struct kiocb *iocb, goto truncated; error = 0; - if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, &folio->page)) + if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, folio)) goto unlock; error = -EAGAIN; From patchwork Thu Jul 15 03:36:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378923 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA708C07E96 for ; Thu, 15 Jul 2021 05:13:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 63E8261278 for ; Thu, 15 Jul 2021 05:13:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 63E8261278 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BC3E58D007B; Thu, 15 Jul 2021 01:13:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B73428D0065; Thu, 15 Jul 2021 01:13:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A14568D007B; Thu, 15 Jul 2021 01:13:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0149.hostedemail.com [216.40.44.149]) by kanga.kvack.org (Postfix) with ESMTP id 72E538D0065 for ; Thu, 15 Jul 2021 01:13:32 -0400 (EDT) Received: from smtpin39.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 6A77322882 for ; Thu, 15 Jul 2021 05:13:31 +0000 (UTC) X-FDA: 
78363654222.39.41FB73B Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf15.hostedemail.com (Postfix) with ESMTP id EA248D0000BD for ; Thu, 15 Jul 2021 05:13:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ubkBUcZkQq7PEz+ieCUNeeq4MwyrT33JYJHMPgq6Lhk=; b=p8ZEbAMZsDVunWkl1areZpIQfW 61kna7QKhWLs4vqHciKHU/MJViW2/ZeWw/M6hg9pH8ewB3y7e1R1tzl1Cw0WXWMIRIVzTVfeK7vTp sZZeeyDOyoOUD1X3GQqjTPm/d6e1T4+ZdrWC+AI5kR8a1tTi1uS1L0WAfLiZRwKyCPsYlFVZaS5wj zAxf7Pw78n5NymBncm4vKDIK/rJbphyJXMegMTI86/87AAH5vRdnVQQLYzIGg+/K6Q5ArQSfO5oXO i8XbmpqehAEG0BHi0VFLg4qwlDbgch2Dc0GRohH3ds0NJqVlzjUhrARu/lrcxAp2eK0geb2qTNkrf oXQppnhw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tfR-0030IM-AL; Thu, 15 Jul 2021 05:12:22 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 117/138] mm/filemap: Convert filemap_fault to folio Date: Thu, 15 Jul 2021 04:36:43 +0100 Message-Id: <20210715033704.692967-118-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: EA248D0000BD X-Stat-Signature: 4hry63gx83d4ohowocpdtdds6yu3tsso Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=p8ZEbAMZ; spf=none (imf15.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626326010-775317 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Instead of converting back-and-forth between the actual page and the head page, just convert once at the end of the function where we set the vmf->page. Saves 241 bytes of text, or 15% of the size of filemap_fault(). Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 78 +++++++++++++++++++++++++--------------------------- 1 file changed, 38 insertions(+), 40 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 078c318e2f16..b0fe3234a20b 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2801,21 +2801,20 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start, #ifdef CONFIG_MMU #define MMAP_LOTSAMISS (100) /* - * lock_page_maybe_drop_mmap - lock the page, possibly dropping the mmap_lock + * lock_folio_maybe_drop_mmap - lock the page, possibly dropping the mmap_lock * @vmf - the vm_fault for this fault. - * @page - the page to lock. + * @folio - the folio to lock. * @fpin - the pointer to the file we may pin (or is already pinned). * - * This works similar to lock_page_or_retry in that it can drop the mmap_lock. - * It differs in that it actually returns the page locked if it returns 1 and 0 - * if it couldn't lock the page. If we did have to drop the mmap_lock then fpin - * will point to the pinned file and needs to be fput()'ed at a later point. + * This works similar to lock_folio_or_retry in that it can drop the + * mmap_lock. 
It differs in that it actually returns the folio locked + * if it returns 1 and 0 if it couldn't lock the folio. If we did have + * to drop the mmap_lock then fpin will point to the pinned file and + * needs to be fput()'ed at a later point. */ -static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, +static int lock_folio_maybe_drop_mmap(struct vm_fault *vmf, struct folio *folio, struct file **fpin) { - struct folio *folio = page_folio(page); - if (folio_trylock(folio)) return 1; @@ -2904,7 +2903,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf) * was pinned if we have to drop the mmap_lock in order to do IO. */ static struct file *do_async_mmap_readahead(struct vm_fault *vmf, - struct page *page) + struct folio *folio) { struct file *file = vmf->vma->vm_file; struct file_ra_state *ra = &file->f_ra; @@ -2919,10 +2918,10 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf, mmap_miss = READ_ONCE(ra->mmap_miss); if (mmap_miss) WRITE_ONCE(ra->mmap_miss, --mmap_miss); - if (PageReadahead(page)) { + if (folio_test_readahead(folio)) { fpin = maybe_unlock_mmap_for_io(vmf, fpin); page_cache_async_readahead(mapping, ra, file, - page, offset, ra->ra_pages); + &folio->page, offset, ra->ra_pages); } return fpin; } @@ -2941,7 +2940,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf, * vma->vm_mm->mmap_lock must be held on entry. * * If our return value has VM_FAULT_RETRY set, it's because the mmap_lock - * may be dropped before doing I/O or by lock_page_maybe_drop_mmap(). + * may be dropped before doing I/O or by lock_folio_maybe_drop_mmap(). * * If our return value does not have VM_FAULT_RETRY set, the mmap_lock * has not been released. @@ -2957,58 +2956,57 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) struct file *fpin = NULL; struct address_space *mapping = file->f_mapping; struct inode *inode = mapping->host; - pgoff_t offset = vmf->pgoff; - pgoff_t max_off; - struct page *page; + pgoff_t max_idx, index = vmf->pgoff; + struct folio *folio; vm_fault_t ret = 0; - max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); - if (unlikely(offset >= max_off)) + max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); + if (unlikely(index >= max_idx)) return VM_FAULT_SIGBUS; /* * Do we have something in the page cache already? */ - page = find_get_page(mapping, offset); - if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) { + folio = filemap_get_folio(mapping, index); + if (likely(folio) && !(vmf->flags & FAULT_FLAG_TRIED)) { /* * We found the page, so try async readahead before * waiting for the lock. */ - fpin = do_async_mmap_readahead(vmf, page); - } else if (!page) { + fpin = do_async_mmap_readahead(vmf, folio); + } else if (!folio) { /* No page in the page cache at all */ count_vm_event(PGMAJFAULT); count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT); ret = VM_FAULT_MAJOR; fpin = do_sync_mmap_readahead(vmf); retry_find: - page = pagecache_get_page(mapping, offset, + folio = __filemap_get_folio(mapping, index, FGP_CREAT|FGP_FOR_MMAP, vmf->gfp_mask); - if (!page) { + if (!folio) { if (fpin) goto out_retry; return VM_FAULT_OOM; } } - if (!lock_page_maybe_drop_mmap(vmf, page, &fpin)) + if (!lock_folio_maybe_drop_mmap(vmf, folio, &fpin)) goto out_retry; /* Did it get truncated? 
*/ - if (unlikely(compound_head(page)->mapping != mapping)) { - unlock_page(page); - put_page(page); + if (unlikely(folio->mapping != mapping)) { + folio_unlock(folio); + folio_put(folio); goto retry_find; } - VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page); + VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio); /* * We have a locked page in the page cache, now we need to check * that it's up-to-date. If not, it is going to be due to an error. */ - if (unlikely(!PageUptodate(page))) + if (unlikely(!folio_test_uptodate(folio))) goto page_not_uptodate; /* @@ -3017,7 +3015,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) * redo the fault. */ if (fpin) { - unlock_page(page); + folio_unlock(folio); goto out_retry; } @@ -3025,14 +3023,14 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) * Found the page and have a reference on it. * We must recheck i_size under page lock. */ - max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); - if (unlikely(offset >= max_off)) { - unlock_page(page); - put_page(page); + max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); + if (unlikely(index >= max_idx)) { + folio_unlock(folio); + folio_put(folio); return VM_FAULT_SIGBUS; } - vmf->page = page; + vmf->page = folio_file_page(folio, index); return ret | VM_FAULT_LOCKED; page_not_uptodate: @@ -3043,10 +3041,10 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) * and we need to check for errors. */ fpin = maybe_unlock_mmap_for_io(vmf, fpin); - error = filemap_read_folio(file, mapping, page_folio(page)); + error = filemap_read_folio(file, mapping, folio); if (fpin) goto out_retry; - put_page(page); + folio_put(folio); if (!error || error == AOP_TRUNCATED_PAGE) goto retry_find; @@ -3059,8 +3057,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) * re-find the vma and come back and find our hopefully still populated * page. 
*/ - if (page) - put_page(page); + if (folio) + folio_put(folio); if (fpin) fput(fpin); return ret | VM_FAULT_RETRY; From patchwork Thu Jul 15 03:36:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E56BEC1B08C for ; Thu, 15 Jul 2021 05:13:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 964B361360 for ; Thu, 15 Jul 2021 05:13:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 964B361360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id F1EF18D007C; Thu, 15 Jul 2021 01:13:37 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EA7958D0065; Thu, 15 Jul 2021 01:13:37 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D48628D007C; Thu, 15 Jul 2021 01:13:37 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0042.hostedemail.com [216.40.44.42]) by kanga.kvack.org (Postfix) with ESMTP id A8E968D0065 for ; Thu, 15 Jul 2021 01:13:37 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 91FEF231BF for ; Thu, 15 Jul 2021 05:13:36 +0000 (UTC) X-FDA: 78363654432.06.488F4E9 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id 5727990000AD for ; Thu, 15 Jul 2021 05:13:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=vcxKKKBsxiUw5aygBjrmG9tISrBw5VzufzZExjokD1s=; b=jWy7e8/ISWsQ1oDmgOKfqP8um6 QodRFKeVBs67meId8q+ukEeFe6rB6agsFA6frIw3LEL05TQuxEPj88dbWgfXCwRs+BtSVOsDVf9qt Wk8EznFuh7YIPENActZGNA7yiBdshRs9vqm15Fnp0J2ssJgF4U9at88Q1QfLVqOcehgoi1Y7Vy7/q mc8Rr4dp/zIq6358KXkMO2jby6pZLQynlNwgNFbJ1kYeMwumdCPs8iMkN6rF9/xC9O4aW9cewUJ/E T+JFBlhfGKhBBFyGcHohgyimchJ3p99NNSvoCivE568xe5Aa8zKm6Uqouus4YgURFkeeVwIuuXkNG lLh5cqXQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tfq-0030KF-96; Thu, 15 Jul 2021 05:12:44 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 118/138] mm/filemap: Add read_cache_folio and read_mapping_folio Date: Thu, 15 Jul 2021 04:36:44 +0100 Message-Id: <20210715033704.692967-119-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> 
MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="jWy7e8/I"; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Stat-Signature: wspb9w3mmeyns7m41zdjjpk4xd99dfu1 X-Rspamd-Queue-Id: 5727990000AD X-Rspamd-Server: rspam01 X-HE-Tag: 1626326016-729481 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Reimplement read_cache_page() as a wrapper around read_cache_folio(). Saves over 400 bytes of text from do_read_cache_folio(), which more than makes up for the extra 100 bytes of text added to the various wrapper functions. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 12 +++++- mm/filemap.c | 95 +++++++++++++++++++++-------------------- 2 files changed, 59 insertions(+), 48 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 245554ce6b12..842d130fd6d3 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -539,8 +539,10 @@ static inline struct page *grab_cache_page(struct address_space *mapping, return find_or_create_page(mapping, index, mapping_gfp_mask(mapping)); } -extern struct page * read_cache_page(struct address_space *mapping, - pgoff_t index, filler_t *filler, void *data); +struct folio *read_cache_folio(struct address_space *, pgoff_t index, + filler_t *filler, void *data); +struct page *read_cache_page(struct address_space *, pgoff_t index, + filler_t *filler, void *data); extern struct page * read_cache_page_gfp(struct address_space *mapping, pgoff_t index, gfp_t gfp_mask); extern int read_cache_pages(struct address_space *mapping, @@ -552,6 +554,12 @@ static inline struct page *read_mapping_page(struct address_space *mapping, return read_cache_page(mapping, index, NULL, data); } +static inline struct folio *read_mapping_folio(struct address_space *mapping, + pgoff_t index, void *data) +{ + return read_cache_folio(mapping, index, NULL, data); +} + /* * Get index of the page within radix-tree (but not for hugetlb pages).
* (TODO: remove once hugetlb pages will have ->index in PAGE_SIZE) diff --git a/mm/filemap.c b/mm/filemap.c index b0fe3234a20b..dd54a52f8e84 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3298,35 +3298,20 @@ EXPORT_SYMBOL(filemap_page_mkwrite); EXPORT_SYMBOL(generic_file_mmap); EXPORT_SYMBOL(generic_file_readonly_mmap); -static struct page *wait_on_page_read(struct page *page) +static struct folio *do_read_cache_folio(struct address_space *mapping, + pgoff_t index, filler_t filler, void *data, gfp_t gfp) { - if (!IS_ERR(page)) { - wait_on_page_locked(page); - if (!PageUptodate(page)) { - put_page(page); - page = ERR_PTR(-EIO); - } - } - return page; -} - -static struct page *do_read_cache_page(struct address_space *mapping, - pgoff_t index, - int (*filler)(void *, struct page *), - void *data, - gfp_t gfp) -{ - struct page *page; + struct folio *folio; int err; repeat: - page = find_get_page(mapping, index); - if (!page) { - page = __page_cache_alloc(gfp); - if (!page) + folio = filemap_get_folio(mapping, index); + if (!folio) { + folio = filemap_alloc_folio(gfp, 0); + if (!folio) return ERR_PTR(-ENOMEM); - err = add_to_page_cache_lru(page, mapping, index, gfp); + err = filemap_add_folio(mapping, folio, index, gfp); if (unlikely(err)) { - put_page(page); + folio_put(folio); if (err == -EEXIST) goto repeat; /* Presumably ENOMEM for xarray node */ @@ -3335,21 +3320,24 @@ static struct page *do_read_cache_page(struct address_space *mapping, filler: if (filler) - err = filler(data, page); + err = filler(data, &folio->page); else - err = mapping->a_ops->readpage(data, page); + err = mapping->a_ops->readpage(data, &folio->page); if (err < 0) { - put_page(page); + folio_put(folio); return ERR_PTR(err); } - page = wait_on_page_read(page); - if (IS_ERR(page)) - return page; + folio_wait_locked(folio); + if (!folio_test_uptodate(folio)) { + folio_put(folio); + return ERR_PTR(-EIO); + } + goto out; } - if (PageUptodate(page)) + if (folio_test_uptodate(folio)) goto out; /* @@ -3383,23 +3371,23 @@ static struct page *do_read_cache_page(struct address_space *mapping, * avoid spurious serialisations and wakeups when multiple processes * wait on the same page for IO to complete. */ - wait_on_page_locked(page); - if (PageUptodate(page)) + folio_wait_locked(folio); + if (folio_test_uptodate(folio)) goto out; /* Distinguish between all the cases under the safety of the lock */ - lock_page(page); + folio_lock(folio); /* Case c or d, restart the operation */ - if (!page->mapping) { - unlock_page(page); - put_page(page); + if (!folio->mapping) { + folio_unlock(folio); + folio_put(folio); goto repeat; } /* Someone else locked and filled the page in a very small window */ - if (PageUptodate(page)) { - unlock_page(page); + if (folio_test_uptodate(folio)) { + folio_unlock(folio); goto out; } @@ -3409,16 +3397,16 @@ static struct page *do_read_cache_page(struct address_space *mapping, * Clear page error before actual read, PG_error will be * set again if read page fails. */ - ClearPageError(page); + folio_clear_error(folio); goto filler; out: - mark_page_accessed(page); - return page; + folio_mark_accessed(folio); + return folio; } /** - * read_cache_page - read into page cache, fill it if needed + * read_cache_folio - read into page cache, fill it if needed * @mapping: the page's address_space * @index: the page index * @filler: function to perform the read @@ -3431,10 +3419,25 @@ static struct page *do_read_cache_page(struct address_space *mapping, * * Return: up to date page on success, ERR_PTR() on failure. 
*/ +struct folio *read_cache_folio(struct address_space *mapping, pgoff_t index, + filler_t filler, void *data) +{ + return do_read_cache_folio(mapping, index, filler, data, + mapping_gfp_mask(mapping)); +} +EXPORT_SYMBOL(read_cache_folio); + +static struct page *do_read_cache_page(struct address_space *mapping, + pgoff_t index, filler_t *filler, void *data, gfp_t gfp) +{ + struct folio *folio = read_cache_folio(mapping, index, filler, data); + if (IS_ERR(folio)) + return &folio->page; + return folio_file_page(folio, index); +} + struct page *read_cache_page(struct address_space *mapping, - pgoff_t index, - int (*filler)(void *, struct page *), - void *data) + pgoff_t index, filler_t *filler, void *data) { return do_read_cache_page(mapping, index, filler, data, mapping_gfp_mask(mapping)); From patchwork Thu Jul 15 03:36:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378927 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 110DDC07E96 for ; Thu, 15 Jul 2021 05:14:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7400E61278 for ; Thu, 15 Jul 2021 05:14:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7400E61278 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C85DE8D007D; Thu, 15 Jul 2021 01:14:44 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C5C438D0065; Thu, 15 Jul 2021 01:14:44 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B24E68D007D; Thu, 15 Jul 2021 01:14:44 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0193.hostedemail.com [216.40.44.193]) by kanga.kvack.org (Postfix) with ESMTP id 8F4238D0065 for ; Thu, 15 Jul 2021 01:14:44 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 7D9B782499B9 for ; Thu, 15 Jul 2021 05:14:43 +0000 (UTC) X-FDA: 78363657246.31.A55659E Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id 2FA0220019E2 for ; Thu, 15 Jul 2021 05:14:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=bAkONPxu0GMkDX/bq8jjUzCHYId16qCP3W7dw8YcdLk=; b=e3R2OC2DPgMDiQEr6d/hGEsj4W xmiFus6JiAL76LeVgmLxEGbIJ8CMWzjtNFvKQZs1yInm4h2vD0tyrwUltaBIpQwHiei+E+D5msEcH oIqaDis7iPvmF9QroaXE7NZyTzBBmoZgSKWReuAL9o8W8UmYqS8AgaJ5lNoxsgIUnQKuadAQdA0Nd N1a8LT0Yp83WqUB7633Z1Je89wq0a2AupN1e8OxiJNdUI7rRQOnH/wMCUwqRHXHPiSgUUeICJhZPB 
RVvg4YF49djVQsk2sWiQhmr4Sx+Fql6P+IhTAJlki5jcZdrZ9Ws1DCEiUsvvFN6kGw1d8qIlLa8xg oiX4SrFQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tgS-0030Ln-EX; Thu, 15 Jul 2021 05:13:29 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 119/138] mm/filemap: Convert filemap_get_pages to use folios Date: Thu, 15 Jul 2021 04:36:45 +0100 Message-Id: <20210715033704.692967-120-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=e3R2OC2D; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam02 X-Stat-Signature: is7s4nhkr3xhcpisd1iqmz8hdth8cdm1 X-Rspamd-Queue-Id: 2FA0220019E2 X-HE-Tag: 1626326083-13784 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This saves a few calls to compound_head(), including one in filemap_update_page(). Shrinks the kernel by 78 bytes. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index dd54a52f8e84..18b21d16c9de 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2355,9 +2355,8 @@ static bool filemap_range_uptodate(struct address_space *mapping, static int filemap_update_page(struct kiocb *iocb, struct address_space *mapping, struct iov_iter *iter, - struct page *page) + struct folio *folio) { - struct folio *folio = page_folio(page); int error; if (!folio_trylock(folio)) { @@ -2426,13 +2425,13 @@ static int filemap_create_folio(struct file *file, } static int filemap_readahead(struct kiocb *iocb, struct file *file, - struct address_space *mapping, struct page *page, + struct address_space *mapping, struct folio *folio, pgoff_t last_index) { if (iocb->ki_flags & IOCB_NOIO) return -EAGAIN; - page_cache_async_readahead(mapping, &file->f_ra, file, page, - page->index, last_index - page->index); + page_cache_async_readahead(mapping, &file->f_ra, file, &folio->page, + folio->index, last_index - folio->index); return 0; } @@ -2444,7 +2443,7 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter, struct file_ra_state *ra = &filp->f_ra; pgoff_t index = iocb->ki_pos >> PAGE_SHIFT; pgoff_t last_index; - struct page *page; + struct folio *folio; int err = 0; last_index = DIV_ROUND_UP(iocb->ki_pos + iter->count, PAGE_SIZE); @@ -2470,16 +2469,16 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter, return err; } - page = pvec->pages[pagevec_count(pvec) - 1]; - if (PageReadahead(page)) { - err = filemap_readahead(iocb, filp, mapping, page, last_index); + folio = page_folio(pvec->pages[pagevec_count(pvec) - 1]); + if (folio_test_readahead(folio)) { + err = filemap_readahead(iocb, filp, mapping, folio, last_index); if (err) goto err; } - if (!PageUptodate(page)) { + if (!folio_test_uptodate(folio)) { if ((iocb->ki_flags & IOCB_WAITQ) && pagevec_count(pvec) > 1) iocb->ki_flags |= IOCB_NOWAIT; - err = filemap_update_page(iocb, mapping, iter, page); + err = filemap_update_page(iocb, 
mapping, iter, folio); if (err) goto err; } @@ -2487,7 +2486,7 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter, return 0; err: if (err < 0) - put_page(page); + folio_put(folio); if (likely(--pvec->nr)) return 0; if (err == AOP_TRUNCATED_PAGE) From patchwork Thu Jul 15 03:36:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A306BC07E96 for ; Thu, 15 Jul 2021 05:15:22 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4EACD60241 for ; Thu, 15 Jul 2021 05:15:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4EACD60241 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A3B508D007E; Thu, 15 Jul 2021 01:15:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A11848D0065; Thu, 15 Jul 2021 01:15:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8B4A28D007E; Thu, 15 Jul 2021 01:15:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0231.hostedemail.com [216.40.44.231]) by kanga.kvack.org (Postfix) with ESMTP id 65F998D0065 for ; Thu, 15 Jul 2021 01:15:22 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 548CE231BB for ; Thu, 15 Jul 2021 05:15:21 +0000 (UTC) X-FDA: 78363658842.25.138CD10 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf20.hostedemail.com (Postfix) with ESMTP id 0E906D0000B2 for ; Thu, 15 Jul 2021 05:15:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=+QMiuiEpJ0BM6oIxbdBSrEZvo8z90xQHIFFLebxeb98=; b=VeoDl4cEmeagzxODdgRGgKDWqc 86K+UdNRdGmOgAGLmj2qeVpromVgHSQptZ20Jj24x0LJzlb83WDLEnewwLIUn3RtWuFlknumPdr2D dVRo2vOHaVoax7IYGTnZAgX4EZ87vROy3IY5dk6QPOYqMEr30WuuXq0O2pQ8aqbXBX4XsipfMwknn YhIDNBI0maKfZbTje0m3LvwCfqxRi6uey1TjTodOdHyaCJRZ94rGktv86h2RgkgB9PscmJVNREN1I k/ySskM4X1fcwPMc3vJqaBm/2RATFVz4xQioYPwHgM+qBrakycDrh8nf7+GP474MzsZU43N8hXpBI zPcpHjjA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3tgu-0030Mg-AB; Thu, 15 Jul 2021 05:14:03 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 120/138] mm/filemap: Convert page_cache_delete_batch to folios Date: Thu, 15 Jul 2021 04:36:46 +0100 Message-Id: <20210715033704.692967-121-willy@infradead.org> X-Mailer: 
git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=VeoDl4cE; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: jj7adsdu38y68tib1d8ysj4p6wrgozt7 X-Rspamd-Queue-Id: 0E906D0000B2 X-HE-Tag: 1626326120-476433 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Saves one call to compound_head() and reduces text size by 15 bytes. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 18b21d16c9de..8510f67dc749 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -287,15 +287,15 @@ static void page_cache_delete_batch(struct address_space *mapping, XA_STATE(xas, &mapping->i_pages, pvec->pages[0]->index); int total_pages = 0; int i = 0; - struct page *page; + struct folio *folio; mapping_set_update(&xas, mapping); - xas_for_each(&xas, page, ULONG_MAX) { + xas_for_each(&xas, folio, ULONG_MAX) { if (i >= pagevec_count(pvec)) break; /* A swap/dax/shadow entry got inserted? Skip it. */ - if (xa_is_value(page)) + if (xa_is_value(folio)) continue; /* * A page got inserted in our range? Skip it. We have our @@ -304,16 +304,16 @@ static void page_cache_delete_batch(struct address_space *mapping, * means our page has been removed, which shouldn't be * possible because we're holding the PageLock. */ - if (page != pvec->pages[i]) { - VM_BUG_ON_PAGE(page->index > pvec->pages[i]->index, - page); + if (&folio->page != pvec->pages[i]) { + VM_BUG_ON_FOLIO(folio->index > + pvec->pages[i]->index, folio); continue; } - WARN_ON_ONCE(!PageLocked(page)); + WARN_ON_ONCE(!folio_test_locked(folio)); - if (page->index == xas.xa_index) - page->mapping = NULL; + if (folio->index == xas.xa_index) + folio->mapping = NULL; /* Leave page->index set: truncation lookup relies on it */ /* @@ -321,7 +321,8 @@ static void page_cache_delete_batch(struct address_space *mapping, * page or the index is of the last sub-page of this compound * page. 
*/ - if (page->index + compound_nr(page) - 1 == xas.xa_index) + if (folio->index + folio_nr_pages(folio) - 1 == + xas.xa_index) i++; xas_store(&xas, NULL); total_pages++; From patchwork Thu Jul 15 03:36:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378931 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 90F56C47E48 for ; Thu, 15 Jul 2021 05:16:34 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1AF2E60241 for ; Thu, 15 Jul 2021 05:16:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1AF2E60241 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6DDFE8D007F; Thu, 15 Jul 2021 01:16:34 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6B4CF8D0065; Thu, 15 Jul 2021 01:16:34 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 57D608D007F; Thu, 15 Jul 2021 01:16:34 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0240.hostedemail.com [216.40.44.240]) by kanga.kvack.org (Postfix) with ESMTP id 32A5D8D0065 for ; Thu, 15 Jul 2021 01:16:34 -0400 (EDT) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 1E74F184B5927 for ; Thu, 15 Jul 2021 05:16:33 +0000 (UTC) X-FDA: 78363661866.18.2EF1DEE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf13.hostedemail.com (Postfix) with ESMTP id E1AC610072FC for ; Thu, 15 Jul 2021 05:16:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=E4o291CiEkirXhlNU9qhgU1unG2pTpkpRkPYhVT13Jw=; b=hMx7mTzBw8NvkBPVXiIoTms9Np Qcbuveu5Ui3/uJEgMl4C+IRrRd67cr8V6ditJNZYTAOqUujZvcxpXVXdVAdxePjvHr26aIw4zpRii IYN5LwhWPvr//u/j3SxRVl3QKX08bhslxEspnQay0FmtMHwzth3Nk/LQ8j21yrY5jooR0/mmE0UoG lFQvxptbMLAI/wlzy6LZ/zrzVueYVz862OpRj0e7Iz1CRfBIoJKr+ciR4SYWE4oRcDb7Jk9Pj4jgP zSAKaQhC3SVtizLdl9vWsDwRyvrHR3f0kYlvI/mrxrzhGE6RBK9BTtsVPIM8ToPFD6faCzIVGpBKA KQ34HDkw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3thq-0030QK-2N; Thu, 15 Jul 2021 05:14:51 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 121/138] mm/filemap: Remove PageHWPoison check from next_uptodate_page() Date: Thu, 15 Jul 2021 04:36:47 +0100 Message-Id: <20210715033704.692967-122-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> 
References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: E1AC610072FC X-Stat-Signature: o5tz6fd34zqar1s7bmmxxaus7wfwonfe Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=hMx7mTzB; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626326192-871853 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pages are individually marked as suffering from hardware poisoning. Checking that the head page is not hardware poisoned doesn't make sense; we might be after a subpage. We check each page individually before we use it, so this was an optimisation gone wrong. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 8510f67dc749..717b0d262306 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3127,8 +3127,6 @@ static struct page *next_uptodate_page(struct page *page, goto skip; if (!PageUptodate(page) || PageReadahead(page)) goto skip; - if (PageHWPoison(page)) - goto skip; if (!trylock_page(page)) goto skip; if (page->mapping != mapping) From patchwork Thu Jul 15 03:36:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378933 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CCB3AC07E96 for ; Thu, 15 Jul 2021 05:16:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7BE6F60241 for ; Thu, 15 Jul 2021 05:16:57 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7BE6F60241 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D45F68D0080; Thu, 15 Jul 2021 01:16:57 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CF5BE8D0065; Thu, 15 Jul 2021 01:16:57 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BBDD88D0080; Thu, 15 Jul 2021 01:16:57 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0167.hostedemail.com [216.40.44.167]) by kanga.kvack.org (Postfix) with ESMTP id 989718D0065 for ; Thu, 15 Jul 2021 01:16:57 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 909B51AEE0 for ; Thu, 15 Jul 2021 05:16:56 +0000 (UTC) X-FDA: 78363662832.09.CBF61A9 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf10.hostedemail.com (Postfix) with ESMTP id 4AD186001A80 for ; Thu, 15 Jul 2021 05:16:56 +0000 (UTC) DKIM-Signature: v=1; 
From patchwork Thu Jul 15 03:36:48 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 122/138] mm/filemap: Use folios in next_uptodate_page
Date: Thu, 15 Jul 2021 04:36:48 +0100
Message-Id: <20210715033704.692967-123-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This saves 105 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 717b0d262306..545323a77c1c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3105,43 +3105,43 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
 	return false;
 }
 
-static struct page *next_uptodate_page(struct page *page,
+static struct page *next_uptodate_page(struct folio *folio,
 		struct address_space *mapping,
 		struct xa_state *xas, pgoff_t end_pgoff)
 {
 	unsigned long max_idx;
 
 	do {
-		if (!page)
+		if (!folio)
 			return NULL;
-		if (xas_retry(xas, page))
+		if (xas_retry(xas, folio))
 			continue;
-		if (xa_is_value(page))
+		if (xa_is_value(folio))
 			continue;
-		if (PageLocked(page))
+		if (folio_test_locked(folio))
 			continue;
-		if (!page_cache_get_speculative(page))
+		if (!folio_try_get_rcu(folio))
 			continue;
 		/* Has the page moved or been split?
		 */
-		if (unlikely(page != xas_reload(xas)))
+		if (unlikely(folio != xas_reload(xas)))
 			goto skip;
-		if (!PageUptodate(page) || PageReadahead(page))
+		if (!folio_test_uptodate(folio) || folio_test_readahead(folio))
 			goto skip;
-		if (!trylock_page(page))
+		if (!folio_trylock(folio))
 			goto skip;
-		if (page->mapping != mapping)
+		if (folio->mapping != mapping)
 			goto unlock;
-		if (!PageUptodate(page))
+		if (!folio_test_uptodate(folio))
 			goto unlock;
 		max_idx = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
 		if (xas->xa_index >= max_idx)
 			goto unlock;
-		return page;
+		return &folio->page;
 unlock:
-		unlock_page(page);
+		folio_unlock(folio);
 skip:
-		put_page(page);
-	} while ((page = xas_next_entry(xas, end_pgoff)) != NULL);
+		folio_put(folio);
+	} while ((folio = xas_next_entry(xas, end_pgoff)) != NULL);
 
 	return NULL;
 }
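The try-get-then-recheck sequence above is the standard RCU page-cache
lookup idiom: take a speculative reference, then confirm the folio still
occupies the slot.  Condensed to its core (same names as the patch,
surrounding error handling elided):

	if (!folio_try_get_rcu(folio))		/* refcount may already be 0 */
		continue;			/* folio is being freed */
	if (unlikely(folio != xas_reload(xas)))	/* raced with truncate? */
		goto skip;			/* drop the ref we took */
	/* the reference is now stable; folio state can be inspected */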
From patchwork Thu Jul 15 03:36:49 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 123/138] mm/filemap: Use a folio in filemap_map_pages
Date: Thu, 15 Jul 2021 04:36:49 +0100
Message-Id: <20210715033704.692967-124-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Saves 61 bytes due to fewer calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 545323a77c1c..1c0c2663c57d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3105,7 +3105,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
 	return false;
 }
 
-static struct page *next_uptodate_page(struct folio *folio,
+static struct folio *next_uptodate_page(struct folio *folio,
 		struct address_space *mapping,
 		struct xa_state *xas, pgoff_t end_pgoff)
 {
@@ -3136,7 +3136,7 @@ static struct page *next_uptodate_page(struct folio *folio,
 		max_idx = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
 		if (xas->xa_index >= max_idx)
 			goto unlock;
-		return &folio->page;
+		return folio;
 unlock:
 		folio_unlock(folio);
 skip:
@@ -3146,7 +3146,7 @@ static struct page *next_uptodate_page(struct folio *folio,
 	return NULL;
 }
 
-static inline struct page *first_map_page(struct address_space *mapping,
+static inline struct folio *first_map_page(struct address_space *mapping,
 					  struct xa_state *xas,
 					  pgoff_t end_pgoff)
 {
@@ -3154,7 +3154,7 @@ static inline struct page *first_map_page(struct address_space *mapping,
 					mapping, xas, end_pgoff);
 }
 
-static inline struct page *next_map_page(struct address_space *mapping,
+static inline struct folio *next_map_page(struct address_space *mapping,
 					 struct xa_state *xas,
 					 pgoff_t end_pgoff)
 {
@@ -3171,16 +3171,17 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	pgoff_t last_pgoff = start_pgoff;
 	unsigned long addr;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
-	struct page *head, *page;
+	struct folio *folio;
+	struct page *page;
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
 	vm_fault_t ret = 0;
 
 	rcu_read_lock();
-	head = first_map_page(mapping, &xas, end_pgoff);
-	if (!head)
+	folio = first_map_page(mapping, &xas, end_pgoff);
+	if (!folio)
 		goto out;
 
-	if (filemap_map_pmd(vmf, head)) {
+	if (filemap_map_pmd(vmf, &folio->page)) {
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
 
@@ -3188,7 +3189,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	addr = vma->vm_start + ((start_pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	do {
-		page = find_subpage(head, xas.xa_index);
+		page = folio_file_page(folio, xas.xa_index);
 		if (PageHWPoison(page))
 			goto unlock;
 
@@ -3209,12 +3210,12 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		do_set_pte(vmf, page, addr);
 
 		/* no need to invalidate: a not-present page won't be cached */
 		update_mmu_cache(vma, addr, vmf->pte);
-		unlock_page(head);
+		folio_unlock(folio);
 		continue;
 unlock:
-		unlock_page(head);
-		put_page(head);
-	} while ((head = next_map_page(mapping, &xas, end_pgoff)) != NULL);
+		folio_unlock(folio);
+		folio_put(folio);
+	} while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
 	rcu_read_unlock();
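The loop keeps the folio for locking and refcounting while deriving the
precise page for each PTE.  Roughly, folio_file_page() resolves a file
index to a subpage like this (a sketch with a hypothetical name; it
ignores the swap-cache case the real helper also handles):

	static inline struct page *folio_file_page_sketch(struct folio *folio,
			pgoff_t index)
	{
		/* mask the file index down to an offset within the folio */
		return folio_page(folio, index & (folio_nr_pages(folio) - 1));
	}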
From patchwork Thu Jul 15 03:36:50 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 124/138] fs: Convert vfs_dedupe_file_range_compare to
 folios
Date: Thu, 15 Jul 2021 04:36:50 +0100
Message-Id: <20210715033704.692967-125-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

We still only operate on a single page of data at a time due to using
kmap().  A more complex implementation would work on each page in a
folio, but it's not clear that such a complex implementation would be
worthwhile.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/remap_range.c | 116 ++++++++++++++++++++++-------------------------
 1 file changed, 55 insertions(+), 61 deletions(-)

diff --git a/fs/remap_range.c b/fs/remap_range.c
index e4a5fdd7ad7b..886e6ed2c6c2 100644
--- a/fs/remap_range.c
+++ b/fs/remap_range.c
@@ -158,41 +158,41 @@ static int generic_remap_check_len(struct inode *inode_in,
 }
 
 /* Read a page's worth of file data into the page cache. */
-static struct page *vfs_dedupe_get_page(struct inode *inode, loff_t offset)
+static struct folio *vfs_dedupe_get_folio(struct inode *inode, loff_t pos)
 {
-	struct page *page;
+	struct folio *folio;
 
-	page = read_mapping_page(inode->i_mapping, offset >> PAGE_SHIFT, NULL);
-	if (IS_ERR(page))
-		return page;
-	if (!PageUptodate(page)) {
-		put_page(page);
+	folio = read_mapping_folio(inode->i_mapping, pos >> PAGE_SHIFT, NULL);
+	if (IS_ERR(folio))
+		return folio;
+	if (!folio_test_uptodate(folio)) {
+		folio_put(folio);
 		return ERR_PTR(-EIO);
 	}
-	return page;
+	return folio;
 }
 
 /*
- * Lock two pages, ensuring that we lock in offset order if the pages are from
- * the same file.
+ * Lock two folios, ensuring that we lock in offset order if the folios
+ * are from the same file.
  */
-static void vfs_lock_two_pages(struct page *page1, struct page *page2)
+static void vfs_lock_two_folios(struct folio *folio1, struct folio *folio2)
 {
 	/* Always lock in order of increasing index. */
-	if (page1->index > page2->index)
-		swap(page1, page2);
+	if (folio1->index > folio2->index)
+		swap(folio1, folio2);
 
-	lock_page(page1);
-	if (page1 != page2)
-		lock_page(page2);
+	folio_lock(folio1);
+	if (folio1 != folio2)
+		folio_lock(folio2);
 }
-/* Unlock two pages, being careful not to unlock the same page twice. */
-static void vfs_unlock_two_pages(struct page *page1, struct page *page2)
+/* Unlock two folios, being careful not to unlock the same folio twice. */
+static void vfs_unlock_two_folios(struct folio *folio1, struct folio *folio2)
 {
-	unlock_page(page1);
-	if (page1 != page2)
-		unlock_page(page2);
+	folio_unlock(folio1);
+	if (folio1 != folio2)
+		folio_unlock(folio2);
 }
 
 /*
@@ -200,77 +200,71 @@ static void vfs_unlock_two_pages(struct page *page1, struct page *page2)
  * Caller must have locked both inodes to prevent write races.
  */
 static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
-					 struct inode *dest, loff_t destoff,
+					 struct inode *dest, loff_t dstoff,
 					 loff_t len, bool *is_same)
 {
-	loff_t src_poff;
-	loff_t dest_poff;
-	void *src_addr;
-	void *dest_addr;
-	struct page *src_page;
-	struct page *dest_page;
-	loff_t cmp_len;
-	bool same;
-	int error;
-
-	error = -EINVAL;
-	same = true;
+	bool same = true;
+	int error = -EINVAL;
+
 	while (len) {
-		src_poff = srcoff & (PAGE_SIZE - 1);
-		dest_poff = destoff & (PAGE_SIZE - 1);
-		cmp_len = min(PAGE_SIZE - src_poff,
-			      PAGE_SIZE - dest_poff);
+		struct folio *src_folio, *dst_folio;
+		void *src_addr, *dst_addr;
+		loff_t cmp_len = min(PAGE_SIZE - offset_in_page(srcoff),
+				     PAGE_SIZE - offset_in_page(dstoff));
+
+		cmp_len = min(cmp_len, len);
 		if (cmp_len <= 0)
 			goto out_error;
 
-		src_page = vfs_dedupe_get_page(src, srcoff);
-		if (IS_ERR(src_page)) {
-			error = PTR_ERR(src_page);
+		src_folio = vfs_dedupe_get_folio(src, srcoff);
+		if (IS_ERR(src_folio)) {
+			error = PTR_ERR(src_folio);
 			goto out_error;
 		}
-		dest_page = vfs_dedupe_get_page(dest, destoff);
-		if (IS_ERR(dest_page)) {
-			error = PTR_ERR(dest_page);
-			put_page(src_page);
+		dst_folio = vfs_dedupe_get_folio(dest, dstoff);
+		if (IS_ERR(dst_folio)) {
+			error = PTR_ERR(dst_folio);
+			folio_put(src_folio);
 			goto out_error;
 		}
 
-		vfs_lock_two_pages(src_page, dest_page);
+		vfs_lock_two_folios(src_folio, dst_folio);
 
 		/*
-		 * Now that we've locked both pages, make sure they're still
+		 * Now that we've locked both folios, make sure they're still
 		 * mapped to the file data we're interested in.  If not,
 		 * someone is invalidating pages on us and we lose.
		 */
-		if (!PageUptodate(src_page) || !PageUptodate(dest_page) ||
-		    src_page->mapping != src->i_mapping ||
-		    dest_page->mapping != dest->i_mapping) {
+		if (!folio_test_uptodate(src_folio) || !folio_test_uptodate(dst_folio) ||
+		    src_folio->mapping != src->i_mapping ||
+		    dst_folio->mapping != dest->i_mapping) {
 			same = false;
 			goto unlock;
 		}
 
-		src_addr = kmap_atomic(src_page);
-		dest_addr = kmap_atomic(dest_page);
+		src_addr = kmap_local_folio(src_folio,
+					offset_in_folio(src_folio, srcoff));
+		dst_addr = kmap_local_folio(dst_folio,
+					offset_in_folio(dst_folio, dstoff));
 
-		flush_dcache_page(src_page);
-		flush_dcache_page(dest_page);
+		flush_dcache_folio(src_folio);
+		flush_dcache_folio(dst_folio);
 
-		if (memcmp(src_addr + src_poff, dest_addr + dest_poff, cmp_len))
+		if (memcmp(src_addr, dst_addr, cmp_len))
 			same = false;
 
-		kunmap_atomic(dest_addr);
-		kunmap_atomic(src_addr);
+		kunmap_local(dst_addr);
+		kunmap_local(src_addr);
 
 unlock:
-		vfs_unlock_two_pages(src_page, dest_page);
-		put_page(dest_page);
-		put_page(src_page);
+		vfs_unlock_two_folios(src_folio, dst_folio);
+		folio_put(dst_folio);
+		folio_put(src_folio);
 
 		if (!same)
 			break;
 
 		srcoff += cmp_len;
-		destoff += cmp_len;
+		dstoff += cmp_len;
 		len -= cmp_len;
 	}
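kmap_local_folio() replaces kmap_atomic() here: the mapping stays
CPU-local without disabling preemption, and nested mappings must be torn
down in reverse order of creation.  The core of the comparison reduces
to the following sketch (cmp_len already clamped to page boundaries, so
the memcmp never crosses the single page each call maps):

	src_addr = kmap_local_folio(src_folio,
				    offset_in_folio(src_folio, srcoff));
	dst_addr = kmap_local_folio(dst_folio,
				    offset_in_folio(dst_folio, dstoff));
	same = memcmp(src_addr, dst_addr, cmp_len) == 0;
	kunmap_local(dst_addr);	/* reverse order: last mapped, first unmapped */
	kunmap_local(src_addr);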
From patchwork Thu Jul 15 03:36:51 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Jan Kara,
 William Kucharski
Subject: [PATCH v14 125/138] mm/truncate,shmem: Handle truncates that split
 THPs
Date: Thu, 15 Jul 2021 04:36:51 +0100
Message-Id: <20210715033704.692967-126-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Handle THP splitting in the parts of the truncation functions which
already handle partial pages.  Factor all that code out into a new
function called truncate_inode_partial_page().

We lose the easy 'bail out' path if a truncate or hole punch is
entirely within a single page.  We can add some more complex logic to
restore the optimisation if it proves to be worthwhile.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
 mm/internal.h |   1 +
 mm/shmem.c    |  97 +++++++++++++------------------
 mm/truncate.c | 130 +++++++++++++++++++++++++++++++++-----------------
 3 files changed, 120 insertions(+), 108 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 0910efec5821..3c0c807eddc6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -70,6 +70,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
+bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end);
 
 /**
  * folio_evictable - Test whether a folio is evictable.

diff --git a/mm/shmem.c b/mm/shmem.c
index 2fd75b4d4974..337680a01f2a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -857,32 +857,6 @@ void shmem_unlock_mapping(struct address_space *mapping)
 	}
 }
 
-/*
- * Check whether a hole-punch or truncation needs to split a huge page,
- * returning true if no split was required, or the split has been successful.
- *
- * Eviction (or truncation to 0 size) should never need to split a huge page;
- * but in rare cases might do so, if shmem_undo_range() failed to trylock on
- * head, and then succeeded to trylock on tail.
- *
- * A split can only succeed when there are no additional references on the
- * huge page: so the split below relies upon find_get_entries() having stopped
- * when it found a subpage of the huge page, without getting further references.
- */
-static bool shmem_punch_compound(struct page *page, pgoff_t start, pgoff_t end)
-{
-	if (!PageTransCompound(page))
-		return true;
-
-	/* Just proceed to delete a huge page wholly within the range punched */
-	if (PageHead(page) &&
-	    page->index >= start && page->index + HPAGE_PMD_NR <= end)
-		return true;
-
-	/* Try to split huge page, so we can truly punch the hole or truncate */
-	return split_huge_page(page) >= 0;
-}
-
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -894,13 +868,13 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	pgoff_t start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
-	unsigned int partial_start = lstart & (PAGE_SIZE - 1);
-	unsigned int partial_end = (lend + 1) & (PAGE_SIZE - 1);
 	struct pagevec pvec;
 	pgoff_t indices[PAGEVEC_SIZE];
+	struct page *page;
 	long nr_swaps_freed = 0;
 	pgoff_t index;
 	int i;
+	bool partial_end;
 
 	if (lend == -1)
 		end = -1;	/* unsigned, so actually very big */
@@ -910,7 +884,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end && find_lock_entries(mapping, index, end - 1,
 			&pvec, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+			page = pvec.pages[i];
 
 			index = indices[i];
@@ -933,33 +907,37 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		index++;
 	}
 
-	if (partial_start) {
-		struct page *page = NULL;
-		shmem_getpage(inode, start - 1, &page, SGP_READ);
-		if (page) {
-			unsigned int top = PAGE_SIZE;
-			if (start > end) {
-				top = partial_end;
-				partial_end = 0;
-			}
-			zero_user_segment(page, partial_start, top);
-			set_page_dirty(page);
-			unlock_page(page);
-			put_page(page);
+	partial_end = ((lend + 1) % PAGE_SIZE) > 0;
+	page = NULL;
+	shmem_getpage(inode, lstart >> PAGE_SHIFT, &page, SGP_READ);
+	if (page) {
+		bool same_page;
+
+		page = compound_head(page);
+		same_page = lend < page_offset(page) + thp_size(page);
+		if (same_page)
+			partial_end = false;
+		set_page_dirty(page);
+		if (!truncate_inode_partial_page(page, lstart, lend)) {
+			start = page->index + thp_nr_pages(page);
+			if (same_page)
+				end = page->index;
 		}
+		unlock_page(page);
+		put_page(page);
+		page = NULL;
 	}
-	if (partial_end) {
-		struct page *page = NULL;
+
+	if (partial_end)
 		shmem_getpage(inode, end, &page, SGP_READ);
-		if (page) {
-			zero_user_segment(page, 0, partial_end);
-			set_page_dirty(page);
-			unlock_page(page);
-			put_page(page);
-		}
+	if (page) {
+		page = compound_head(page);
+		set_page_dirty(page);
+		if (!truncate_inode_partial_page(page, lstart, lend))
+			end = page->index;
+		unlock_page(page);
+		put_page(page);
 	}
-	if (start >= end)
-		return;
 
 	index = start;
 	while (index < end) {
@@ -975,7 +953,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			continue;
 		}
 		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+			page = pvec.pages[i];
 
 			index = indices[i];
 			if (xa_is_value(page)) {
@@ -1000,18 +978,9 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				break;
 			}
 			VM_BUG_ON_PAGE(PageWriteback(page), page);
-			if (shmem_punch_compound(page, start, end))
-				truncate_inode_page(mapping, page);
-			else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-				/* Wipe the page and don't get stuck */
-				clear_highpage(page);
-				flush_dcache_page(page);
-				set_page_dirty(page);
-				if (index <
-				    round_up(start, HPAGE_PMD_NR))
-					start = index + 1;
-			}
+			truncate_inode_page(mapping, page);
 			}
+			index = page->index + thp_nr_pages(page) - 1;
 			unlock_page(page);
 		}
 		pagevec_remove_exceptionals(&pvec);

diff --git a/mm/truncate.c b/mm/truncate.c
index 234ddd879caa..b8c9d2fbd9b5 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -220,6 +220,58 @@ int truncate_inode_page(struct address_space *mapping, struct page *page)
 	return 0;
 }
 
+/*
+ * Handle partial (transparent) pages.  The page may be entirely within the
+ * range if a split has raced with us.  If not, we zero the part of the
+ * page that's within the [start, end] range, and then split the page if
+ * it's a THP.  split_page_range() will discard pages which now lie beyond
+ * i_size, and we rely on the caller to discard pages which lie within a
+ * newly created hole.
+ *
+ * Returns false if THP splitting failed so the caller can avoid
+ * discarding the entire page which is stubbornly unsplit.
+ */
+bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end)
+{
+	loff_t pos = page_offset(page);
+	unsigned int offset, length;
+
+	if (pos < start)
+		offset = start - pos;
+	else
+		offset = 0;
+	length = thp_size(page);
+	if (pos + length <= (u64)end)
+		length = length - offset;
+	else
+		length = end + 1 - pos - offset;
+
+	wait_on_page_writeback(page);
+	if (length == thp_size(page)) {
+		truncate_inode_page(page->mapping, page);
+		return true;
+	}
+
+	/*
+	 * We may be zeroing pages we're about to discard, but it avoids
+	 * doing a complex calculation here, and then doing the zeroing
+	 * anyway if the page split fails.
+	 */
+	zero_user(page, offset, length);
+
+	cleancache_invalidate_page(page->mapping, page);
+	if (page_has_private(page))
+		do_invalidatepage(page, offset, length);
+	if (!PageTransHuge(page))
+		return true;
+	if (split_huge_page(page) == 0)
+		return true;
+	if (PageDirty(page))
+		return false;
+	truncate_inode_page(page->mapping, page);
+	return true;
+}
+
 /*
  * Used to get rid of pages on hardware memory corruption.
  */
@@ -255,6 +307,13 @@ int invalidate_inode_page(struct page *page)
 	return invalidate_complete_page(mapping, page);
 }
 
+static inline struct page *find_lock_head(struct address_space *mapping,
+					pgoff_t index)
+{
+	struct folio *folio = __filemap_get_folio(mapping, index, FGP_LOCK, 0);
+
+	return &folio->page;
+}
+
 /**
  * truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
  * @mapping: mapping to truncate
@@ -284,20 +343,16 @@ void truncate_inode_pages_range(struct address_space *mapping,
 {
 	pgoff_t		start;		/* inclusive */
 	pgoff_t		end;		/* exclusive */
-	unsigned int	partial_start;	/* inclusive */
-	unsigned int	partial_end;	/* exclusive */
 	struct pagevec	pvec;
 	pgoff_t		indices[PAGEVEC_SIZE];
 	pgoff_t		index;
 	int		i;
+	struct page	*page;
+	bool		partial_end;
 
 	if (mapping_empty(mapping))
 		goto out;
 
-	/* Offsets within partial pages */
-	partial_start = lstart & (PAGE_SIZE - 1);
-	partial_end = (lend + 1) & (PAGE_SIZE - 1);
-
 	/*
 	 * 'start' and 'end' always covers the range of pages to be fully
	 * truncated.  Partial pages are covered with 'partial_start' at the
@@ -330,48 +385,35 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		cond_resched();
 	}
 
-	if (partial_start) {
-		struct page *page = find_lock_page(mapping, start - 1);
-		if (page) {
-			unsigned int top = PAGE_SIZE;
-			if (start > end) {
-				/* Truncation within a single page */
-				top = partial_end;
-				partial_end = 0;
-			}
-			wait_on_page_writeback(page);
-			zero_user_segment(page, partial_start, top);
-			cleancache_invalidate_page(mapping, page);
-			if (page_has_private(page))
-				do_invalidatepage(page, partial_start,
-						  top - partial_start);
-			unlock_page(page);
-			put_page(page);
+	partial_end = ((lend + 1) % PAGE_SIZE) > 0;
+	page = find_lock_head(mapping, lstart >> PAGE_SHIFT);
+	if (page) {
+		bool same_page = lend < page_offset(page) + thp_size(page);
+		if (same_page)
+			partial_end = false;
+		if (!truncate_inode_partial_page(page, lstart, lend)) {
+			start = page->index + thp_nr_pages(page);
+			if (same_page)
+				end = page->index;
 		}
+		unlock_page(page);
+		put_page(page);
+		page = NULL;
 	}
-	if (partial_end) {
-		struct page *page = find_lock_page(mapping, end);
-		if (page) {
-			wait_on_page_writeback(page);
-			zero_user_segment(page, 0, partial_end);
-			cleancache_invalidate_page(mapping, page);
-			if (page_has_private(page))
-				do_invalidatepage(page, 0,
-						  partial_end);
-			unlock_page(page);
-			put_page(page);
-		}
+
+	if (partial_end)
+		page = find_lock_head(mapping, end);
+	if (page) {
+		if (!truncate_inode_partial_page(page, lstart, lend))
+			end = page->index;
+		unlock_page(page);
+		put_page(page);
 	}
-	/*
-	 * If the truncation happened within a single page no pages
-	 * will be released, just zeroed, so we can bail out now.
-	 */
-	if (start >= end)
-		goto out;
 
 	index = start;
-	for ( ; ; ) {
+	while (index < end) {
 		cond_resched();
+
 		if (!find_get_entries(mapping, index, end - 1, &pvec,
 				indices)) {
 			/* If all gone from start onwards, we're done */
@@ -383,7 +425,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		}
 
 		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+			page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
@@ -392,7 +434,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 				continue;
 
 			lock_page(page);
-			WARN_ON(page_to_index(page) != index);
+			index = page->index + thp_nr_pages(page) - 1;
 			wait_on_page_writeback(page);
 			truncate_inode_page(mapping, page);
 			unlock_page(page);
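The offset/length computation in truncate_inode_partial_page() clamps
the zeroed region to the intersection of [start, end] with the page.  A
worked instance of that arithmetic, assuming 4KiB pages and a 16KiB THP
at file position 0, for a hole punch of bytes 4096-12287:

	loff_t pos = 0;				/* page_offset(page) */
	unsigned int offset = 4096 - 0;		/* start lies inside the THP */
	unsigned int length = 12287 + 1 - 0 - 4096;	/* = 8192: so does end */
	/*
	 * length != thp_size(page) (16384), so the middle 8KiB is zeroed
	 * and split_huge_page() is attempted instead of freeing the THP.
	 */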
From patchwork Thu Jul 15 03:36:52 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Jan Kara,
 William Kucharski
Subject: [PATCH v14 126/138] mm/filemap: Return only head pages from
 find_get_entries
Date: Thu, 15 Jul 2021 04:36:52 +0100
Message-Id: <20210715033704.692967-127-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

All callers now expect head (and base) pages, and can handle multiple
head pages in a single batch, so make find_get_entries() behave that
way.  Also take the opportunity to make it use the pagevec
infrastructure instead of open-coding how pvecs behave.  This has the
side-effect of being able to append to a pagevec with existing
contents, although we don't make use of that functionality anywhere yet.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jan Kara
Reviewed-by: William Kucharski
---
 include/linux/pagemap.h |  2 --
 mm/filemap.c            | 40 ++++++++++------------------------------
 mm/internal.h           |  2 ++
 3 files changed, 12 insertions(+), 32 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 842d130fd6d3..bf8e978a48f2 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -502,8 +502,6 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	return head + (index & (thp_nr_pages(head) - 1));
 }
 
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
-		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
 unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 			pgoff_t end, unsigned int nr_pages,
 			struct page **pages);

diff --git a/mm/filemap.c b/mm/filemap.c
index 1c0c2663c57d..20434d7bdad8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1955,49 +1955,29 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
 * the mapping. The entries are placed in @pvec. find_get_entries()
 * takes a reference on any actual pages it returns.
 *
- * The search returns a group of mapping-contiguous page cache entries
- * with ascending indexes.  There may be holes in the indices due to
- * not-present pages.
+ * The entries have ascending indexes.  The indices may not be consecutive
+ * due to not-present entries or THPs.
 *
 * Any shadow entries of evicted pages, or swap entries from
 * shmem/tmpfs, are included in the returned array.
 *
- * If it finds a Transparent Huge Page, head or tail, find_get_entries()
- * stops at that page: the caller is likely to have a better way to handle
- * the compound page as a whole, and then skip its extent, than repeatedly
- * calling find_get_entries() to return all its tails.
- *
- * Return: the number of pages and shadow entries which were found.
+ * Return: The number of entries which were found.
 */
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
		pgoff_t end, struct pagevec *pvec, pgoff_t *indices)
 {
 	XA_STATE(xas, &mapping->i_pages, start);
-	struct page *page;
-	unsigned int ret = 0;
-	unsigned nr_entries = PAGEVEC_SIZE;
+	struct folio *folio;
 
 	rcu_read_lock();
-	while ((page = &find_get_entry(&xas, end, XA_PRESENT)->page)) {
-		/*
-		 * Terminate early on finding a THP, to allow the caller to
-		 * handle it all at once; but continue if this is hugetlbfs.
-		 */
-		if (!xa_is_value(page) && PageTransHuge(page) &&
-				!PageHuge(page)) {
-			page = find_subpage(page, xas.xa_index);
-			nr_entries = ret + 1;
-		}
-
-		indices[ret] = xas.xa_index;
-		pvec->pages[ret] = page;
-		if (++ret == nr_entries)
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
+		indices[pvec->nr] = xas.xa_index;
+		if (!pagevec_add(pvec, &folio->page))
 			break;
 	}
 	rcu_read_unlock();
 
-	pvec->nr = ret;
-	return ret;
+	return pagevec_count(pvec);
 }
 
 /**
@@ -2016,8 +1996,8 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 * not returned.
 *
 * The entries have ascending indexes.  The indices may not be consecutive
- * due to not-present entries, THP pages, pages which could not be locked
- * or pages under writeback.
+ * due to not-present entries, THPs, pages which could not be locked or
+ * pages under writeback.
 *
 * Return: The number of entries which were found.
 */

diff --git a/mm/internal.h b/mm/internal.h
index 3c0c807eddc6..3e32064df18d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -70,6 +70,8 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
+unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
 bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end);
 
 /**
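Writing the index at indices[pvec->nr] before calling pagevec_add() is
what makes appending to a partially filled pagevec work: both arrays
stay in lockstep.  A hypothetical caller (not in this series) could
batch two disjoint ranges into one pagevec, under the assumption that
the first call leaves room:

	struct pagevec pvec;
	pgoff_t indices[PAGEVEC_SIZE];

	pagevec_init(&pvec);
	find_get_entries(mapping, 0, 15, &pvec, indices);
	/* entries from the second call land after those from the first */
	find_get_entries(mapping, 128, 143, &pvec, indices);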
From patchwork Thu Jul 15 03:36:53 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 127/138] mm: Use multi-index entries in the page cache
Date: Thu, 15 Jul 2021 04:36:53 +0100
Message-Id: <20210715033704.692967-128-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

We currently store order-N THPs as 2^N consecutive entries.  While this
consumes rather more memory than necessary, it also turns out to be
buggy.  A writeback operation which starts in the middle of a dirty THP
will not notice as the dirty bit is only set on the head index.  With
multi-index entries, the dirty bit will be found no matter where in the
THP the iteration starts.

This does end up simplifying the page cache slightly, although not as
much as I had hoped.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 10 -------
 mm/filemap.c            | 63 +++++++++++++++++++++++++----------------
 mm/huge_memory.c        | 20 ++++++++++---
 mm/khugepaged.c         | 12 +++++++-
 mm/migrate.c            |  8 ------
 mm/shmem.c              | 11 ++-----
 6 files changed, 68 insertions(+), 56 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index bf8e978a48f2..25b1bf3b1cdb 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1078,16 +1078,6 @@ static inline unsigned int __readahead_batch(struct readahead_control *rac,
 		VM_BUG_ON_PAGE(PageTail(page), page);
 		array[i++] = page;
 		rac->_batch_count += thp_nr_pages(page);
-
-		/*
-		 * The page cache isn't using multi-index entries yet,
-		 * so the xas cursor needs to be manually moved to the
-		 * next index.  This can be removed once the page cache
-		 * is converted.
-		 */
-		if (PageHead(page))
-			xas_set(&xas, rac->_index + rac->_batch_count);
-
 		if (i == array_sz)
 			break;
 	}

diff --git a/mm/filemap.c b/mm/filemap.c
index 20434d7bdad8..97d17e8c76aa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -134,7 +134,6 @@ static void page_cache_delete(struct address_space *mapping,
 	}
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
-	VM_BUG_ON_FOLIO(nr != 1 && shadow, folio);
 
 	xas_store(&xas, shadow);
 	xas_init_marks(&xas);
@@ -276,8 +275,7 @@ void filemap_remove_folio(struct folio *folio)
 * from the mapping. The function expects @pvec to be sorted by page index
 * and is optimised for it to be dense.
 * It tolerates holes in @pvec (mapping entries at those indices are not
- * modified). The function expects only THP head pages to be present in the
- * @pvec.
+ * modified). The function expects only folios to be present in the @pvec.
 *
 * The function expects the i_pages lock to be held.
 */
@@ -312,20 +310,12 @@ static void page_cache_delete_batch(struct address_space *mapping,
 
 		WARN_ON_ONCE(!folio_test_locked(folio));
 
-		if (folio->index == xas.xa_index)
-			folio->mapping = NULL;
-		/* Leave page->index set: truncation lookup relies on it */
+		folio->mapping = NULL;
+		/* Leave folio->index set: truncation lookup relies on it */
 
-		/*
-		 * Move to the next page in the vector if this is a regular
-		 * page or the index is of the last sub-page of this compound
-		 * page.
-		 */
-		if (folio->index + folio_nr_pages(folio) - 1 ==
-				xas.xa_index)
-			i++;
+		i++;
 		xas_store(&xas, NULL);
-		total_pages++;
+		total_pages += folio_nr_pages(folio);
 	}
 	mapping->nrpages -= total_pages;
 }
@@ -2027,24 +2017,27 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		indices[pvec->nr] = xas.xa_index;
 		if (!pagevec_add(pvec, &folio->page))
 			break;
-		goto next;
+		continue;
 unlock:
 		folio_unlock(folio);
put:
 		folio_put(folio);
-next:
-		if (!xa_is_value(folio) && folio_multi(folio)) {
-			xas_set(&xas, folio->index + folio_nr_pages(folio));
-			/* Did we wrap on 32-bit? */
-			if (!xas.xa_index)
-				break;
-		}
 	}
 	rcu_read_unlock();
 
 	return pagevec_count(pvec);
 }
 
+static inline
+bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max)
+{
+	if (folio_single(folio) || folio_test_hugetlb(folio))
+		return false;
+	if (index >= max)
+		return false;
+	return index < folio->index + folio_nr_pages(folio) - 1;
+}
+
 /**
 * find_get_pages_range - gang pagecache lookup
 * @mapping:	The address_space to search
@@ -2083,11 +2076,17 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 		if (xa_is_value(folio))
 			continue;
 
+again:
 		pages[ret] = folio_file_page(folio, xas.xa_index);
 		if (++ret == nr_pages) {
 			*start = xas.xa_index + 1;
 			goto out;
 		}
+		if (folio_more_pages(folio, xas.xa_index, end)) {
+			xas.xa_index++;
+			folio_ref_inc(folio);
+			goto again;
+		}
 	}
 
 	/*
@@ -2145,9 +2144,15 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
 		if (unlikely(folio != xas_reload(&xas)))
 			goto put_page;
 
-		pages[ret] = &folio->page;
+again:
+		pages[ret] = folio_file_page(folio, xas.xa_index);
 		if (++ret == nr_pages)
 			break;
+		if (folio_more_pages(folio, xas.xa_index, ULONG_MAX)) {
+			xas.xa_index++;
+			folio_ref_inc(folio);
+			goto again;
+		}
 		continue;
put_page:
 		folio_put(folio);
@@ -3169,6 +3174,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	addr = vma->vm_start + ((start_pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	do {
+again:
 		page = folio_file_page(folio, xas.xa_index);
 		if (PageHWPoison(page))
 			goto unlock;
@@ -3190,9 +3196,18 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		do_set_pte(vmf, page, addr);
 
 		/* no need to invalidate: a not-present page won't be cached */
 		update_mmu_cache(vma, addr, vmf->pte);
+		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
+			xas.xa_index++;
+			folio_ref_inc(folio);
+			goto again;
+		}
 		folio_unlock(folio);
 		continue;
unlock:
+		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
+			xas.xa_index++;
+			goto again;
+		}
 		folio_unlock(folio);
 		folio_put(folio);
 	} while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 763bf687ca92..7ea0052172a8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2638,6 +2638,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
+	XA_STATE(xas, &head->mapping->i_pages, head->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
@@ -2700,18 +2701,27 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 	unmap_page(head);
 
+	if (mapping) {
+		xas_split_alloc(&xas, head, compound_order(head),
+				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
+		if (xas_error(&xas)) {
+			ret = xas_error(&xas);
+			goto out_unlock;
+		}
+	}
+
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
 	if (mapping) {
-		XA_STATE(xas, &mapping->i_pages, page_index(head));
-
 		/*
 		 * Check if the head page is present in page cache.
 		 * We assume all tail are present too, if head is there.
 		 */
-		xa_lock(&mapping->i_pages);
+		xas_lock(&xas);
+		xas_reset(&xas);
 		if (xas_load(&xas) != head)
 			goto fail;
+		xas_split(&xas, head, thp_order(head));
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
@@ -2739,7 +2749,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
fail:
 		if (mapping)
-			xa_unlock(&mapping->i_pages);
+			xas_unlock(&xas);
 		local_irq_enable();
 		remap_page(head, thp_nr_pages(head));
 		ret = -EBUSY;
@@ -2753,6 +2763,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (mapping)
 		i_mmap_unlock_read(mapping);
out:
+	/* Free any memory we didn't use */
+	xas_nomem(&xas, 0);
 	count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
 	return ret;
 }

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6b9c98ddcd09..949b583f22c0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1664,7 +1664,10 @@ static void collapse_file(struct mm_struct *mm,
 	}
 	count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC);
 
-	/* This will be less messy when we use multi-index entries */
+	/*
+	 * Ensure we have slots for all the pages in the range.  This is
This is + * almost certainly a no-op because most of the pages must be present + */ do { xas_lock_irq(&xas); xas_create_range(&xas); @@ -1884,6 +1887,9 @@ static void collapse_file(struct mm_struct *mm, __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none); } + /* Join all the small entries into a single multi-index entry */ + xas_set_order(&xas, start, HPAGE_PMD_ORDER); + xas_store(&xas, new_page); xa_locked: xas_unlock_irq(&xas); xa_unlocked: @@ -2005,6 +2011,10 @@ static void khugepaged_scan_file(struct mm_struct *mm, continue; } + /* + * XXX: khugepaged should compact smaller compound pages + * into a PMD sized page + */ if (PageTransCompound(page)) { result = SCAN_PAGE_COMPOUND; break; diff --git a/mm/migrate.c b/mm/migrate.c index 36cdae0a1235..029b592a0066 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -439,14 +439,6 @@ int folio_migrate_mapping(struct address_space *mapping, } xas_store(&xas, newfolio); - if (nr > 1) { - int i; - - for (i = 1; i < nr; i++) { - xas_next(&xas); - xas_store(&xas, newfolio); - } - } /* * Drop cache reference from old page by unfreezing diff --git a/mm/shmem.c b/mm/shmem.c index 337680a01f2a..bdfa60416d68 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -670,7 +670,6 @@ static int shmem_add_to_page_cache(struct page *page, struct mm_struct *charge_mm) { XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page)); - unsigned long i = 0; unsigned long nr = compound_nr(page); int error; @@ -700,17 +699,11 @@ static int shmem_add_to_page_cache(struct page *page, void *entry; xas_lock_irq(&xas); entry = xas_find_conflict(&xas); - if (entry != expected) + if (entry != expected) { xas_set_err(&xas, -EEXIST); - xas_create_range(&xas); - if (xas_error(&xas)) goto unlock; -next: - xas_store(&xas, page); - if (++i < nr) { - xas_next(&xas); - goto next; } + xas_store(&xas, page); if (PageTransHuge(page)) { count_vm_event(THP_FILE_ALLOC); __mod_lruvec_page_state(page, NR_SHMEM_THPS, nr); From patchwork Thu Jul 15 03:36:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12378977 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6EF90C1B08C for ; Thu, 15 Jul 2021 05:21:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1C8D961360 for ; Thu, 15 Jul 2021 05:21:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1C8D961360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 725B48D0087; Thu, 15 Jul 2021 01:21:26 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6E1D08D0065; Thu, 15 Jul 2021 01:21:26 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 54FF18D0087; Thu, 15 Jul 2021 01:21:26 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com 
From patchwork Thu Jul 15 03:36:54 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12378977
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 128/138] iomap: Support multi-page folios in invalidatepage
Date: Thu, 15 Jul 2021 04:36:54 +0100
Message-Id: <20210715033704.692967-129-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

If we're punching a hole in a multi-page folio, we need to remove the
per-page iomap data as the folio is about to be split and each page will
need its own.  This means that writepage can now come across a page with
no iop allocated, so remove the assertion that there is already one, and
just create one (with the uptodate bits set) if there isn't one.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 48de198c5603..7f78256fc0ba 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -474,13 +474,17 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 	trace_iomap_invalidatepage(folio->mapping->host, offset, len);
 
 	/*
-	 * If we are invalidating the entire page, clear the dirty state from it
-	 * and release it to avoid unnecessary buildup of the LRU.
+	 * If we are invalidating the entire folio, clear the dirty state
+	 * from it and release it to avoid unnecessary buildup of the LRU.
 	 */
 	if (offset == 0 && len == folio_size(folio)) {
 		WARN_ON_ONCE(folio_test_writeback(folio));
 		folio_cancel_dirty(folio);
 		iomap_page_release(folio);
+	} else if (folio_multi(folio)) {
+		/* Must release the iop so the page can be split */
+		WARN_ON_ONCE(!folio_test_uptodate(folio) &&
+			     folio_test_dirty(folio));
+		iomap_page_release(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_invalidatepage);
@@ -1300,7 +1304,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct folio *folio, loff_t end_pos)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	unsigned nblocks = i_blocks_per_folio(inode, folio);
@@ -1308,7 +1312,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
-	WARN_ON_ONCE(nblocks > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
 
 	/*
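
The resulting control flow in iomap_invalidatepage() is easier to see
outside diff form.  A condensed model of the logic above (names follow
the patch; this is a sketch, not the verbatim function):

/* Sketch: what invalidatepage now does for multi-page folios. */
static void invalidate_folio_model(struct folio *folio, size_t offset,
		size_t len)
{
	if (offset == 0 && len == folio_size(folio)) {
		/* Whole folio gone: drop dirty state and the iomap_page. */
		folio_cancel_dirty(folio);
		iomap_page_release(folio);
	} else if (folio_multi(folio)) {
		/*
		 * Partial invalidation of a multi-page folio: the folio is
		 * about to be split and each resulting page needs its own
		 * iomap_page, so release the shared one now.  Writeback
		 * recreates it on demand via iomap_page_create().
		 */
		iomap_page_release(folio);
	}
}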
From patchwork Thu Jul 15 03:36:55 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12378979
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 129/138] xfs: Support THPs
Date: Thu, 15 Jul 2021 04:36:55 +0100
Message-Id: <20210715033704.692967-130-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

There is one place which assumes the size of a page; fix it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_aops.c  | 11 ++++++-----
 fs/xfs/xfs_super.c |  3 ++-
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index cb4e0fcf4c76..9ffbd116592a 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -432,10 +432,11 @@ xfs_discard_page(
 	struct page		*page,
 	loff_t			fileoff)
 {
-	struct inode		*inode = page->mapping->host;
+	struct folio		*folio = page_folio(page);
+	struct inode		*inode = folio->mapping->host;
 	struct xfs_inode	*ip = XFS_I(inode);
 	struct xfs_mount	*mp = ip->i_mount;
-	unsigned int		pageoff = offset_in_page(fileoff);
+	size_t			pageoff = offset_in_folio(folio, fileoff);
 	xfs_fileoff_t		start_fsb = XFS_B_TO_FSBT(mp, fileoff);
 	xfs_fileoff_t		pageoff_fsb = XFS_B_TO_FSBT(mp, pageoff);
 	int			error;
@@ -445,14 +446,14 @@ xfs_discard_page(
 
 	xfs_alert_ratelimited(mp,
 		"page discard on page "PTR_FMT", inode 0x%llx, offset %llu.",
-			page, ip->i_ino, fileoff);
+			folio, ip->i_ino, fileoff);
 	error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
-			i_blocks_per_page(inode, page) - pageoff_fsb);
+			i_blocks_per_folio(inode, folio) - pageoff_fsb);
 	if (error && !XFS_FORCED_SHUTDOWN(mp))
 		xfs_alert(mp, "page discard unable to remove delalloc mapping.");
 out_invalidate:
-	iomap_invalidatepage(page, pageoff, PAGE_SIZE - pageoff);
+	iomap_invalidatepage(&folio->page, pageoff, folio_size(folio) - pageoff);
 }
 
 static const struct iomap_writeback_ops xfs_writeback_ops = {
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 2c9e26a44546..24adea02b887 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1891,7 +1891,8 @@ static struct file_system_type xfs_fs_type = {
 	.init_fs_context	= xfs_init_fs_context,
 	.parameters		= xfs_fs_parameters,
 	.kill_sb		= kill_block_super,
-	.fs_flags		= FS_REQUIRES_DEV | FS_ALLOW_IDMAP,
+	.fs_flags		= FS_REQUIRES_DEV | FS_ALLOW_IDMAP |
+				  FS_THP_SUPPORT,
 };
 MODULE_ALIAS_FS("xfs");
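
The page-size assumption being fixed here is the offset calculation:
offset_in_page() masks with PAGE_SIZE - 1, which silently truncates an
offset that lands in the middle of a multi-page folio, while
offset_in_folio() masks with the folio's actual size.  A runnable
userspace model of the arithmetic (4 KiB pages and a 2 MiB folio are
assumed for illustration):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define FOLIO_SIZE	(512 * PAGE_SIZE)	/* one PMD-sized folio */

int main(void)
{
	unsigned long fileoff = 3 * 1048576 + 12345;	/* 3 MiB + change */

	/* Old: offset within one page -- the high bits are lost. */
	unsigned long in_page = fileoff & (PAGE_SIZE - 1);
	/* New: offset within the whole folio. */
	unsigned long in_folio = fileoff & (FOLIO_SIZE - 1);

	printf("offset_in_page:  %lu\n", in_page);	/* 57 */
	printf("offset_in_folio: %lu\n", in_folio);	/* 1 MiB + 12345 */
	return 0;
}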
From patchwork Thu Jul 15 03:36:56 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12378995
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 130/138] mm/truncate: Convert invalidate_inode_pages2_range to folios
Date: Thu, 15 Jul 2021 04:36:56 +0100
Message-Id: <20210715033704.692967-131-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

If we're going to unmap a folio, we have to be sure to unmap the entire
folio, not just the part of it which lies after the search index.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/truncate.c | 62 ++++++++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 30 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index b8c9d2fbd9b5..d068f22fe422 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -599,42 +599,43 @@ void invalidate_mapping_pagevec(struct address_space *mapping,
  * shrink_page_list() has a temp ref on them, or because they're transiently
  * sitting in the lru_cache_add() pagevecs.
  */
-static int
-invalidate_complete_page2(struct address_space *mapping, struct page *page)
+static int invalidate_complete_folio2(struct address_space *mapping,
+		struct folio *folio)
 {
 	unsigned long flags;
 
-	if (page->mapping != mapping)
+	if (folio->mapping != mapping)
 		return 0;
 
-	if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
+	if (folio_has_private(folio) &&
+	    !try_to_release_page(&folio->page, GFP_KERNEL))
 		return 0;
 
 	xa_lock_irqsave(&mapping->i_pages, flags);
-	if (PageDirty(page))
+	if (folio_test_dirty(folio))
 		goto failed;
 
-	BUG_ON(page_has_private(page));
-	__delete_from_page_cache(page, NULL);
+	BUG_ON(folio_has_private(folio));
+	__filemap_remove_folio(folio, NULL);
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 
 	if (mapping->a_ops->freepage)
-		mapping->a_ops->freepage(page);
+		mapping->a_ops->freepage(&folio->page);
 
-	put_page(page);	/* pagecache ref */
+	folio_ref_sub(folio, folio_nr_pages(folio));	/* pagecache ref */
 	return 1;
 failed:
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 	return 0;
 }
 
-static int do_launder_page(struct address_space *mapping, struct page *page)
+static int do_launder_folio(struct address_space *mapping, struct folio *folio)
 {
-	if (!PageDirty(page))
+	if (!folio_test_dirty(folio))
 		return 0;
-	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
+	if (folio->mapping != mapping || mapping->a_ops->launder_page == NULL)
 		return 0;
-	return mapping->a_ops->launder_page(page);
+	return mapping->a_ops->launder_page(&folio->page);
 }
 
 /**
@@ -666,21 +667,21 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	index = start;
 	while (find_get_entries(mapping, index, end, &pvec, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+			struct folio *folio = (struct folio *)pvec.pages[i];
 
-			/* We rely upon deletion not changing page->index */
+			/* We rely upon deletion not changing folio->index */
 			index = indices[i];
 
-			if (xa_is_value(page)) {
+			if (xa_is_value(folio)) {
 				if (!invalidate_exceptional_entry2(mapping,
-						index, page))
+						index, folio))
 					ret = -EBUSY;
 				continue;
 			}
 
-			if (!did_range_unmap && page_mapped(page)) {
+			if (!did_range_unmap && folio_mapped(folio)) {
 				/*
-				 * If page is mapped, before taking its lock,
+				 * If folio is mapped, before taking its lock,
 				 * zap the rest of the file in one hit.
 				 */
 				unmap_mapping_pages(mapping, index,
@@ -688,26 +689,27 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				did_range_unmap = 1;
 			}
 
-			lock_page(page);
-			WARN_ON(page_to_index(page) != index);
-			if (page->mapping != mapping) {
-				unlock_page(page);
+			folio_lock(folio);
+			VM_WARN_ON_ONCE_FOLIO(!folio_contains(folio, index),
+					folio);
+			if (folio->mapping != mapping) {
+				folio_unlock(folio);
 				continue;
 			}
-			wait_on_page_writeback(page);
+			folio_wait_writeback(folio);
 
-			if (page_mapped(page))
-				unmap_mapping_page(page);
-			BUG_ON(page_mapped(page));
+			if (folio_mapped(folio))
+				unmap_mapping_page(&folio->page);
+			BUG_ON(folio_mapped(folio));
 
-			ret2 = do_launder_page(mapping, page);
+			ret2 = do_launder_folio(mapping, folio);
 			if (ret2 == 0) {
-				if (!invalidate_complete_page2(mapping, page))
+				if (!invalidate_complete_folio2(mapping, folio))
 					ret2 = -EBUSY;
 			}
 			if (ret2 < 0)
 				ret = ret2;
-			unlock_page(page);
+			folio_unlock(folio);
 		}
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
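
One behavioural subtlety in this conversion is the index check:
page_to_index(page) != index only held for exact head-page hits, whereas
a lookup inside a multi-page folio can legitimately return that folio for
any index it spans.  folio_contains() expresses exactly that; a sketch of
the equivalent test (assuming, as the page cache does, that the folio is
naturally aligned in the file):

/* Sketch of the range test VM_WARN_ON_ONCE_FOLIO() relies on. */
static bool folio_contains_model(struct folio *folio, unsigned long index)
{
	/* True iff index lands in [folio->index, folio->index + nr).
	 * Unsigned wraparound makes the single compare cover both bounds. */
	return index - folio->index < folio_nr_pages(folio);
}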
From patchwork Thu Jul 15 03:36:57 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12378997
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 131/138] mm/truncate: Fix invalidate_complete_page2 for THPs
Date: Thu, 15 Jul 2021 04:36:57 +0100
Message-Id: <20210715033704.692967-132-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

invalidate_complete_page2() currently open-codes filemap_free_folio(),
except for the part where it handles THP.  Rather than adding that,
call filemap_free_folio() from invalidate_complete_page2().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c  | 3 +--
 mm/internal.h | 1 +
 mm/truncate.c | 5 +----
 3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 97d17e8c76aa..d5787502c3be 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -228,8 +228,7 @@ void __filemap_remove_folio(struct folio *folio, void *shadow)
 	page_cache_delete(mapping, folio, shadow);
 }
 
-static void filemap_free_folio(struct address_space *mapping,
-				struct folio *folio)
+void filemap_free_folio(struct address_space *mapping, struct folio *folio)
 {
 	void (*freepage)(struct page *);
 
diff --git a/mm/internal.h b/mm/internal.h
index 3e32064df18d..d63ef2595eff 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -73,6 +73,7 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
 bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end);
+void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 
 /**
  * folio_evictable - Test whether a folio is evictable.
diff --git a/mm/truncate.c b/mm/truncate.c
index d068f22fe422..e000402e817b 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -619,10 +619,7 @@ static int invalidate_complete_folio2(struct address_space *mapping,
 	__filemap_remove_folio(folio, NULL);
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
 
-	if (mapping->a_ops->freepage)
-		mapping->a_ops->freepage(&folio->page);
-
-	folio_ref_sub(folio, folio_nr_pages(folio));	/* pagecache ref */
+	filemap_free_folio(mapping, folio);
 	return 1;
 failed:
 	xa_unlock_irqrestore(&mapping->i_pages, flags);
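
The THP handling the open-coded copy was missing is the reference count:
in this series the page cache holds one reference per page of a folio, so
freeing must drop folio_nr_pages() references rather than one.  A
condensed model of what the shared helper does (a sketch; error paths and
tracing elided):

/* Sketch: releasing a folio's page-cache references in one place. */
static void filemap_free_folio_model(struct address_space *mapping,
		struct folio *folio)
{
	if (mapping->a_ops->freepage)
		mapping->a_ops->freepage(&folio->page);

	/* One reference per constituent page, not put_page()'s single ref. */
	folio_ref_sub(folio, folio_nr_pages(folio));
}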
From patchwork Thu Jul 15 03:36:58 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12378999
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 132/138] mm/vmscan: Free non-shmem THPs without splitting them
Date: Thu, 15 Jul 2021 04:36:58 +0100
Message-Id: <20210715033704.692967-133-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

We have to allocate memory in order to split a file-backed page, so
it's not a good idea to split them on the reclaim path.  It also doesn't
work for XFS because pages have an extra reference count from
page_has_private() and split_huge_page() expects that reference to have
already been removed.  Unfortunately, we still have to split shmem THPs
because we can't handle swapping out an entire THP yet.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7a2f25b904d9..8b17e46dbf32 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1470,8 +1470,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				/* Adding to swap updated mapping */
 				mapping = page_mapping(page);
 			}
-		} else if (unlikely(PageTransHuge(page))) {
-			/* Split file THP */
+		} else if (PageSwapBacked(page) && PageTransHuge(page)) {
+			/* Split shmem THP */
 			if (split_huge_page_to_list(page, page_list))
 				goto keep_locked;
 		}
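
Spelled out, the reclaim decision after this patch is: only swap-backed
(shmem) THPs are split before pageout; file-backed THPs are left intact
and reclaimed as a unit, avoiding an allocation in the very path that is
trying to free memory.  A condensed model of the branch (a sketch, not
the verbatim shrink_page_list() code):

/* Sketch: THP handling in reclaim.  Returns false to keep the page. */
static bool maybe_split_for_reclaim(struct page *page,
		struct list_head *page_list)
{
	if (PageSwapBacked(page) && PageTransHuge(page)) {
		/* shmem: no whole-THP swapout yet, so split or keep */
		return split_huge_page_to_list(page, page_list) == 0;
	}
	/* file THP: freed whole, no split (and no allocation) needed */
	return true;
}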
From patchwork Thu Jul 15 03:36:59 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12379001
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 133/138] mm: Fix READ_ONLY_THP warning
Date: Thu, 15 Jul 2021 04:36:59 +0100
Message-Id: <20210715033704.692967-134-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

These counters only exist if CONFIG_READ_ONLY_THP_FOR_FS is defined,
but we should not warn if the filesystem natively supports THPs.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 25b1bf3b1cdb..81cccb708df7 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -146,7 +146,7 @@ static inline void filemap_nr_thps_inc(struct address_space *mapping)
 	if (!mapping_thp_support(mapping))
 		atomic_inc(&mapping->nr_thps);
 #else
-	WARN_ON_ONCE(1);
+	WARN_ON_ONCE(!mapping_thp_support(mapping));
 #endif
 }
 
@@ -156,7 +156,7 @@ static inline void filemap_nr_thps_dec(struct address_space *mapping)
 	if (!mapping_thp_support(mapping))
 		atomic_dec(&mapping->nr_thps);
 #else
-	WARN_ON_ONCE(1);
+	WARN_ON_ONCE(!mapping_thp_support(mapping));
 #endif
 }
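
The truth table the fix encodes: with CONFIG_READ_ONLY_THP_FOR_FS the
counter exists and nothing warns; without it, a THP can still appear
legitimately when the filesystem itself declared support (FS_THP_SUPPORT,
as XFS did two patches earlier), so only the nobody-supports-THP case is
a bug.  A sketch of the resulting hook (mapping_thp_support() is the
series' per-mapping flag test):

static inline void filemap_nr_thps_inc_model(struct address_space *mapping)
{
#ifdef CONFIG_READ_ONLY_THP_FOR_FS
	/* Counter is only needed for filesystems without native support */
	if (!mapping_thp_support(mapping))
		atomic_inc(&mapping->nr_thps);
#else
	/* No counter; a THP here is only a bug if the fs can't do THPs */
	WARN_ON_ONCE(!mapping_thp_support(mapping));
#endif
}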
From patchwork Thu Jul 15 03:37:00 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12379003
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 134/138] mm: Support arbitrary THP sizes
Date: Thu, 15 Jul 2021 04:37:00 +0100
Message-Id: <20210715033704.692967-135-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Use the compound size of the page instead of assuming PTE or PMD size.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h |  8 ++------
 include/linux/mm.h      | 42 ++++++++++++++++++++---------------------
 2 files changed, 23 insertions(+), 27 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f280f33ff223..b70318fe7863 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -257,9 +257,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 static inline unsigned int thp_order(struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(page), page);
-	if (PageHead(page))
-		return HPAGE_PMD_ORDER;
-	return 0;
+	return compound_order(page);
 }
 
 /**
@@ -269,9 +267,7 @@ static inline unsigned int thp_order(struct page *page)
 static inline int thp_nr_pages(struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(page), page);
-	if (PageHead(page))
-		return HPAGE_PMD_NR;
-	return 1;
+	return compound_nr(page);
 }
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 99f5f736be64..df1f4c4976df 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -715,6 +715,27 @@ int vma_is_stack_for_current(struct vm_area_struct *vma);
 struct mmu_gather;
 struct inode;
 
+static inline unsigned int compound_order(struct page *page)
+{
+	if (!PageHead(page))
+		return 0;
+	return page[1].compound_order;
+}
+
+/* Returns the number of pages in this potentially compound page. */
+static inline unsigned long compound_nr(struct page *page)
+{
+	if (!PageHead(page))
+		return 1;
+	return page[1].compound_nr;
+}
+
+static inline void set_compound_order(struct page *page, unsigned int order)
+{
+	page[1].compound_order = order;
+	page[1].compound_nr = 1U << order;
+}
+
 #include <linux/huge_mm.h>
 
 /*
@@ -937,13 +958,6 @@ static inline void destroy_compound_page(struct page *page)
 	compound_page_dtors[page[1].compound_dtor](page);
 }
 
-static inline unsigned int compound_order(struct page *page)
-{
-	if (!PageHead(page))
-		return 0;
-	return page[1].compound_order;
-}
-
 /**
  * folio_order - The allocation order of a folio.
  * @folio: The folio.
@@ -981,20 +995,6 @@ static inline int compound_pincount(struct page *page)
 	return head_compound_pincount(page);
 }
 
-static inline void set_compound_order(struct page *page, unsigned int order)
-{
-	page[1].compound_order = order;
-	page[1].compound_nr = 1U << order;
-}
-
-/* Returns the number of pages in this potentially compound page. */
-static inline unsigned long compound_nr(struct page *page)
-{
-	if (!PageHead(page))
-		return 1;
-	return page[1].compound_nr;
-}
-
 /* Returns the number of bytes in this potentially compound page. */
 static inline unsigned long page_size(struct page *page)
 {
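
With thp_order()/thp_nr_pages() now deferring to the stored compound
metadata, a THP can be any order rather than exactly HPAGE_PMD_ORDER.
The invariant set_compound_order() maintains (compound_nr == 1 << order)
is simple enough to check in userspace; a runnable sketch, assuming
4 KiB pages:

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;

	for (unsigned int order = 0; order <= 9; order++) {
		unsigned long nr = 1UL << order;	/* compound_nr */
		printf("order %u: %4lu pages, %8lu KiB\n",
		       order, nr, nr * page_size / 1024);
	}
	/* order 9 == 512 pages == 2 MiB, the old one-size-fits-all THP */
	return 0;
}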
From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 135/138] mm/filemap: Allow multi-page folios to be added to the page cache Date: Thu, 15 Jul 2021 04:37:01 +0100 Message-Id: <20210715033704.692967-136-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=FFwiokBZ; spf=none (imf24.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: mqh6ja9u6y1ahhjd1hmoioajkqtcdscc X-Rspamd-Queue-Id: 25CF6B0000A0 X-HE-Tag: 1626326843-57025 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We return -EEXIST if there are any non-shadow entries in the page cache in the range covered by the folio. If there are multiple shadow entries in the range, we set *shadowp to one of them (currently the one at the highest index). If that turns out to be the wrong answer, we can implement something more complex. This is mostly modelled after the equivalent function in the shmem code. Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 40 +++++++++++++++++++++++----------------- 1 file changed, 23 insertions(+), 17 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index d5787502c3be..1ff21b3346d3 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -848,26 +848,27 @@ noinline int __filemap_add_folio(struct address_space *mapping, { XA_STATE(xas, &mapping->i_pages, index); int huge = folio_test_hugetlb(folio); - int error; bool charged = false; + unsigned int nr = 1; VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio); mapping_set_update(&xas, mapping); - folio_get(folio); - folio->mapping = mapping; - folio->index = index; - if (!huge) { - error = mem_cgroup_charge(folio, NULL, gfp); + int error = mem_cgroup_charge(folio, NULL, gfp); VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio); if (error) - goto error; + return error; charged = true; + xas_set_order(&xas, index, folio_order(folio)); + nr = folio_nr_pages(folio); } gfp &= GFP_RECLAIM_MASK; + folio_ref_add(folio, nr); + folio->mapping = mapping; + folio->index = xas.xa_index; do { unsigned int order = xa_get_order(xas.xa, xas.xa_index); @@ -891,6 +892,8 @@ noinline int __filemap_add_folio(struct address_space *mapping, /* entry may have been split before we acquired lock */ order = xa_get_order(xas.xa, xas.xa_index); if (order > folio_order(folio)) { + /* How to handle large swap entries? 
*/ + BUG_ON(shmem_mapping(mapping)); xas_split(&xas, old, order); xas_reset(&xas); } @@ -900,29 +903,32 @@ noinline int __filemap_add_folio(struct address_space *mapping, if (xas_error(&xas)) goto unlock; - mapping->nrpages++; + mapping->nrpages += nr; /* hugetlb pages do not participate in page cache accounting */ - if (!huge) - __lruvec_stat_add_folio(folio, NR_FILE_PAGES); + if (!huge) { + __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr); + if (nr > 1) + __lruvec_stat_mod_folio(folio, + NR_FILE_THPS, nr); + } unlock: xas_unlock_irq(&xas); } while (xas_nomem(&xas, gfp)); - if (xas_error(&xas)) { - error = xas_error(&xas); - if (charged) - mem_cgroup_uncharge(folio); + if (xas_error(&xas)) goto error; - } trace_mm_filemap_add_to_page_cache(&folio->page); return 0; error: + if (charged) + mem_cgroup_uncharge(folio); folio->mapping = NULL; /* Leave page->index set: truncation relies upon it */ - folio_put(folio); - return error; + folio_ref_sub(folio, nr); + VM_BUG_ON_FOLIO(folio_ref_count(folio) <= 0, folio); + return xas_error(&xas); } ALLOW_ERROR_INJECTION(__filemap_add_folio, ERRNO); From patchwork Thu Jul 15 03:37:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12379007 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C290AC07E96 for ; Thu, 15 Jul 2021 05:28:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 746E6610C7 for ; Thu, 15 Jul 2021 05:28:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 746E6610C7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C743F8D008F; Thu, 15 Jul 2021 01:28:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BFC358D0065; Thu, 15 Jul 2021 01:28:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A9CB78D008F; Thu, 15 Jul 2021 01:28:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0008.hostedemail.com [216.40.44.8]) by kanga.kvack.org (Postfix) with ESMTP id 7DEE78D0065 for ; Thu, 15 Jul 2021 01:28:04 -0400 (EDT) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 73B6D18046DB4 for ; Thu, 15 Jul 2021 05:28:03 +0000 (UTC) X-FDA: 78363690846.04.F4AF888 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf25.hostedemail.com (Postfix) with ESMTP id 39314B0002A9 for ; Thu, 15 Jul 2021 05:28:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; 
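
The bookkeeping this introduces for an order-N folio, gathered in one
place: a single multi-index store, but N page-cache references, N added
to nrpages and NR_FILE_PAGES, plus NR_FILE_THPS for the compound case.
A condensed model of the success path (a sketch; locking and error
handling elided):

/* Sketch: per-folio page-cache accounting in __filemap_add_folio(). */
static void account_added_folio_model(struct address_space *mapping,
		struct folio *folio)
{
	unsigned long nr = folio_nr_pages(folio);

	folio_ref_add(folio, nr);		/* one ref per page */
	mapping->nrpages += nr;
	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
	if (nr > 1)
		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, nr);
}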
bh=5ILLJtmg4y7//Nl5cqO6Dt5oKOASFCS0mS2xCX9QyaA=; b=rssLRkif3t/tSVMHXk5c7aqrPF tmQzfdg5A7AVrJdCKgh6hqc3VrLK5+CXT/LRghngVX1rUsJvmB3sCMEXBrczqRSQ3UVQ/oboWCUlW O+v3hWasBVRHwszi7py/QLZ2F642acp6xDEL/WOiZvf/MrYaB26iPSdfSx1jkbGmez1q/v+reqoW1 +n00MTXt0Wfeodco8WSHQ+ijBd94fFdDGSX7O/gxAcauUcyJS6SZrILn+4i1HsDQAzJsQKkz6JhXe LmvIMAK8Mmb5sluf4xgp/VdtsNZ0/v4pGyYqPw9Vp0k8XmfB7srRXdrDdf2exrmol83pWZ2j87k/y YQ0Il39g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1m3ttl-0031BC-3m; Thu, 15 Jul 2021 05:27:10 +0000 From: "Matthew Wilcox (Oracle)" To: linux-kernel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v14 136/138] mm/vmscan: Optimise shrink_page_list for smaller THPs Date: Thu, 15 Jul 2021 04:37:02 +0100 Message-Id: <20210715033704.692967-137-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rssLRkif; spf=none (imf25.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam05 X-Stat-Signature: atwdhwkwmbpsf19f6ebozafb4p5b1qhy X-Rspamd-Queue-Id: 39314B0002A9 X-HE-Tag: 1626326883-1521 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: A THP which is smaller than a PMD does not need to do the extra work in try_to_unmap() of trying to split a PMD entry. Signed-off-by: Matthew Wilcox (Oracle) --- mm/vmscan.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 8b17e46dbf32..433956675107 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1496,7 +1496,8 @@ static unsigned int shrink_page_list(struct list_head *page_list, enum ttu_flags flags = TTU_BATCH_FLUSH; bool was_swapbacked = PageSwapBacked(page); - if (unlikely(PageTransHuge(page))) + if (PageTransHuge(page) && + thp_order(page) >= HPAGE_PMD_ORDER) flags |= TTU_SPLIT_HUGE_PMD; try_to_unmap(page, flags); From patchwork Thu Jul 15 03:37:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12379015 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1AC21C07E96 for ; Thu, 15 Jul 2021 05:28:50 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C44A061360 for ; Thu, 15 Jul 2021 05:28:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C44A061360 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 1EB5A8D0090; Thu, 15 Jul 2021 01:28:50 -0400 (EDT) 
From patchwork Thu Jul 15 03:37:03 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12379015
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 137/138] mm/readahead: Convert page_cache_async_ra() to take a folio
Date: Thu, 15 Jul 2021 04:37:03 +0100
Message-Id: <20210715033704.692967-138-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

This lets us pass the folio in directly from filemap_readahead(), but
its primary reason is to enable us to pass a folio to
ondemand_readahead() in the next patch.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 4 ++--
 mm/filemap.c            | 5 +++--
 mm/readahead.c          | 6 +++---
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 81cccb708df7..d3f0b68ea3b1 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -955,7 +955,7 @@ struct readahead_control {
 void page_cache_ra_unbounded(struct readahead_control *,
 		unsigned long nr_to_read, unsigned long lookahead_count);
 void page_cache_sync_ra(struct readahead_control *, unsigned long req_count);
-void page_cache_async_ra(struct readahead_control *, struct page *,
+void page_cache_async_ra(struct readahead_control *, struct folio *,
 		unsigned long req_count);
 void readahead_expand(struct readahead_control *ractl, loff_t new_start,
 		size_t new_len);
@@ -1002,7 +1002,7 @@ void page_cache_async_readahead(struct address_space *mapping,
 		struct page *page, pgoff_t index, unsigned long req_count)
 {
 	DEFINE_READAHEAD(ractl, file, ra, mapping, index);
-	page_cache_async_ra(&ractl, page, req_count);
+	page_cache_async_ra(&ractl, page_folio(page), req_count);
 }
 
 static inline struct folio *__readahead_folio(struct readahead_control *ractl)
diff --git a/mm/filemap.c b/mm/filemap.c
index 1ff21b3346d3..ee7c72b4edee 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2419,10 +2419,11 @@ static int filemap_readahead(struct kiocb *iocb, struct file *file,
 		struct address_space *mapping, struct folio *folio,
 		pgoff_t last_index)
 {
+	DEFINE_READAHEAD(ractl, file, &file->f_ra, mapping, folio->index);
+
 	if (iocb->ki_flags & IOCB_NOIO)
 		return -EAGAIN;
-	page_cache_async_readahead(mapping, &file->f_ra, file, &folio->page,
-			folio->index, last_index - folio->index);
+	page_cache_async_ra(&ractl, folio, last_index - folio->index);
 	return 0;
 }
 
diff --git a/mm/readahead.c b/mm/readahead.c
index d589f147f4c2..e1df44ad57ed 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -580,7 +580,7 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 EXPORT_SYMBOL_GPL(page_cache_sync_ra);
 
 void page_cache_async_ra(struct readahead_control *ractl,
-		struct page *page, unsigned long req_count)
+		struct folio *folio, unsigned long req_count)
 {
 	/* no read-ahead */
 	if (!ractl->ra->ra_pages)
@@ -589,10 +589,10 @@ void page_cache_async_ra(struct readahead_control *ractl,
 	/*
 	 * Same bit is used for PG_readahead and PG_reclaim.
 	 */
-	if (PageWriteback(page))
+	if (folio_test_writeback(folio))
 		return;
 
-	ClearPageReadahead(page);
+	folio_clear_readahead(folio);
 
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
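[Not part of the patch: a self-contained sketch of the conversion
pattern used here, with simplified stand-in types rather than the
kernel's real struct page and struct folio.  The core routine takes
the new folio type, and the legacy page-based entry point shrinks to a
thin wrapper that converts with page_folio().]

#include <stdio.h>

struct folio { unsigned long index; };
/* Toy model only; the real kernel resolves this via compound_head(). */
struct page { struct folio f; };

static struct folio *page_folio(struct page *page)
{
	return &page->f;
}

/* New-style core API: operates on the folio directly. */
static void async_ra(struct folio *folio, unsigned long req_count)
{
	printf("async readahead: index %lu, %lu pages\n",
	       folio->index, req_count);
}

/* Old-style entry point survives as a thin conversion wrapper. */
static void async_readahead(struct page *page, unsigned long req_count)
{
	async_ra(page_folio(page), req_count);
}

int main(void)
{
	struct page p = { .f = { .index = 42 } };

	async_readahead(&p, 8);	/* existing page-based callers still work */
	return 0;
}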
From patchwork Thu Jul 15 03:37:04 2021
From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 138/138] mm/readahead: Add multi-page folio readahead
Date: Thu, 15 Jul 2021 04:37:04 +0100
Message-Id: <20210715033704.692967-139-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
If the filesystem supports multi-page folios, allocate larger pages in
the readahead code when it seems worth doing.  The heuristic for
choosing larger page sizes will surely need some tuning, but this
aggressive ramp-up has been good for testing.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/readahead.c | 102 +++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 95 insertions(+), 7 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index e1df44ad57ed..27e76cc2a9ba 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -149,7 +149,7 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
 
 	blk_finish_plug(&plug);
 
-	BUG_ON(!list_empty(pages));
+	BUG_ON(pages && !list_empty(pages));
 	BUG_ON(readahead_count(rac));
 
 out:
@@ -430,11 +430,99 @@ static int try_context_readahead(struct address_space *mapping,
 	return 1;
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
+		pgoff_t mark, unsigned int order, gfp_t gfp)
+{
+	int err;
+	struct folio *folio = filemap_alloc_folio(gfp, order);
+
+	if (!folio)
+		return -ENOMEM;
+	if (mark - index < (1UL << order))
+		folio_set_readahead(folio);
+	err = filemap_add_folio(ractl->mapping, folio, index, gfp);
+	if (err)
+		folio_put(folio);
+	else
+		ractl->_nr_pages += 1UL << order;
+	return err;
+}
+
+static void page_cache_ra_order(struct readahead_control *ractl,
+		struct file_ra_state *ra, unsigned int new_order)
+{
+	struct address_space *mapping = ractl->mapping;
+	pgoff_t index = readahead_index(ractl);
+	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
+	pgoff_t mark = index + ra->size - ra->async_size;
+	int err = 0;
+	gfp_t gfp = readahead_gfp_mask(mapping);
+
+	if (!mapping_thp_support(mapping) || ra->size < 4)
+		goto fallback;
+
+	limit = min(limit, index + ra->size - 1);
+
+	/* Grow page size up to PMD size */
+	if (new_order < HPAGE_PMD_ORDER) {
+		new_order += 2;
+		if (new_order > HPAGE_PMD_ORDER)
+			new_order = HPAGE_PMD_ORDER;
+		while ((1 << new_order) > ra->size)
+			new_order--;
+	}
+
+	while (index <= limit) {
+		unsigned int order = new_order;
+
+		/* Align with smaller pages if needed */
+		if (index & ((1UL << order) - 1)) {
+			order = __ffs(index);
+			if (order == 1)
+				order = 0;
+		}
+		/* Don't allocate pages past EOF */
+		while (index + (1UL << order) - 1 > limit) {
+			if (--order == 1)
+				order = 0;
+		}
+		err = ra_alloc_folio(ractl, index, mark, order, gfp);
+		if (err)
+			break;
+		index += 1UL << order;
+	}
+
+	if (index > limit) {
+		ra->size += index - limit - 1;
+		ra->async_size += index - limit - 1;
+	}
+
+	read_pages(ractl, NULL, false);
+
+	/*
+	 * If there were already pages in the page cache, then we may have
+	 * left some gaps.  Let the regular readahead code take care of this
+	 * situation.
+	 */
+	if (!err)
+		return;
+fallback:
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
+}
+#else
+static void page_cache_ra_order(struct readahead_control *ractl,
+		struct file_ra_state *ra, unsigned int order)
+{
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
+}
+#endif
+
 /*
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
 static void ondemand_readahead(struct readahead_control *ractl,
-		bool hit_readahead_marker, unsigned long req_size)
+		struct folio *folio, unsigned long req_size)
 {
 	struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host);
 	struct file_ra_state *ra = ractl->ra;
@@ -469,12 +557,12 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	}
 
 	/*
-	 * Hit a marked page without valid readahead state.
+	 * Hit a marked folio without valid readahead state.
 	 * E.g. interleaved reads.
 	 * Query the pagecache for async_size, which normally equals to
 	 * readahead size. Ramp it up and use it as the new readahead size.
 	 */
-	if (hit_readahead_marker) {
+	if (folio) {
 		pgoff_t start;
 
 		rcu_read_lock();
@@ -547,7 +635,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	}
 
 	ractl->_index = ra->start;
-	do_page_cache_ra(ractl, ra->size, ra->async_size);
+	page_cache_ra_order(ractl, ra, folio ? folio_order(folio) : 0);
 }
 
 void page_cache_sync_ra(struct readahead_control *ractl,
@@ -575,7 +663,7 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 	}
 
 	/* do read-ahead */
-	ondemand_readahead(ractl, false, req_count);
+	ondemand_readahead(ractl, NULL, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_ra);
 
@@ -604,7 +692,7 @@ void page_cache_async_ra(struct readahead_control *ractl,
 		return;
 
 	/* do read-ahead */
-	ondemand_readahead(ractl, true, req_count);
+	ondemand_readahead(ractl, folio, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_ra);
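[Not part of the patch: a standalone userspace simulation of the
allocation-order heuristic in page_cache_ra_order().  The window
values are made-up inputs, HPAGE_PMD_ORDER of 9 is an assumption, and
lowest_set() stands in for the kernel's __ffs().  Order 1 is skipped
because the kernel cannot use order-1 compound pages here, which is
why the real code also drops from 1 to 0.]

#include <stdio.h>

#define HPAGE_PMD_ORDER 9

static unsigned int lowest_set(unsigned long x)	/* models __ffs() */
{
	unsigned int n = 0;

	while (!(x & 1)) {
		x >>= 1;
		n++;
	}
	return n;
}

int main(void)
{
	unsigned long index = 2, limit = 14;	/* hypothetical window */
	unsigned int new_order = 0, ra_size = 16;

	/* Ramp the order up by 2, capped at PMD order and window size. */
	if (new_order < HPAGE_PMD_ORDER) {
		new_order += 2;
		if (new_order > HPAGE_PMD_ORDER)
			new_order = HPAGE_PMD_ORDER;
		while ((1UL << new_order) > ra_size)
			new_order--;
	}

	while (index <= limit) {
		unsigned int order = new_order;

		/* Keep each folio naturally aligned to its size. */
		if (index & ((1UL << order) - 1)) {
			order = lowest_set(index);
			if (order == 1)
				order = 0;
		}
		/* Never allocate past the EOF limit. */
		while (index + (1UL << order) - 1 > limit) {
			if (--order == 1)
				order = 0;
		}
		printf("allocate folio: index %2lu, order %u\n", index, order);
		index += 1UL << order;
	}
	return 0;
}

Running this prints order-0 allocations at indices 2 and 3, order-2
folios at 4 and 8, then order-0 again near the limit, showing how the
loop falls back to small folios at unaligned offsets and at EOF.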