From patchwork Wed Mar 13 17:02:43 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591597
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org
Subject: [PATCH v3 01/11] mm: Support order-1 folios in the page cache
Date: Wed, 13 Mar 2024 18:02:43 +0100
Message-ID: <20240313170253.2324812-2-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>
MIME-Version: 1.0

From: "Matthew Wilcox (Oracle)"

Folios of order 1 have no space to store the deferred list. This is not a
problem for the page cache as file-backed folios are never placed on the
deferred list. All we need to do is prevent the core MM from touching the
deferred list for order 1 folios and remove the code which prevented us
from allocating order 1 folios.
Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h |  7 +++++--
 mm/filemap.c            |  2 --
 mm/huge_memory.c        | 23 ++++++++++++++++++-----
 mm/internal.h           |  4 +---
 mm/readahead.c          |  3 ---
 5 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
         unsigned long len, unsigned long pgoff, unsigned long flags);
 
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
     return 0;
 }
 
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+    return folio;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 4a30de98a8c7..a1cb3ea55fb6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
             gfp_t alloc_gfp = gfp;
 
             err = -ENOMEM;
-            if (order == 1)
-                order = 0;
             if (order > 0)
                 alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
             folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..81fd1ba57088 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
 }
 #endif
 
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
 {
-    VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
-    INIT_LIST_HEAD(&folio->_deferred_list);
+    if (!folio || !folio_test_large(folio))
+        return folio;
+    if (folio_order(folio) > 1)
+        INIT_LIST_HEAD(&folio->_deferred_list);
     folio_set_large_rmappable(folio);
+
+    return folio;
 }
 
 static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3082,7 +3086,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
     /* Prevent deferred_split_scan() touching ->_refcount */
     spin_lock(&ds_queue->split_queue_lock);
     if (folio_ref_freeze(folio, 1 + extra_pins)) {
-        if (!list_empty(&folio->_deferred_list)) {
+        if (folio_order(folio) > 1 &&
+            !list_empty(&folio->_deferred_list)) {
             ds_queue->split_queue_len--;
             list_del(&folio->_deferred_list);
         }
@@ -3133,6 +3138,9 @@ void folio_undo_large_rmappable(struct folio *folio)
     struct deferred_split *ds_queue;
     unsigned long flags;
 
+    if (folio_order(folio) <= 1)
+        return;
+
     /*
      * At this point, there is no one trying to add the folio to
      * deferred_list. If folio is not in deferred_list, it's safe
@@ -3158,7 +3166,12 @@ void deferred_split_folio(struct folio *folio)
 #endif
     unsigned long flags;
 
-    VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+    /*
+     * Order 1 folios have no space for a deferred list, but we also
+     * won't waste much memory by not adding them to the deferred list.
+     */
+    if (folio_order(folio) <= 1)
+        return;
 
     /*
      * The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
 {
     struct folio *folio = (struct folio *)page;
 
-    if (folio && folio_order(folio) > 1)
-        folio_prep_large_rmappable(folio);
-    return folio;
+    return folio_prep_large_rmappable(folio);
 }
 
 static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index 2648ec4f0494..369c70e2be42 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -516,9 +516,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
         /* Don't allocate pages past EOF */
         while (index + (1UL << order) - 1 > limit)
             order--;
-        /* THP machinery does not support order-1 */
-        if (order == 1)
-            order = 0;
         err = ra_alloc_folio(ractl, index, mark, order, gfp);
         if (err)
             break;
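
For readers following the series, the invariant the patch establishes boils down
to a single condition: the deferred split list may only be touched for folios of
order 2 or more, because an order-1 folio spans only two pages and has no room
for the _deferred_list. The stand-alone C sketch below illustrates just that
guard. struct folio_model and may_use_deferred_list() are simplified stand-ins
invented for illustration, not kernel definitions; only the order > 1 test
mirrors the checks added in folio_prep_large_rmappable(),
split_huge_page_to_list(), folio_undo_large_rmappable() and
deferred_split_folio().

/*
 * Stand-alone sketch (not kernel code): a simplified model of the order-1
 * rule enforced by this patch.  Build with e.g. "cc -o order1 order1.c".
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct folio; only the fields the rule needs. */
struct folio_model {
    unsigned int order;    /* what folio_order() would return */
    bool large;            /* what folio_test_large() would return */
};

/*
 * Mirrors the guard added throughout the patch: the deferred split list
 * exists only for large folios of order 2 or more.
 */
static bool may_use_deferred_list(const struct folio_model *folio)
{
    return folio->large && folio->order > 1;
}

int main(void)
{
    struct folio_model order1 = { .order = 1, .large = true };
    struct folio_model order2 = { .order = 2, .large = true };

    /* Prints "no" for the order-1 folio and "yes" for the order-2 one. */
    printf("order-1 may use deferred list: %s\n",
           may_use_deferred_list(&order1) ? "yes" : "no");
    printf("order-2 may use deferred list: %s\n",
           may_use_deferred_list(&order2) ? "yes" : "no");
    return 0;
}

The same condition is why __filemap_get_folio() and page_cache_ra_order() no
longer need to round an order-1 request down to order 0: file-backed folios are
never placed on the deferred list, so nothing in the page cache depends on it.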