From patchwork Wed Mar 13 17:02:43 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591597
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de,
 mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org
Subject: [PATCH v3 01/11] mm: Support order-1 folios in the page cache
Date: Wed, 13 Mar 2024 18:02:43 +0100
Message-ID: <20240313170253.2324812-2-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: "Matthew Wilcox (Oracle)"

Folios of order 1 have no space to store the deferred list.  This is
not a problem for the page cache as file-backed folios are never placed
on the deferred list.  All we need to do is prevent the core MM from
touching the deferred list for order 1 folios and remove the code which
prevented us from allocating order 1 folios.

Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h |  7 +++++--
 mm/filemap.c            |  2 --
 mm/huge_memory.c        | 23 ++++++++++++++++++-----
 mm/internal.h           |  4 +---
 mm/readahead.c          |  3 ---
 5 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return 0;
 }
 
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+	return folio;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 4a30de98a8c7..a1cb3ea55fb6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			gfp_t alloc_gfp = gfp;
 
 			err = -ENOMEM;
-			if (order == 1)
-				order = 0;
 			if (order > 0)
 				alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
 			folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..81fd1ba57088 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
 }
 #endif
 
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
 {
-	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
-	INIT_LIST_HEAD(&folio->_deferred_list);
+	if (!folio || !folio_test_large(folio))
+		return folio;
+	if (folio_order(folio) > 1)
+		INIT_LIST_HEAD(&folio->_deferred_list);
 	folio_set_large_rmappable(folio);
+
+	return folio;
 }
 
 static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3082,7 +3086,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
-		if (!list_empty(&folio->_deferred_list)) {
+		if (folio_order(folio) > 1 &&
+		    !list_empty(&folio->_deferred_list)) {
 			ds_queue->split_queue_len--;
 			list_del(&folio->_deferred_list);
 		}
@@ -3133,6 +3138,9 @@ void folio_undo_large_rmappable(struct folio *folio)
 	struct deferred_split *ds_queue;
 	unsigned long flags;
 
+	if (folio_order(folio) <= 1)
+		return;
+
 	/*
 	 * At this point, there is no one trying to add the folio to
 	 * deferred_list. If folio is not in deferred_list, it's safe
@@ -3158,7 +3166,12 @@ void deferred_split_folio(struct folio *folio)
 #endif
 	unsigned long flags;
 
-	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+	/*
+	 * Order 1 folios have no space for a deferred list, but we also
+	 * won't waste much memory by not adding them to the deferred list.
+	 */
+	if (folio_order(folio) <= 1)
+		return;
 
 	/*
 	 * The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
-	if (folio && folio_order(folio) > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+	return folio_prep_large_rmappable(folio);
 }
 
 static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index 2648ec4f0494..369c70e2be42 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -516,9 +516,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		/* Don't allocate pages past EOF */
 		while (index + (1UL << order) - 1 > limit)
 			order--;
-		/* THP machinery does not support order-1 */
-		if (order == 1)
-			order = 0;
 		err = ra_alloc_folio(ractl, index, mark, order, gfp);
 		if (err)
 			break;
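The rule this patch applies is small enough to show on its own.  The
following is a minimal userspace sketch, not kernel code: it only
illustrates that an order-N folio covers (1 << N) pages and that, with
this patch, deferred-split handling is skipped for anything below
order 2 (4 KiB base pages are assumed).

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumption: 4 KiB base pages */

/* An order-N folio is a contiguous run of (1 << N) pages. */
static unsigned long folio_bytes(unsigned int order)
{
	return PAGE_SIZE << order;
}

/*
 * Mirrors the checks added in deferred_split_folio() and
 * folio_undo_large_rmappable(): order-0 and order-1 folios have no
 * room for the deferred list, so they never take part in deferred
 * splitting.
 */
static bool uses_deferred_split(unsigned int order)
{
	return order > 1;
}

int main(void)
{
	for (unsigned int order = 0; order <= 3; order++)
		printf("order %u: %3lu KiB, deferred split: %s\n",
		       order, folio_bytes(order) / 1024,
		       uses_deferred_split(order) ? "yes" : "no");
	return 0;
}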
From patchwork Wed Mar 13 17:02:44 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591600
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de,
 mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
 Pankaj Raghav
Subject: [PATCH v3 02/11] fs: Allow fine-grained control of folio sizes
Date: Wed, 13 Mar 2024 18:02:44 +0100
Message-ID: <20240313170253.2324812-3-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: "Matthew Wilcox (Oracle)"

Some filesystems want to be able to ensure that folios that are added
to the page cache are at least a certain size.  Add
mapping_set_folio_min_order() to allow this level of control.

Signed-off-by: Matthew Wilcox (Oracle)
Co-developed-by: Pankaj Raghav
Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
Reviewed-by: Hannes Reinecke
---
 include/linux/pagemap.h | 100 ++++++++++++++++++++++++++++++++--------
 1 file changed, 80 insertions(+), 20 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..fc8eb9c94e9c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -202,13 +202,18 @@ enum mapping_flags {
 	AS_EXITING	= 4, 	/* final truncate in progress */
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
-	AS_LARGE_FOLIO_SUPPORT = 6,
-	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
-	AS_STABLE_WRITES,	/* must wait for writeback before modifying
+	AS_RELEASE_ALWAYS = 6,	/* Call ->release_folio(), even if no private data */
+	AS_STABLE_WRITES = 7,	/* must wait for writeback before modifying
 				   folio contents */
-	AS_UNMOVABLE,		/* The mapping cannot be moved, ever */
+	AS_FOLIO_ORDER_MIN = 8,
+	AS_FOLIO_ORDER_MAX = 13,	/* Bit 8-17 are used for FOLIO_ORDER */
+	AS_UNMOVABLE = 18,		/* The mapping cannot be moved, ever */
 };
 
+#define AS_FOLIO_ORDER_MIN_MASK 0x00001f00
+#define AS_FOLIO_ORDER_MAX_MASK 0x0003e000
+#define AS_FOLIO_ORDER_MASK (AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)
+
 /**
  * mapping_set_error - record a writeback error in the address_space
  * @mapping: the mapping in which an error should be set
@@ -344,9 +349,47 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
 	m->gfp_mask = mask;
 }
 
+/*
+ * There are some parts of the kernel which assume that PMD entries
+ * are exactly HPAGE_PMD_ORDER.  Those should be fixed, but until then,
+ * limit the maximum allocation order to PMD size.  I'm not aware of any
+ * assumptions about maximum order if THP are disabled, but 8 seems like
+ * a good order (that's 1MB if you're using 4kB pages)
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
+#else
+#define MAX_PAGECACHE_ORDER	8
+#endif
+
+/*
+ * mapping_set_folio_min_order() - Set the minimum folio order
+ * @mapping: The address_space.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size of folio the VFS can use to cache the contents
+ * of the file.  This should only be used if the filesystem needs special
+ * handling of folio sizes (ie there is something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+					       unsigned int min)
+{
+	if (min > MAX_PAGECACHE_ORDER)
+		min = MAX_PAGECACHE_ORDER;
+
+	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+			 (min << AS_FOLIO_ORDER_MIN) |
+			 (MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
+}
+
 /**
  * mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address_space.
  *
  * The filesystem should call this function in its inode constructor to
  * indicate that the VFS can use large folios to cache the contents of
@@ -357,7 +400,37 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
  */
 static inline void mapping_set_large_folios(struct address_space *mapping)
 {
-	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	mapping_set_folio_min_order(mapping, 0);
+}
+
+static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
+{
+	return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline unsigned int mapping_min_folio_order(struct address_space *mapping)
+{
+	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
+}
+
+static inline unsigned long mapping_min_folio_nrpages(struct address_space *mapping)
+{
+	return 1UL << mapping_min_folio_order(mapping);
+}
+
+/**
+ * mapping_align_start_index() - Align starting index based on the min
+ * folio order of the page cache.
+ * @mapping: The address_space.
+ *
+ * Ensure the index used is aligned to the minimum folio order when adding
+ * new folios to the page cache by rounding down to the nearest minimum
+ * folio number of pages.
+ */
+static inline pgoff_t mapping_align_start_index(struct address_space *mapping,
+						pgoff_t index)
+{
+	return round_down(index, mapping_min_folio_nrpages(mapping));
 }
 
 /*
@@ -367,7 +440,7 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
 static inline bool mapping_large_folio_support(struct address_space *mapping)
 {
 	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+		(mapping_max_folio_order(mapping) > 0);
 }
 
 static inline int filemap_nr_thps(struct address_space *mapping)
@@ -528,19 +601,6 @@ static inline void *detach_page_private(struct page *page)
 	return folio_detach_private(page_folio(page));
 }
 
-/*
- * There are some parts of the kernel which assume that PMD entries
- * are exactly HPAGE_PMD_ORDER.  Those should be fixed, but until then,
- * limit the maximum allocation order to PMD size.  I'm not aware of any
- * assumptions about maximum order if THP are disabled, but 8 seems like
- * a good order (that's 1MB if you're using 4kB pages)
- */
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
-#else
-#define MAX_PAGECACHE_ORDER	8
-#endif
-
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
 #else
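The minimum and maximum folio orders are packed into two 5-bit fields
of mapping->flags (bits 8-12 and 13-17).  The following standalone
userspace sketch, which reuses the constants from the hunk above and
assumes the !CONFIG_TRANSPARENT_HUGEPAGE fallback of
MAX_PAGECACHE_ORDER == 8, shows how the encode/decode in
mapping_set_folio_min_order(), mapping_min_folio_order() and
mapping_max_folio_order() fits together; it is an illustration, not
kernel code.

#include <stdio.h>

/* Constants taken from the pagemap.h hunk above. */
#define AS_FOLIO_ORDER_MIN	8
#define AS_FOLIO_ORDER_MAX	13
#define AS_FOLIO_ORDER_MIN_MASK	0x00001f00UL
#define AS_FOLIO_ORDER_MAX_MASK	0x0003e000UL
#define AS_FOLIO_ORDER_MASK	(AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)
/* Assumption: the non-THP fallback value. */
#define MAX_PAGECACHE_ORDER	8

/* Same bit manipulation as mapping_set_folio_min_order(). */
static unsigned long set_folio_min_order(unsigned long flags, unsigned int min)
{
	if (min > MAX_PAGECACHE_ORDER)
		min = MAX_PAGECACHE_ORDER;

	return (flags & ~AS_FOLIO_ORDER_MASK) |
	       ((unsigned long)min << AS_FOLIO_ORDER_MIN) |
	       ((unsigned long)MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
}

int main(void)
{
	/* e.g. 16 KiB minimum folios on 4 KiB pages: order 2 */
	unsigned long flags = set_folio_min_order(0, 2);
	unsigned long min = (flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
	unsigned long max = (flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;

	/* prints: min order 2, max order 8, min nrpages 4 */
	printf("min order %lu, max order %lu, min nrpages %lu\n",
	       min, max, 1UL << min);
	return 0;
}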
From patchwork Wed Mar 13 17:02:45 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591599
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de,
 mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
 Pankaj Raghav
Subject: [PATCH v3 03/11] filemap: allocate mapping_min_order folios in the page cache
Date: Wed, 13 Mar 2024 18:02:45 +0100
Message-ID: <20240313170253.2324812-4-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Luis Chamberlain

filemap_create_folio() and do_read_cache_folio() were always allocating
folios of order 0.  __filemap_get_folio() was trying to allocate higher
order folios when fgp_flags had a higher order hint set, but it would
fall back to an order-0 folio if the higher order allocation failed.
Supporting mapping_min_order implies that we guarantee each folio in
the page cache has at least an order of mapping_min_order.
When adding new folios to the page cache we must also ensure the index
used is aligned to the mapping_min_order as the page cache requires the
index to be aligned to the order of the folio.

Signed-off-by: Luis Chamberlain
Co-developed-by: Pankaj Raghav
Signed-off-by: Pankaj Raghav
---
 mm/filemap.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index a1cb3ea55fb6..57889f206829 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -849,6 +849,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+			folio);
 	mapping_set_update(&xas, mapping);
 
 	if (!huge) {
@@ -1886,8 +1888,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		folio_wait_stable(folio);
 no_page:
 	if (!folio && (fgp_flags & FGP_CREAT)) {
-		unsigned order = FGF_GET_ORDER(fgp_flags);
+		unsigned int min_order = mapping_min_folio_order(mapping);
+		unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
 		int err;
+		index = mapping_align_start_index(mapping, index);
 
 		if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
 			gfp |= __GFP_WRITE;
@@ -1927,7 +1931,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 				break;
 			folio_put(folio);
 			folio = NULL;
-		} while (order-- > 0);
+		} while (order-- > min_order);
 
 		if (err == -EEXIST)
 			goto repeat;
@@ -2416,13 +2420,16 @@ static int filemap_update_page(struct kiocb *iocb,
 }
 
 static int filemap_create_folio(struct file *file,
-		struct address_space *mapping, pgoff_t index,
+		struct address_space *mapping, loff_t pos,
 		struct folio_batch *fbatch)
 {
 	struct folio *folio;
 	int error;
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	pgoff_t index;
 
-	folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+	folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+				    min_order);
 	if (!folio)
 		return -ENOMEM;
 
@@ -2440,6 +2447,8 @@ static int filemap_create_folio(struct file *file,
 	 * well to keep locking rules simple.
 	 */
 	filemap_invalidate_lock_shared(mapping);
+	/* index in PAGE units but aligned to min_order number of pages. */
+	index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
 	error = filemap_add_folio(mapping, folio, index,
 			mapping_gfp_constraint(mapping, GFP_KERNEL));
 	if (error == -EEXIST)
@@ -2500,8 +2509,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
 	if (!folio_batch_count(fbatch)) {
 		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
 			return -EAGAIN;
-		err = filemap_create_folio(filp, mapping,
-				iocb->ki_pos >> PAGE_SHIFT, fbatch);
+		err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
 		if (err == AOP_TRUNCATED_PAGE)
 			goto retry;
 		return err;
@@ -3662,9 +3670,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
 repeat:
 	folio = filemap_get_folio(mapping, index);
 	if (IS_ERR(folio)) {
-		folio = filemap_alloc_folio(gfp, 0);
+		folio = filemap_alloc_folio(gfp,
+					    mapping_min_folio_order(mapping));
 		if (!folio)
 			return ERR_PTR(-ENOMEM);
+		index = mapping_align_start_index(mapping, index);
 		err = filemap_add_folio(mapping, folio, index, gfp);
 		if (unlikely(err)) {
 			folio_put(folio);
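The index handling in this patch reduces to two pieces of arithmetic:
mapping_align_start_index() rounds a page index down to a multiple of
the minimum folio size, and filemap_create_folio() derives an
already-aligned index directly from the byte position.  The userspace
sketch below illustrates both (it assumes 4 KiB pages and an order-2,
i.e. 16 KiB, minimum folio; it is not kernel code):

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumption: 4 KiB pages */

typedef unsigned long pgoff_t;
typedef long long loff_t;

/* round_down() equivalent for a power-of-two multiple. */
static pgoff_t align_start_index(pgoff_t index, unsigned int min_order)
{
	return index & ~((1UL << min_order) - 1);
}

/*
 * The computation used in filemap_create_folio(): take the page index
 * of pos and truncate it to a min_order-aligned boundary in one go.
 */
static pgoff_t index_for_pos(loff_t pos, unsigned int min_order)
{
	return (pos >> (PAGE_SHIFT + min_order)) << min_order;
}

int main(void)
{
	unsigned int min_order = 2;		/* 16 KiB minimum folios */
	loff_t pos = 37LL * 4096;		/* a byte offset inside page 37 */

	/* Both paths land on the same aligned index (36). */
	printf("align_start_index(37) = %lu\n", align_start_index(37, min_order));
	printf("index_for_pos(%lld)  = %lu\n", pos, index_for_pos(pos, min_order));
	return 0;
}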
From patchwork Wed Mar 13 17:02:46 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591601
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de,
 mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
 Pankaj Raghav
Subject: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()
Date: Wed, 13 Mar 2024 18:02:46 +0100
Message-ID: <20240313170253.2324812-5-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Hannes Reinecke

Rework the loop in page_cache_ra_unbounded() to advance with the number
of pages in a folio instead of just one page at a time.

Signed-off-by: Hannes Reinecke
Co-developed-by: Pankaj Raghav
Signed-off-by: Pankaj Raghav
Acked-by: Darrick J. Wong
---
 mm/readahead.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 369c70e2be42..37b938f4b54f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -208,7 +208,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	struct address_space *mapping = ractl->mapping;
 	unsigned long index = readahead_index(ractl);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	unsigned long i;
+	unsigned long i = 0;
 
 	/*
 	 * Partway through the readahead operation, we will have added
@@ -226,7 +226,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
-	for (i = 0; i < nr_to_read; i++) {
+	while (i < nr_to_read) {
 		struct folio *folio = xa_load(&mapping->i_pages, index + i);
 
 		if (folio && !xa_is_value(folio)) {
@@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
			 */
 			read_pages(ractl);
-			ractl->_index++;
-			i = ractl->_index + ractl->_nr_pages - index - 1;
+			ractl->_index += folio_nr_pages(folio);
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 
@@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			folio_put(folio);
 			read_pages(ractl);
 			ractl->_index++;
-			i = ractl->_index + ractl->_nr_pages - index - 1;
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 		if (i == nr_to_read - lookahead_size)
 			folio_set_readahead(folio);
 		ractl->_workingset |= folio_test_workingset(folio);
-		ractl->_nr_pages++;
+		ractl->_nr_pages += folio_nr_pages(folio);
+		i += folio_nr_pages(folio);
 	}
 
 	/*
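The effect of the rework is that the preallocation loop no longer steps
one page at a time but by however many pages the folio found at
index + i covers.  The loop shape can be shown in isolation; the sketch
below is plain userspace C with made-up folio sizes, not kernel code:

#include <stdio.h>

/*
 * Example data only: the number of pages covered starting at each slot.
 * A value of 4 at slot 0 stands for an order-2 folio occupying slots 0-3.
 */
static unsigned long folio_pages_at(unsigned long i)
{
	return (i < 4) ? 4 : 1;	/* one order-2 folio, then single pages */
}

int main(void)
{
	unsigned long nr_to_read = 8;
	unsigned long i = 0;

	/* Same shape as the reworked loop: advance by folio_nr_pages(). */
	while (i < nr_to_read) {
		unsigned long nr = folio_pages_at(i);

		printf("visit slot %lu, folio of %lu page(s)\n", i, nr);
		i += nr;
	}
	return 0;
}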
From patchwork Wed Mar 13 17:02:47 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591602
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de,
 mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
 Pankaj Raghav
Subject: [PATCH v3 05/11] readahead: allocate folios with mapping_min_order in readahead
Date: Wed, 13 Mar 2024 18:02:47 +0100
Message-ID: <20240313170253.2324812-6-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Pankaj Raghav

page_cache_ra_unbounded() was allocating single pages (order-0 folios)
whenever no folio was found at an index.
Allocate mapping_min_order folios instead, as we need to guarantee the
minimum order if it is set.

When read_pages() is triggered and a page is already present, check for
truncation and move the ractl->_index by mapping_min_nrpages if that
folio was truncated.  This is done to ensure we keep the alignment
requirement while adding a folio to the page cache.

page_cache_ra_order() tries to allocate folios with a higher order if
the index aligns with that order.  Modify it so that the order does not
go below the min_order requirement of the page cache.

When adding new folios to the page cache we must also ensure the index
used is aligned to the mapping_min_order as the page cache requires the
index to be aligned to the order of the folio.

readahead_expand() is called from readahead aops to extend the range of
the readahead, so this function can assume ractl->_index to be aligned
with min_order.

Signed-off-by: Pankaj Raghav
---
 mm/readahead.c | 86 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 72 insertions(+), 14 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 37b938f4b54f..650834c033f0 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -206,9 +206,10 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		unsigned long nr_to_read, unsigned long lookahead_size)
 {
 	struct address_space *mapping = ractl->mapping;
-	unsigned long index = readahead_index(ractl);
+	unsigned long index = readahead_index(ractl), ra_folio_index;
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	unsigned long i = 0;
+	unsigned long i = 0, mark;
+	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
 
 	/*
 	 * Partway through the readahead operation, we will have added
@@ -223,6 +224,22 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	unsigned int nofs = memalloc_nofs_save();
 
 	filemap_invalidate_lock_shared(mapping);
+	index = mapping_align_start_index(mapping, index);
+
+	/*
+	 * As iterator `i` is aligned to min_nrpages, round_up the
+	 * difference between nr_to_read and lookahead_size to mark the
+	 * index that only has lookahead or "async_region" to set the
+	 * readahead flag.
+	 */
+	ra_folio_index = round_up(readahead_index(ractl) + nr_to_read - lookahead_size,
+				  min_nrpages);
+	mark = ra_folio_index - index;
+	if (index != readahead_index(ractl)) {
+		nr_to_read += readahead_index(ractl) - index;
+		ractl->_index = index;
+	}
+
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
@@ -230,6 +247,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		struct folio *folio = xa_load(&mapping->i_pages, index + i);
 
 		if (folio && !xa_is_value(folio)) {
+			long nr_pages = folio_nr_pages(folio);
+
 			/*
 			 * Page already present? Kick off the current batch
 			 * of contiguous pages before continuing with the
@@ -239,23 +258,35 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
 			 */
 			read_pages(ractl);
-			ractl->_index += folio_nr_pages(folio);
+
+			/*
+			 * Move the ractl->_index by at least min_pages
+			 * if the folio got truncated to respect the
+			 * alignment constraint in the page cache.
+			 *
+			 */
+			if (mapping != folio->mapping)
+				nr_pages = min_nrpages;
+
+			VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
+			ractl->_index += nr_pages;
 			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask,
+					    mapping_min_folio_order(mapping));
 		if (!folio)
 			break;
 		if (filemap_add_folio(mapping, folio, index + i,
 					gfp_mask) < 0) {
 			folio_put(folio);
 			read_pages(ractl);
-			ractl->_index++;
+			ractl->_index += min_nrpages;
 			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
-		if (i == nr_to_read - lookahead_size)
+		if (i == mark)
 			folio_set_readahead(folio);
 		ractl->_workingset |= folio_test_workingset(folio);
 		ractl->_nr_pages += folio_nr_pages(folio);
@@ -489,12 +520,18 @@ void page_cache_ra_order(struct readahead_control *ractl,
 {
 	struct address_space *mapping = ractl->mapping;
 	pgoff_t index = readahead_index(ractl);
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
 	int err = 0;
 	gfp_t gfp = readahead_gfp_mask(mapping);
+	unsigned int min_ra_size = max(4, mapping_min_folio_nrpages(mapping));
 
-	if (!mapping_large_folio_support(mapping) || ra->size < 4)
+	/*
+	 * Fallback when size < min_nrpages as each folio should be
+	 * at least min_nrpages anyway.
+	 */
+	if (!mapping_large_folio_support(mapping) || ra->size < min_ra_size)
 		goto fallback;
 
 	limit = min(limit, index + ra->size - 1);
@@ -505,9 +542,19 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		new_order = MAX_PAGECACHE_ORDER;
 		while ((1 << new_order) > ra->size)
 			new_order--;
+		if (new_order < min_order)
+			new_order = min_order;
 	}
 
 	filemap_invalidate_lock_shared(mapping);
+	/*
+	 * If the new_order is greater than min_order and index is
+	 * already aligned to new_order, then this will be noop as index
+	 * aligned to new_order should also be aligned to min_order.
+	 */
+	ractl->_index = mapping_align_start_index(mapping, index);
+	index = readahead_index(ractl);
+
 	while (index <= limit) {
 		unsigned int order = new_order;
 
@@ -515,7 +562,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		if (index & ((1UL << order) - 1))
 			order = __ffs(index);
 		/* Don't allocate pages past EOF */
-		while (index + (1UL << order) - 1 > limit)
+		while (order > min_order && index + (1UL << order) - 1 > limit)
 			order--;
 		err = ra_alloc_folio(ractl, index, mark, order, gfp);
 		if (err)
@@ -778,8 +825,15 @@ void readahead_expand(struct readahead_control *ractl,
 	struct file_ra_state *ra = ractl->ra;
 	pgoff_t new_index, new_nr_pages;
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	unsigned long min_nrpages = mapping_min_folio_nrpages(mapping);
+	unsigned int min_order = mapping_min_folio_order(mapping);
 
 	new_index = new_start / PAGE_SIZE;
+	/*
+	 * Readahead code should have aligned the ractl->_index to
+	 * min_nrpages before calling readahead aops.
+	 */
+	VM_BUG_ON(!IS_ALIGNED(ractl->_index, min_nrpages));
 
 	/* Expand the leading edge downwards */
 	while (ractl->_index > new_index) {
@@ -789,9 +843,11 @@ void readahead_expand(struct readahead_control *ractl,
 		if (folio && !xa_is_value(folio))
 			return; /* Folio apparently present */
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask, min_order);
 		if (!folio)
 			return;
+
+		index = mapping_align_start_index(mapping, index);
 		if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
 			folio_put(folio);
 			return;
@@ -801,7 +857,7 @@ void readahead_expand(struct readahead_control *ractl,
 			ractl->_workingset = true;
 			psi_memstall_enter(&ractl->_pflags);
 		}
-		ractl->_nr_pages++;
+		ractl->_nr_pages += min_nrpages;
 		ractl->_index = folio->index;
 	}
 
@@ -816,9 +872,11 @@ void readahead_expand(struct readahead_control *ractl,
 		if (folio && !xa_is_value(folio))
 			return; /* Folio apparently present */
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask, min_order);
 		if (!folio)
 			return;
+
+		index = mapping_align_start_index(mapping, index);
 		if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
 			folio_put(folio);
 			return;
@@ -828,10 +886,10 @@ void readahead_expand(struct readahead_control *ractl,
 			ractl->_workingset = true;
 			psi_memstall_enter(&ractl->_pflags);
 		}
-		ractl->_nr_pages++;
+		ractl->_nr_pages += min_nrpages;
 		if (ra) {
-			ra->size++;
-			ra->async_size++;
+			ra->size += min_nrpages;
+			ra->async_size += min_nrpages;
 		}
 	}
 }
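The trickiest part of the page_cache_ra_unbounded() change is the mark
computation: the start index is rounded down to a min_nrpages boundary,
and the point at which the readahead flag should be set is rounded up
to the same granularity, so the `i == mark` comparison still fires even
though `i` now advances in folio-sized steps.  The small userspace
sketch below mirrors that arithmetic with arbitrary example values; it
is an illustration, not kernel code:

#include <stdio.h>

static unsigned long round_up_to(unsigned long x, unsigned long multiple)
{
	return ((x + multiple - 1) / multiple) * multiple;
}

static unsigned long round_down_to(unsigned long x, unsigned long multiple)
{
	return (x / multiple) * multiple;
}

int main(void)
{
	unsigned long min_nrpages = 4;	/* e.g. order-2 minimum folios */
	unsigned long ra_index = 10;	/* requested readahead start */
	unsigned long nr_to_read = 16;
	unsigned long lookahead_size = 8;

	/* Mirrors the setup at the top of page_cache_ra_unbounded(). */
	unsigned long index = round_down_to(ra_index, min_nrpages);
	unsigned long ra_folio_index =
		round_up_to(ra_index + nr_to_read - lookahead_size, min_nrpages);
	unsigned long mark = ra_folio_index - index;

	nr_to_read += ra_index - index;	/* grow the window by the rounding */

	/* prints: aligned index 8, nr_to_read 18, mark 12 */
	printf("aligned index %lu, nr_to_read %lu, mark %lu\n",
	       index, nr_to_read, mark);
	return 0;
}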

From patchwork Wed Mar 13 17:02:48 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591603
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org, Pankaj Raghav
Subject: [PATCH v3 06/11] readahead: round up file_ra_state->ra_pages to mapping_min_nrpages
Date: Wed, 13 Mar 2024 18:02:48 +0100
Message-ID: <20240313170253.2324812-7-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Luis Chamberlain

As we will be adding multiples of mapping_min_nrpages to the page cache,
initialize file_ra_state->ra_pages with bdi->ra_pages rounded up to the
nearest multiple of mapping_min_nrpages.

Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
---
 mm/readahead.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 650834c033f0..50194fddecf1 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -138,7 +138,8 @@
 void
 file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping)
 {
-	ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
+	ra->ra_pages = round_up(inode_to_bdi(mapping->host)->ra_pages,
+				mapping_min_folio_nrpages(mapping));
 	ra->prev_pos = -1;
 }
 EXPORT_SYMBOL_GPL(file_ra_state_init);
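
For illustration only, a standalone sketch of the rounding this patch performs, assuming min_nrpages is a power of two (it is, being 1 << min_order). round_up_pow2() stands in for the kernel's round_up() macro.

	/* Userspace illustration of rounding the readahead window up to the
	 * next multiple of a power-of-two min_nrpages. */
	#include <stdio.h>

	static unsigned long round_up_pow2(unsigned long x, unsigned long mult)
	{
		return (x + mult - 1) & ~(mult - 1);
	}

	int main(void)
	{
		/* e.g. ra_pages = 32 pages (128k) with 16k blocks (min_nrpages = 4) */
		printf("%lu\n", round_up_pow2(32, 4));	/* 32: already a multiple */
		printf("%lu\n", round_up_pow2(30, 4));	/* 32: rounded up */
		return 0;
	}
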

From patchwork Wed Mar 13 17:02:49 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591604
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org, Pankaj Raghav
Subject: [PATCH v3 07/11] mm: do not split a folio if it has minimum folio order requirement
Date: Wed, 13 Mar 2024 18:02:49 +0100
Message-ID: <20240313170253.2324812-8-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Pankaj Raghav

As we don't have a way to split a folio to any given lower folio order
yet, avoid splitting the folio in split_huge_page_to_list() if it has a
minimum folio order requirement.

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
Reviewed-by: Hannes Reinecke
---
 mm/huge_memory.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 81fd1ba57088..6ec3417638a1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3030,6 +3030,19 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			goto out;
 		}
 
+		/*
+		 * Do not split if mapping has minimum folio order
+		 * requirement.
+		 *
+		 * XXX: Once we have support for splitting to any lower
+		 * folio order, then it could be split based on the
+		 * min_folio_order.
+		 */
+		if (mapping_min_folio_order(mapping)) {
+			ret = -EAGAIN;
+			goto out;
+		}
+
 		gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 							GFP_RECLAIM_MASK);
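
As a purely illustrative aside on the XXX above: splitting an order-`order` folio down to order-`new_order` pieces yields 1 << (order - new_order) folios, and with a minimum folio order the target order could never go below min_folio_order. A tiny standalone sketch of that arithmetic:

	/* Illustrative arithmetic only, not kernel code. */
	#include <stdio.h>

	static unsigned long pieces_after_split(unsigned int order,
						unsigned int new_order)
	{
		return 1UL << (order - new_order);
	}

	int main(void)
	{
		/* order-4 folio on a mapping with min_folio_order = 2 */
		printf("%lu\n", pieces_after_split(4, 2));	/* 4 order-2 folios */
		return 0;
	}
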

From patchwork Wed Mar 13 17:02:50 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591605
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org, Pankaj Raghav
Subject: [PATCH v3 08/11] iomap: fix iomap_dio_zero() for fs bs > system page size
Date: Wed, 13 Mar 2024 18:02:50 +0100
Message-ID: <20240313170253.2324812-9-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Pankaj Raghav

iomap_dio_zero() will pad a fs block with zeroes if the direct IO size
is smaller than the fs block size. iomap_dio_zero() has an implicit
assumption that the fs block size is smaller than the page size, which
is true for most filesystems at the moment.

If the block size is larger than the page size, this will send the
contents of the page next to the zero page (as len > PAGE_SIZE) to the
underlying block device, causing FS corruption.

iomap is generic infrastructure and it should not make any assumptions
about the fs block size and the page size of the system.

Signed-off-by: Pankaj Raghav
Reviewed-by: Darrick J. Wong
---
 fs/iomap/direct-io.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index bcd3f8cf5ea4..04f6c5548136 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -239,14 +239,23 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
 	struct page *page = ZERO_PAGE(0);
 	struct bio *bio;
 
-	bio = iomap_dio_alloc_bio(iter, dio, 1, REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
+	WARN_ON_ONCE(len > (BIO_MAX_VECS * PAGE_SIZE));
+
+	bio = iomap_dio_alloc_bio(iter, dio, BIO_MAX_VECS,
+				  REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
 	fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
 				  GFP_KERNEL);
+
 	bio->bi_iter.bi_sector = iomap_sector(&iter->iomap, pos);
 	bio->bi_private = dio;
 	bio->bi_end_io = iomap_dio_bio_end_io;
 
-	__bio_add_page(bio, page, len, 0);
+	while (len) {
+		unsigned int io_len = min_t(unsigned int, len, PAGE_SIZE);
+
+		__bio_add_page(bio, page, io_len, 0);
+		len -= io_len;
+	}
 	iomap_dio_submit_bio(iter, dio, bio, pos);
 }
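
A standalone sketch of the chunking loop above, in userspace C with a hypothetical helper name: the zero-fill of len bytes is emitted as repeated bvecs that all reference the same zero page, so no buffer larger than PAGE_SIZE is ever needed.

	/* Counts how many page-sized bvecs a sub-block zero-fill would add. */
	#include <stdio.h>

	#define PAGE_SIZE	4096u

	static unsigned int zero_fill_chunks(unsigned int len)
	{
		unsigned int chunks = 0;

		while (len) {
			unsigned int io_len = len < PAGE_SIZE ? len : PAGE_SIZE;

			/* the patch does __bio_add_page(bio, ZERO_PAGE(0), io_len, 0) here */
			len -= io_len;
			chunks++;
		}
		return chunks;
	}

	int main(void)
	{
		/* 64k block, 4k pages: zeroing a whole block needs 16 bvecs */
		printf("%u\n", zero_fill_chunks(16 * PAGE_SIZE));
		return 0;
	}
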

From patchwork Wed Mar 13 17:02:51 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591607
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org, Pankaj Raghav
Subject: [PATCH v3 09/11] xfs: expose block size in stat
Date: Wed, 13 Mar 2024 18:02:51 +0100
Message-ID: <20240313170253.2324812-10-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Pankaj Raghav

For block size larger than page size, the unit of efficient IO is the
block size, not the page size. Leaving stat() to report PAGE_SIZE as the
block size causes test programs like fsx to issue illegal ranges for
operations that require block size alignment (e.g. fallocate() insert
range). Hence update the preferred IO size to reflect the block size in
this case.

This change is based on a patch originally from Dave Chinner.[1]

[1] https://lwn.net/ml/linux-fsdevel/20181107063127.3902-16-david@fromorbit.com/

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
---
 fs/xfs/xfs_iops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index a0d77f5f512e..7ee829f7d708 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -543,7 +543,7 @@ xfs_stat_blksize(
 		return 1U << mp->m_allocsize_log;
 	}
 
-	return PAGE_SIZE;
+	return max_t(uint32_t, PAGE_SIZE, mp->m_sb.sb_blocksize);
 }
 
 STATIC int
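
A quick userspace check of the behaviour this patch changes; the mount point and file name below are placeholders. On an XFS filesystem with 16k blocks, st_blksize should now report 16384 rather than the 4k page size.

	#include <stdio.h>
	#include <sys/stat.h>

	int main(void)
	{
		struct stat st;

		/* hypothetical mount point of a 16k block size filesystem */
		if (stat("/mnt/xfs-16k/testfile", &st) == 0)
			printf("st_blksize = %ld\n", (long)st.st_blksize);
		else
			perror("stat");
		return 0;
	}
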

From patchwork Wed Mar 13 17:02:52 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591606
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org, Pankaj Raghav
Subject: [PATCH v3 10/11] xfs: make the calculation generic in xfs_sb_validate_fsb_count()
Date: Wed, 13 Mar 2024 18:02:52 +0100
Message-ID: <20240313170253.2324812-11-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Pankaj Raghav

Instead of assuming that PAGE_SHIFT is always larger than the blocklog,
make the calculation generic so that the page cache count can be
calculated correctly for LBS (large block sizes).

Signed-off-by: Pankaj Raghav
---
 fs/xfs/xfs_mount.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index aabb25dc3efa..9cf800586da7 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -133,9 +133,16 @@ xfs_sb_validate_fsb_count(
 {
 	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
+	uint64_t max_index;
+	uint64_t max_bytes;
+
+	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
+		return -EFBIG;
 
 	/* Limited by ULONG_MAX of page cache index */
-	if (nblocks >> (PAGE_SHIFT - sbp->sb_blocklog) > ULONG_MAX)
+	max_index = max_bytes >> PAGE_SHIFT;
+
+	if (max_index > ULONG_MAX)
 		return -EFBIG;
 	return 0;
 }
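
A standalone sketch of the overflow-safe form used above, with validate_fsb_count() as a stand-in for the XFS function (the kernel uses check_shl_overflow() for the first test): shift first with an explicit overflow check, then compare the resulting page cache index against ULONG_MAX, without assuming PAGE_SHIFT >= blocklog.

	#include <limits.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12

	static int validate_fsb_count(uint64_t nblocks, unsigned int blocklog)
	{
		uint64_t max_bytes;

		/* would nblocks << blocklog overflow 64 bits? */
		if (nblocks > (UINT64_MAX >> blocklog))
			return -1;			/* -EFBIG in the kernel */
		max_bytes = nblocks << blocklog;

		/* limited by ULONG_MAX of the page cache index; only
		 * meaningful where unsigned long is 32-bit */
		if ((max_bytes >> PAGE_SHIFT) > ULONG_MAX)
			return -1;
		return 0;
	}

	int main(void)
	{
		/* 2^40 blocks of 64k (blocklog = 16) fits on 64-bit */
		printf("%d\n", validate_fsb_count(1ULL << 40, 16));
		/* this one overflows the byte count */
		printf("%d\n", validate_fsb_count(UINT64_MAX, 16));
		return 0;
	}
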

From patchwork Wed Mar 13 17:02:53 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13591608
From: "Pankaj Raghav (Samsung)"
To: willy@infradead.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: gost.dev@samsung.com, chandan.babu@oracle.com, hare@suse.de, mcgrof@kernel.org, djwong@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@fromorbit.com, akpm@linux-foundation.org, Pankaj Raghav
Subject: [PATCH v3 11/11] xfs: enable block size larger than page size support
Date: Wed, 13 Mar 2024 18:02:53 +0100
Message-ID: <20240313170253.2324812-12-kernel@pankajraghav.com>
In-Reply-To: <20240313170253.2324812-1-kernel@pankajraghav.com>
References: <20240313170253.2324812-1-kernel@pankajraghav.com>

From: Pankaj Raghav

The page cache now has the ability to enforce a minimum order when
allocating a folio, which is a prerequisite for supporting block sizes
larger than the page size.

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
---
 fs/xfs/libxfs/xfs_ialloc.c |  5 +++++
 fs/xfs/libxfs/xfs_shared.h |  3 +++
 fs/xfs/xfs_icache.c        |  6 ++++--
 fs/xfs/xfs_mount.c         |  1 -
 fs/xfs/xfs_super.c         | 10 ++--------
 5 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 2361a22035b0..c040bd6271fd 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -2892,6 +2892,11 @@ xfs_ialloc_setup_geometry(
 		igeo->ialloc_align = mp->m_dalign;
 	else
 		igeo->ialloc_align = 0;
+
+	if (mp->m_sb.sb_blocksize > PAGE_SIZE)
+		igeo->min_folio_order = mp->m_sb.sb_blocklog - PAGE_SHIFT;
+	else
+		igeo->min_folio_order = 0;
 }
 
 /* Compute the location of the root directory inode that is laid out by mkfs. */
diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
index 4220d3584c1b..67ed406e7a81 100644
--- a/fs/xfs/libxfs/xfs_shared.h
+++ b/fs/xfs/libxfs/xfs_shared.h
@@ -188,6 +188,9 @@ struct xfs_ino_geometry {
 	/* precomputed value for di_flags2 */
 	uint64_t	new_diflags2;
 
+	/* minimum folio order of a page cache allocation */
+	unsigned int	min_folio_order;
+
 };
 
 #endif /* __XFS_SHARED_H__ */
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index dba514a2c84d..a1857000e2cd 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -88,7 +88,8 @@ xfs_inode_alloc(
 	/* VFS doesn't initialise i_mode or i_state! */
 	VFS_I(ip)->i_mode = 0;
 	VFS_I(ip)->i_state = 0;
-	mapping_set_large_folios(VFS_I(ip)->i_mapping);
+	mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
+				    M_IGEO(mp)->min_folio_order);
 
 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -323,7 +324,8 @@ xfs_reinit_inode(
 	inode->i_rdev = dev;
 	inode->i_uid = uid;
 	inode->i_gid = gid;
-	mapping_set_large_folios(inode->i_mapping);
+	mapping_set_folio_min_order(inode->i_mapping,
+				    M_IGEO(mp)->min_folio_order);
 	return error;
 }
 
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 9cf800586da7..a77e927807e5 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -131,7 +131,6 @@ xfs_sb_validate_fsb_count(
 	xfs_sb_t	*sbp,
 	uint64_t	nblocks)
 {
-	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
 	uint64_t max_index;
 	uint64_t max_bytes;
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 98401de832ee..4f5f4cb772d4 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1624,16 +1624,10 @@ xfs_fs_fill_super(
 		goto out_free_sb;
 	}
 
-	/*
-	 * Until this is fixed only page-sized or smaller data blocks work.
-	 */
 	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
 		xfs_warn(mp,
-	"File system with blocksize %d bytes. "
-	"Only pagesize (%ld) or less will currently work.",
-				mp->m_sb.sb_blocksize, PAGE_SIZE);
-		error = -ENOSYS;
-		goto out_free_sb;
+"EXPERIMENTAL: Filesystem with Large Block Size (%d bytes) enabled.",
+			mp->m_sb.sb_blocksize);
 	}
 
 	/* Ensure this filesystem fits in the page cache limits */