From patchwork Fri Sep 15 18:38:26 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387446
From: Pankaj Raghav
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: p.raghav@samsung.com, david@fromorbit.com, da.gomez@samsung.com,
    akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
    willy@infradead.org, djwong@kernel.org, linux-mm@kvack.org,
    chandan.babu@oracle.com, mcgrof@kernel.org, gost.dev@samsung.com
Subject: [RFC 01/23] fs: Allow fine-grained control of folio sizes
Date: Fri, 15 Sep 2023 20:38:26 +0200
Message-Id: <20230915183848.1018717-2-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: "Matthew Wilcox (Oracle)"

Some filesystems want to be able to limit the maximum size of folios,
and some want to be able to ensure that folios are at least a certain
size. Add mapping_set_folio_orders() to allow this level of control.
The max folio order parameter is currently ignored; the maximum is
always set to MAX_PAGECACHE_ORDER.
[Pankaj]: added mapping_min_folio_order(), changed MAX_MASK to 0x0003e000
Signed-off-by: Pankaj Raghav
[mcgrof: rebase in light of "mm, netfs, fscache: stop read optimisation
 when folio removed from pagecache" which adds AS_RELEASE_ALWAYS]
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 78 +++++++++++++++++++++++++++++++----------
 1 file changed, 60 insertions(+), 18 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 759b29d9a69a..d2b5308cc59e 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -202,10 +202,16 @@ enum mapping_flags {
 	AS_EXITING = 4, 	/* final truncate in progress */
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
-	AS_LARGE_FOLIO_SUPPORT = 6,
-	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
+	AS_RELEASE_ALWAYS = 6,	/* Call ->release_folio(), even if no private data */
+	AS_FOLIO_ORDER_MIN = 8,
+	AS_FOLIO_ORDER_MAX = 13,
+	/* 8-17 are used for FOLIO_ORDER */
 };

+#define AS_FOLIO_ORDER_MIN_MASK 0x00001f00
+#define AS_FOLIO_ORDER_MAX_MASK 0x0003e000
+#define AS_FOLIO_ORDER_MASK (AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)
+
 /**
  * mapping_set_error - record a writeback error in the address_space
  * @mapping: the mapping in which an error should be set
@@ -310,6 +316,46 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
 	m->gfp_mask = mask;
 }

+/*
+ * There are some parts of the kernel which assume that PMD entries
+ * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then,
+ * limit the maximum allocation order to PMD size. I'm not aware of any
+ * assumptions about maximum order if THP are disabled, but 8 seems like
+ * a good order (that's 1MB if you're using 4kB pages)
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
+#else
+#define MAX_PAGECACHE_ORDER	8
+#endif
+
+/*
+ * mapping_set_folio_orders() - Set the range of folio sizes supported.
+ * @mapping: The file.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max: Maximum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which sizes of folio the VFS can use to cache the contents
+ * of the file. This should only be used if the filesystem needs special
+ * handling of folio sizes (ie there is something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_orders(struct address_space *mapping,
+		unsigned int min, unsigned int max)
+{
+	/*
+	 * XXX: max is ignored as only minimum folio order is supported
+	 * currently.
+	 */
+	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+			 (min << AS_FOLIO_ORDER_MIN) |
+			 (MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
+}
+
 /**
  * mapping_set_large_folios() - Indicate the file supports large folios.
  * @mapping: The file.
@@ -323,7 +369,17 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
  */
 static inline void mapping_set_large_folios(struct address_space *mapping)
 {
-	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	mapping_set_folio_orders(mapping, 0, MAX_PAGECACHE_ORDER);
+}
+
+static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
+{
+	return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline unsigned int mapping_min_folio_order(struct address_space *mapping)
+{
+	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
 }

 /*
@@ -332,8 +388,7 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
  */
 static inline bool mapping_large_folio_support(struct address_space *mapping)
 {
-	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	return mapping_max_folio_order(mapping) > 0;
 }

 static inline int filemap_nr_thps(struct address_space *mapping)
@@ -494,19 +549,6 @@ static inline void *detach_page_private(struct page *page)
 	return folio_detach_private(page_folio(page));
 }

-/*
- * There are some parts of the kernel which assume that PMD entries
- * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then,
- * limit the maximum allocation order to PMD size. I'm not aware of any
- * assumptions about maximum order if THP are disabled, but 8 seems like
- * a good order (that's 1MB if you're using 4kB pages)
- */
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
-#else
-#define MAX_PAGECACHE_ORDER	8
-#endif
-
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
 #else

From patchwork Fri Sep 15 18:38:27 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387456
From: Pankaj Raghav
Subject: [RFC 02/23] pagemap: use mapping_min_order in fgf_set_order()
Date: Fri, 15 Sep 2023 20:38:27 +0200
Message-Id: <20230915183848.1018717-3-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Pankaj Raghav

fgf_set_order() encodes the optimal folio order in the fgp flags. Set it
to at least the mapping_min_order of the page cache. The old behaviour
is kept when min_order is not set.
Signed-off-by: Pankaj Raghav
---
 fs/iomap/buffered-io.c  | 2 +-
 include/linux/pagemap.h | 9 +++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ae8673ce08b1..d4613fd550c4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -549,7 +549,7 @@ struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos, size_t len)
 	if (iter->flags & IOMAP_NOWAIT)
 		fgp |= FGP_NOWAIT;
-	fgp |= fgf_set_order(len);
+	fgp |= fgf_set_order(iter->inode->i_mapping, len);

 	return __filemap_get_folio(iter->inode->i_mapping, pos >> PAGE_SHIFT,
 			fgp, mapping_gfp_mask(iter->inode->i_mapping));

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d2b5308cc59e..5d392366420a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -620,6 +620,7 @@ typedef unsigned int __bitwise fgf_t;
 /**
  * fgf_set_order - Encode a length in the fgf_t flags.
+ * @mapping: address_space struct from the inode
  * @size: The suggested size of the folio to create.
  *
  * The caller of __filemap_get_folio() can use this to suggest a preferred
@@ -629,13 +630,13 @@ typedef unsigned int __bitwise fgf_t;
  * due to alignment constraints, memory pressure, or the presence of
  * other folios at nearby indices.
  */
-static inline fgf_t fgf_set_order(size_t size)
+static inline fgf_t fgf_set_order(struct address_space *mapping, size_t size)
 {
 	unsigned int shift = ilog2(size);
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	int order = max(min_order, shift - PAGE_SHIFT);

-	if (shift <= PAGE_SHIFT)
-		return 0;
-	return (__force fgf_t)((shift - PAGE_SHIFT) << 26);
+	return (__force fgf_t)((order) << 26);
 }

 void *filemap_get_entry(struct address_space *mapping, pgoff_t index);

From patchwork Fri Sep 15 18:38:28 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387447
From: Pankaj Raghav
Subject: [RFC 03/23] filemap: add folio with at least mapping_min_order in __filemap_get_folio
Date: Fri, 15 Sep 2023 20:38:28 +0200
Message-Id: <20230915183848.1018717-4-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Pankaj Raghav

__filemap_get_folio() with FGP_CREAT should allocate a folio of at least
the mapping's min_order, as set with mapping_set_folio_orders(). A folio
of higher order than min_order is by definition a multiple of the
min_order size, so if an index is aligned to an order higher than
min_order, it is also aligned to the min_order.
Signed-off-by: Pankaj Raghav
---
 mm/filemap.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 8962d1255905..b1ce63143df5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1862,6 +1862,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		fgf_t fgp_flags, gfp_t gfp)
 {
 	struct folio *folio;
+	int min_order = mapping_min_folio_order(mapping);
+	int nr_of_pages = (1U << min_order);
+
+	index = round_down(index, nr_of_pages);

 repeat:
 	folio = filemap_get_entry(mapping, index);
@@ -1929,8 +1933,14 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			err = -ENOMEM;
 			if (order == 1)
 				order = 0;
+			if (order < min_order)
+				order = min_order;
 			if (order > 0)
 				alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
+
+			if (min_order)
+				VM_BUG_ON(index & ((1UL << order) - 1));
+
 			folio = filemap_alloc_folio(alloc_gfp, order);
 			if (!folio)
 				continue;
@@ -1944,7 +1954,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 				break;
 			folio_put(folio);
 			folio = NULL;
-		} while (order-- > 0);
+		} while (order-- > min_order);

 		if (err == -EEXIST)
 			goto repeat;

From patchwork Fri Sep 15 18:38:29 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387453
From: Pankaj Raghav
Subject: [RFC 04/23] filemap: set the order of the index in page_cache_delete_batch()
Date: Fri, 15 Sep 2023 20:38:29 +0200
Message-Id: <20230915183848.1018717-5-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
From: Luis Chamberlain

Similar to page_cache_delete(), call xas_set_order() for non-hugetlb
pages while deleting an entry from the page cache. Also add a
VM_BUG_ON_FOLIO() check that fires if the order of the folio is less
than the mapping's min_order.

Signed-off-by: Luis Chamberlain
---
 mm/filemap.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index b1ce63143df5..2c47729dc8b0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -126,6 +126,7 @@ static void page_cache_delete(struct address_space *mapping,
 					struct folio *folio, void *shadow)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	XA_STATE(xas, &mapping->i_pages, folio->index);
 	long nr = 1;
@@ -134,6 +135,7 @@ static void page_cache_delete(struct address_space *mapping,
 		xas_set_order(&xas, folio->index, folio_order(folio));
 		nr = folio_nr_pages(folio);

+	VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

 	xas_store(&xas, shadow);
@@ -276,6 +278,7 @@ void filemap_remove_folio(struct folio *folio)
 static void page_cache_delete_batch(struct address_space *mapping,
 			     struct folio_batch *fbatch)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
 	long total_pages = 0;
 	int i = 0;
@@ -304,6 +307,11 @@ static void page_cache_delete_batch(struct address_space *mapping,

 		WARN_ON_ONCE(!folio_test_locked(folio));

+		/* hugetlb pages are represented by a single entry in the xarray */
+		if (!folio_test_hugetlb(folio)) {
+			VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
+			xas_set_order(&xas, folio->index, folio_order(folio));
+		}
 		folio->mapping = NULL;
 		/* Leave folio->index set: truncation lookup relies on it */

From patchwork Fri Sep 15 18:38:30 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387444
From: Pankaj Raghav
Subject: [RFC 05/23] filemap: align index to mapping_min_order in filemap_range_has_page()
Date: Fri, 15 Sep 2023 20:38:30 +0200
Message-Id: <20230915183848.1018717-6-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Luis Chamberlain

The page cache is aligned to the mapping's min_folio_order. Use it to
align start_byte and end_byte in filemap_range_has_page().

Signed-off-by: Luis Chamberlain
---
 mm/filemap.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2c47729dc8b0..4dee24b5b61c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -477,9 +477,12 @@ EXPORT_SYMBOL(filemap_flush);
 bool filemap_range_has_page(struct address_space *mapping,
 			   loff_t start_byte, loff_t end_byte)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
+	pgoff_t index = round_down(start_byte >> PAGE_SHIFT, nrpages);
 	struct folio *folio;
-	XA_STATE(xas, &mapping->i_pages, start_byte >> PAGE_SHIFT);
-	pgoff_t max = end_byte >> PAGE_SHIFT;
+	XA_STATE(xas, &mapping->i_pages, index);
+	pgoff_t max = round_down(end_byte >> PAGE_SHIFT, nrpages);

 	if (end_byte < start_byte)
 		return false;

From patchwork Fri Sep 15 18:38:31 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387443
From: Pankaj Raghav
Subject: [RFC 06/23] mm: call xas_set_order() in replace_page_cache_folio()
Date: Fri, 15 Sep 2023 20:38:31 +0200
Message-Id: <20230915183848.1018717-7-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Luis Chamberlain

Call xas_set_order() in replace_page_cache_folio() for non-hugetlb
pages.

Signed-off-by: Luis Chamberlain
---
 mm/filemap.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index 4dee24b5b61c..33de71bfa953 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -815,12 +815,14 @@ EXPORT_SYMBOL(file_write_and_wait_range);
 void replace_page_cache_folio(struct folio *old, struct folio *new)
 {
 	struct address_space *mapping = old->mapping;
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	void (*free_folio)(struct folio *) = mapping->a_ops->free_folio;
 	pgoff_t offset = old->index;
 	XA_STATE(xas, &mapping->i_pages, offset);

 	VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
 	VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
+	VM_BUG_ON_FOLIO(folio_order(new) != folio_order(old), new);
 	VM_BUG_ON_FOLIO(new->mapping, new);

 	folio_get(new);
@@ -829,6 +831,11 @@ void replace_page_cache_folio(struct folio *old, struct folio *new)

 	mem_cgroup_migrate(old, new);

+	if (!folio_test_hugetlb(new)) {
+		VM_BUG_ON_FOLIO(folio_order(new) < min_order, new);
+		xas_set_order(&xas, offset, folio_order(new));
+	}
+
 	xas_lock_irq(&xas);
 	xas_store(&xas, new);

From patchwork Fri Sep 15 18:38:32 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387455
From: Pankaj Raghav
Subject: [RFC 07/23] filemap: align the index to mapping_min_order in __filemap_add_folio()
Date: Fri, 15 Sep 2023 20:38:32 +0200
Message-Id: <20230915183848.1018717-8-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Luis Chamberlain

Align the index to the mapping_min_order number of pages while setting
the XA_STATE and calling xas_set_order().

Signed-off-by: Luis Chamberlain
---
 mm/filemap.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 33de71bfa953..15bc810bfc89 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -859,7 +859,10 @@ EXPORT_SYMBOL_GPL(replace_page_cache_folio);
 noinline int __filemap_add_folio(struct address_space *mapping,
 		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
 {
-	XA_STATE(xas, &mapping->i_pages, index);
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nr_of_pages = (1U << min_order);
+	pgoff_t rounded_index = round_down(index, nr_of_pages);
+	XA_STATE(xas, &mapping->i_pages, rounded_index);
 	int huge = folio_test_hugetlb(folio);
 	bool charged = false;
 	long nr = 1;
@@ -875,8 +878,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 		charged = true;
 	}

-	VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
-	xas_set_order(&xas, index, folio_order(folio));
+	VM_BUG_ON_FOLIO(rounded_index & (folio_nr_pages(folio) - 1), folio);
+	xas_set_order(&xas, rounded_index, folio_order(folio));
 	nr = folio_nr_pages(folio);

 	gfp &= GFP_RECLAIM_MASK;
@@ -913,6 +916,7 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 			}
 		}

+		VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
 		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;

From patchwork Fri Sep 15 18:38:33 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387442
From: Pankaj Raghav
Subject: [RFC 08/23] filemap: align the index to mapping_min_order in filemap_get_folios_tag()
Date: Fri, 15 Sep 2023 20:38:33 +0200
Message-Id: <20230915183848.1018717-9-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Luis Chamberlain

Align the index to the mapping_min_order number of pages while setting
the XA_STATE in filemap_get_folios_tag().
Signed-off-by: Luis Chamberlain
---
 mm/filemap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 15bc810bfc89..21e1341526ab 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2280,7 +2280,9 @@ EXPORT_SYMBOL(filemap_get_folios_contig);
 unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
 			pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch)
 {
-	XA_STATE(xas, &mapping->i_pages, *start);
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
+	XA_STATE(xas, &mapping->i_pages, round_down(*start, nrpages));
 	struct folio *folio;
 
 	rcu_read_lock();

From patchwork Fri Sep 15 18:38:34 2023
From: Pankaj Raghav
Subject: [RFC 09/23] filemap: use mapping_min_order while allocating folios
Date: Fri, 15 Sep 2023 20:38:34 +0200
Message-Id: <20230915183848.1018717-10-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

Allocate at least mapping_min_order when creating a new folio for the
page cache in filemap_create_folio() and do_read_cache_folio().
Signed-off-by: Pankaj Raghav
---
 mm/filemap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 21e1341526ab..e4d46f79e95d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2502,7 +2502,8 @@ static int filemap_create_folio(struct file *file,
 	struct folio *folio;
 	int error;
 
-	folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+	folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+				    mapping_min_folio_order(mapping));
 	if (!folio)
 		return -ENOMEM;
 
@@ -3696,7 +3697,8 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
 repeat:
 	folio = filemap_get_folio(mapping, index);
 	if (IS_ERR(folio)) {
-		folio = filemap_alloc_folio(gfp, 0);
+		folio = filemap_alloc_folio(gfp,
+					    mapping_min_folio_order(mapping));
 		if (!folio)
 			return ERR_PTR(-ENOMEM);
 		err = filemap_add_folio(mapping, folio, index, gfp);

From patchwork Fri Sep 15 18:38:35 2023
From: Pankaj Raghav
Subject: [RFC 10/23] filemap: align the index to mapping_min_order in filemap_get_pages()
Date: Fri, 15 Sep 2023 20:38:35 +0200
Message-Id: <20230915183848.1018717-11-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Luis Chamberlain

Align the index to the mapping_min_order number of pages in
filemap_get_pages().

Signed-off-by: Luis Chamberlain
---
generic/451 triggers a crash in this path for bs = 16k.
 mm/filemap.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index e4d46f79e95d..8a4bbddcf575 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2558,14 +2558,17 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
 {
 	struct file *filp = iocb->ki_filp;
 	struct address_space *mapping = filp->f_mapping;
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
 	struct file_ra_state *ra = &filp->f_ra;
-	pgoff_t index = iocb->ki_pos >> PAGE_SHIFT;
+	pgoff_t index = round_down(iocb->ki_pos >> PAGE_SHIFT, nrpages);
 	pgoff_t last_index;
 	struct folio *folio;
 	int err = 0;
 
 	/* "last_index" is the index of the page beyond the end of the read */
 	last_index = DIV_ROUND_UP(iocb->ki_pos + count, PAGE_SIZE);
+	last_index = round_up(last_index, nrpages);
 retry:
 	if (fatal_signal_pending(current))
 		return -EINTR;
@@ -2581,8 +2584,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
 	if (!folio_batch_count(fbatch)) {
 		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
 			return -EAGAIN;
-		err = filemap_create_folio(filp, mapping,
-				iocb->ki_pos >> PAGE_SHIFT, fbatch);
+		err = filemap_create_folio(filp, mapping, index, fbatch);
 		if (err == AOP_TRUNCATED_PAGE)
 			goto retry;
 		return err;

From patchwork Fri Sep 15 18:38:36 2023
From: Pankaj Raghav
Subject: [RFC 11/23] filemap: align the index to mapping_min_order in do_[a]sync_mmap_readahead
Date: Fri, 15 Sep 2023 20:38:36 +0200
Message-Id: <20230915183848.1018717-12-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

Align the index to the mapping_min_order number of pages in
do_[a]sync_mmap_readahead().

Signed-off-by: Pankaj Raghav
---
 mm/filemap.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 8a4bbddcf575..3853df90f9cf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3164,7 +3164,10 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	struct file *file = vmf->vma->vm_file;
 	struct file_ra_state *ra = &file->f_ra;
 	struct address_space *mapping = file->f_mapping;
-	DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
+	int order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1U << order;
+	pgoff_t index = round_down(vmf->pgoff, nrpages);
+	DEFINE_READAHEAD(ractl, file, ra, mapping, index);
 	struct file *fpin = NULL;
 	unsigned long vm_flags = vmf->vma->vm_flags;
 	unsigned int mmap_miss;
@@ -3216,10 +3219,11 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	 */
 	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
 	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
+	ra->start = round_down(ra->start, nrpages);
 	ra->size = ra->ra_pages;
 	ra->async_size = ra->ra_pages / 4;
 	ractl._index = ra->start;
-	page_cache_ra_order(&ractl, ra, 0);
+	page_cache_ra_order(&ractl, ra, order);
 	return fpin;
 }
 
@@ -3233,7 +3237,10 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 {
 	struct file *file = vmf->vma->vm_file;
 	struct file_ra_state *ra = &file->f_ra;
-	DEFINE_READAHEAD(ractl, file, ra, file->f_mapping, vmf->pgoff);
+	int order = mapping_min_folio_order(file->f_mapping);
+	unsigned int nrpages = 1U << order;
+	pgoff_t index = round_down(vmf->pgoff, nrpages);
+	DEFINE_READAHEAD(ractl, file, ra, file->f_mapping, index);
 	struct file *fpin = NULL;
 	unsigned int mmap_miss;

From patchwork Fri Sep 15 18:38:37 2023
From: Pankaj Raghav
Subject: [RFC 12/23] filemap: align index to mapping_min_order in filemap_fault()
Date: Fri, 15 Sep 2023 20:38:37 +0200
Message-Id: <20230915183848.1018717-13-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

Align the indices to the mapping_min_order number of pages in
filemap_fault().

Signed-off-by: Pankaj Raghav
---
 mm/filemap.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 3853df90f9cf..f97099de80b3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3288,13 +3288,17 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	struct file *file = vmf->vma->vm_file;
 	struct file *fpin = NULL;
 	struct address_space *mapping = file->f_mapping;
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
 	struct inode *inode = mapping->host;
-	pgoff_t max_idx, index = vmf->pgoff;
+	pgoff_t max_idx, index = round_down(vmf->pgoff, nrpages);
 	struct folio *folio;
 	vm_fault_t ret = 0;
 	bool mapping_locked = false;
 
 	max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+	max_idx = round_up(max_idx, nrpages);
+
 	if (unlikely(index >= max_idx))
 		return VM_FAULT_SIGBUS;
@@ -3386,13 +3390,17 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * We must recheck i_size under page lock.
 	 */
 	max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+	max_idx = round_up(max_idx, nrpages);
+
 	if (unlikely(index >= max_idx)) {
 		folio_unlock(folio);
 		folio_put(folio);
 		return VM_FAULT_SIGBUS;
 	}
 
-	vmf->page = folio_file_page(folio, index);
+	VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
 	return ret | VM_FAULT_LOCKED;
 
 page_not_uptodate:

From patchwork Fri Sep 15 18:38:38 2023
From: Pankaj Raghav
Subject: [RFC 13/23] readahead: set file_ra_state->ra_pages to be at least mapping_min_order
Date: Fri, 15 Sep 2023 20:38:38 +0200
Message-Id: <20230915183848.1018717-14-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

From: Luis Chamberlain

Set file_ra_state->ra_pages in file_ra_state_init() to at least the
mapping_min_order number of pages if bdi->ra_pages is less than that.
Signed-off-by: Luis Chamberlain
---
 mm/readahead.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/readahead.c b/mm/readahead.c
index ef3b23a41973..5c4e7ee64dc1 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -138,7 +138,13 @@ void
 file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping)
 {
+	unsigned int order = mapping_min_folio_order(mapping);
+	unsigned int min_nrpages = 1U << order;
+	unsigned int max_pages = inode_to_bdi(mapping->host)->io_pages;
+
 	ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
+	if (ra->ra_pages < min_nrpages && min_nrpages < max_pages)
+		ra->ra_pages = min_nrpages;
 	ra->prev_pos = -1;
 }
 EXPORT_SYMBOL_GPL(file_ra_state_init);

From patchwork Fri Sep 15 18:38:39 2023
From: Pankaj Raghav
Subject: [RFC 14/23] readahead: allocate folios with mapping_min_order in ra_unbounded()
Date: Fri, 15 Sep 2023 20:38:39 +0200
Message-Id: <20230915183848.1018717-15-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

Allocate folios of mapping_min_order in page_cache_ra_unbounded().
Also adjust the accounting in the loop to advance by folio_nr_pages().
Signed-off-by: Pankaj Raghav
---
 mm/readahead.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 5c4e7ee64dc1..2a9e9020b7cf 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -250,7 +250,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			continue;
 		}
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask,
+					    mapping_min_folio_order(mapping));
 		if (!folio)
 			break;
 		if (filemap_add_folio(mapping, folio, index + i,
@@ -264,7 +265,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		if (i == nr_to_read - lookahead_size)
 			folio_set_readahead(folio);
 		ractl->_workingset |= folio_test_workingset(folio);
-		ractl->_nr_pages++;
+		ractl->_nr_pages += folio_nr_pages(folio);
+		i += folio_nr_pages(folio) - 1;
 	}
 
 	/*

From patchwork Fri Sep 15 18:38:40 2023
From: Pankaj Raghav
Subject: [RFC 15/23] readahead: align with mapping_min_order in force_page_cache_ra()
Date: Fri, 15 Sep 2023 20:38:40 +0200
Message-Id: <20230915183848.1018717-16-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>

Align the index to mapping_min_order in force_page_cache_ra(). This
will ensure that the folios allocated for readahead that are added to
the page cache are aligned to mapping_min_order.
Signed-off-by: Pankaj Raghav
---
 mm/readahead.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/readahead.c b/mm/readahead.c
index 2a9e9020b7cf..838dd9ca8dad 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -318,6 +318,8 @@ void force_page_cache_ra(struct readahead_control *ractl,
 	struct file_ra_state *ra = ractl->ra;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages, index;
+	unsigned int folio_order = mapping_min_folio_order(mapping);
+	unsigned int nr_of_pages = (1 << folio_order);
 
 	if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead))
 		return;
@@ -327,6 +329,13 @@ void force_page_cache_ra(struct readahead_control *ractl,
 	 * be up to the optimal hardware IO size
 	 */
 	index = readahead_index(ractl);
+	if (folio_order && (index & (nr_of_pages - 1))) {
+		unsigned long old_index = index;
+
+		index = round_down(index, nr_of_pages);
+		nr_to_read += (old_index - index);
+	}
+
 	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
 	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
 	while (nr_to_read) {
@@ -335,6 +344,7 @@ void force_page_cache_ra(struct readahead_control *ractl,
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
 		ractl->_index = index;
+		VM_BUG_ON(index & (nr_of_pages - 1));
 
 		do_page_cache_ra(ractl, this_chunk, 0);
 		index += this_chunk;

From patchwork Fri Sep 15 18:38:41 2023

Subject: [RFC 16/23] readahead: add folio with at least mapping_min_order in page_cache_ra_order
Date: Fri, 15 Sep 2023 20:38:41 +0200
Message-Id: <20230915183848.1018717-17-kernel@pankajraghav.com>

From: Luis Chamberlain

Set the folio order to at least mapping_min_order before calling
ra_alloc_folio().

Signed-off-by: Luis Chamberlain
---
 mm/readahead.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 838dd9ca8dad..fb5ff180c39e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -506,6 +506,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 {
 	struct address_space *mapping = ractl->mapping;
 	pgoff_t index = readahead_index(ractl);
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
 	int err = 0;
@@ -535,10 +536,16 @@ void page_cache_ra_order(struct readahead_control *ractl,
 			order = 0;
 		}
 		/* Don't allocate pages past EOF */
-		while (index + (1UL << order) - 1 > limit) {
+		while (order > min_order && index + (1UL << order) - 1 > limit) {
 			if (--order == 1)
 				order = 0;
 		}
+
+		if (order < min_order)
+			order = min_order;
+
+		VM_BUG_ON(index & ((1UL << order) - 1));
+
 		err = ra_alloc_folio(ractl, index, mark, order, gfp);
 		if (err)
 			break;

From patchwork Fri Sep 15 18:38:42 2023

Subject: [RFC 17/23] readahead: set the minimum ra size in get_(init|next)_ra
Date: Fri, 15 Sep 2023 20:38:42 +0200
Message-Id: <20230915183848.1018717-18-kernel@pankajraghav.com>

From: Luis Chamberlain

Make sure the minimum ra size is based on mapping_min_order in
get_init_ra() and get_next_ra(). If the requested ra size is greater
than mapping_min_order pages, align it to a multiple of
mapping_min_order pages.

Signed-off-by: Luis Chamberlain
---
 mm/readahead.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index fb5ff180c39e..7c2660815a01 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -357,9 +357,17 @@
  * for small size, x 4 for medium, and x 2 for large
  * for 128k (32 page) max ra
  * 1-2 page = 16k, 3-4 page 32k, 5-8 page = 64k, > 8 page = 128k initial
+ *
+ * For higher order address space requirements we ensure no initial reads
+ * are ever less than the min number of pages required.
+ *
+ * We *always* cap the max io size allowed by the device.
  */
-static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
+static unsigned long get_init_ra_size(unsigned long size,
+				      unsigned int min_order,
+				      unsigned long max)
 {
+	unsigned int min_nrpages = 1UL << min_order;
 	unsigned long newsize = roundup_pow_of_two(size);
 
 	if (newsize <= max / 32)
@@ -369,6 +377,15 @@ static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
 	else
 		newsize = max;
 
+	if (newsize < min_nrpages) {
+		if (min_nrpages <= max)
+			newsize = min_nrpages;
+		else
+			newsize = round_up(max, min_nrpages);
+	}
+
+	VM_BUG_ON(newsize & (min_nrpages - 1));
+
 	return newsize;
 }
 
@@ -377,14 +394,19 @@
  * return it as the new window size.
  */
 static unsigned long get_next_ra_size(struct file_ra_state *ra,
+				      unsigned int min_order,
 				      unsigned long max)
 {
-	unsigned long cur = ra->size;
+	unsigned int min_nrpages = 1UL << min_order;
+	unsigned long cur = max(ra->size, min_nrpages);
+
+	cur = round_down(cur, min_nrpages);
 
 	if (cur < max / 16)
 		return 4 * cur;
 	if (cur <= max / 2)
 		return 2 * cur;
+
 	return max;
 }

From patchwork Fri Sep 15 18:38:43 2023

Subject: [RFC 18/23] readahead: align ra start and size to mapping_min_order in ondemand_ra()
Date: Fri, 15 Sep 2023 20:38:43 +0200
Message-Id: <20230915183848.1018717-19-kernel@pankajraghav.com>

From: Luis Chamberlain

Align ra->start and ra->size to mapping_min_order in
ondemand_readahead(). This ensures that the folios added to the page
cache are aligned to mapping_min_order number of pages.

Signed-off-by: Luis Chamberlain
---
 mm/readahead.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 7c2660815a01..03fa6f6c8145 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -605,7 +605,11 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	unsigned long add_pages;
 	pgoff_t index = readahead_index(ractl);
 	pgoff_t expected, prev_index;
-	unsigned int order = folio ? folio_order(folio) : 0;
+	unsigned int min_order = mapping_min_folio_order(ractl->mapping);
+	unsigned int min_nrpages = 1UL << min_order;
+	unsigned int order = folio ? folio_order(folio) : min_order;
+
+	VM_BUG_ON(ractl->_index & (min_nrpages - 1));
 
 	/*
 	 * If the request exceeds the readahead window, allow the read to
@@ -627,9 +631,13 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	expected = round_up(ra->start + ra->size - ra->async_size,
 			1UL << order);
 	if (index == expected || index == (ra->start + ra->size)) {
-		ra->start += ra->size;
-		ra->size = get_next_ra_size(ra, max_pages);
+		ra->start += round_down(ra->size, min_nrpages);
+		ra->size = get_next_ra_size(ra, min_order, max_pages);
 		ra->async_size = ra->size;
+
+		VM_BUG_ON(ra->size & ((1UL << min_order) - 1));
+		VM_BUG_ON(ra->start & ((1UL << min_order) - 1));
+
 		goto readit;
 	}
 
@@ -647,13 +655,19 @@ static void ondemand_readahead(struct readahead_control *ractl,
 				max_pages);
 		rcu_read_unlock();
 
+		start = round_down(start, min_nrpages);
+
+		VM_BUG_ON(start & (min_nrpages - 1));
+		VM_BUG_ON(folio->index & (folio_nr_pages(folio) - 1));
+
 		if (!start || start - index > max_pages)
 			return;
 
 		ra->start = start;
 		ra->size = start - index;	/* old async_size */
-		ra->size += req_size;
-		ra->size = get_next_ra_size(ra, max_pages);
+		VM_BUG_ON(ra->size & (min_nrpages - 1));
+		ra->size += round_up(req_size, min_nrpages);
+		ra->size = get_next_ra_size(ra, min_order, max_pages);
 		ra->async_size = ra->size;
 		goto readit;
 	}
@@ -690,7 +704,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 
 initial_readahead:
 	ra->start = index;
-	ra->size = get_init_ra_size(req_size, max_pages);
+	ra->size = get_init_ra_size(req_size, min_order, max_pages);
 	ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size;
 
 readit:
@@ -701,7 +715,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	 * Take care of maximum IO pages as above.
 	 */
 	if (index == ra->start && ra->size == ra->async_size) {
-		add_pages = get_next_ra_size(ra, max_pages);
+		add_pages = get_next_ra_size(ra, min_order, max_pages);
 		if (ra->size + add_pages <= max_pages) {
 			ra->async_size = add_pages;
 			ra->size += add_pages;
@@ -712,6 +726,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	}
 
 	ractl->_index = ra->start;
+	VM_BUG_ON(ractl->_index & (min_nrpages - 1));
 	page_cache_ra_order(ractl, ra, order);
 }

From patchwork Fri Sep 15 18:38:44 2023

Subject: [RFC 19/23] truncate: align index to mapping_min_order
Date: Fri, 15 Sep 2023 20:38:44 +0200
Message-Id: <20230915183848.1018717-20-kernel@pankajraghav.com>
From: Luis Chamberlain

Align indices to mapping_min_order in invalidate_inode_pages2_range(),
mapping_try_invalidate() and truncate_inode_pages_range(). This is
necessary to keep the folios added to the page cache aligned with
mapping_min_order.

Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
---
 mm/truncate.c | 34 ++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 8e3aa9e8618e..d5ce8e30df70 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -337,6 +337,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	int		i;
 	struct folio	*folio;
 	bool		same_folio;
+	unsigned int	order = mapping_min_folio_order(mapping);
+	unsigned int	nrpages = 1U << order;
 
 	if (mapping_empty(mapping))
 		return;
@@ -347,7 +349,9 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	 * start of the range and 'partial_end' at the end of the range.
 	 * Note that 'end' is exclusive while 'lend' is inclusive.
 	 */
-	start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	start = (lstart + (nrpages * PAGE_SIZE) - 1) >> PAGE_SHIFT;
+	start = round_down(start, nrpages);
+
 	if (lend == -1)
 		/*
 		 * lend == -1 indicates end-of-file so we have to set 'end'
@@ -356,7 +360,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		 */
 		end = -1;
 	else
-		end = (lend + 1) >> PAGE_SHIFT;
+		end = round_down((lend + 1) >> PAGE_SHIFT, nrpages);
 
 	folio_batch_init(&fbatch);
 	index = start;
@@ -372,8 +376,9 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		cond_resched();
 	}
 
-	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
-	folio = __filemap_get_folio(mapping, lstart >> PAGE_SHIFT, FGP_LOCK, 0);
+	same_folio = round_down(lstart >> PAGE_SHIFT, nrpages) ==
+		     round_down(lend >> PAGE_SHIFT, nrpages);
+	folio = __filemap_get_folio(mapping, start, FGP_LOCK, 0);
 	if (!IS_ERR(folio)) {
 		same_folio = lend < folio_pos(folio) + folio_size(folio);
 		if (!truncate_inode_partial_folio(folio, lstart, lend)) {
@@ -387,7 +392,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	}
 
 	if (!same_folio) {
-		folio = __filemap_get_folio(mapping, lend >> PAGE_SHIFT,
+		folio = __filemap_get_folio(mapping,
+				round_down(lend >> PAGE_SHIFT, nrpages),
 				FGP_LOCK, 0);
 		if (!IS_ERR(folio)) {
 			if (!truncate_inode_partial_folio(folio, lstart, lend))
@@ -497,15 +503,18 @@ EXPORT_SYMBOL(truncate_inode_pages_final);
 unsigned long mapping_try_invalidate(struct address_space *mapping,
 		pgoff_t start, pgoff_t end, unsigned long *nr_failed)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
 	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
-	pgoff_t index = start;
+	pgoff_t index = round_up(start, nrpages);
+	pgoff_t end_idx = round_down(end, nrpages);
 	unsigned long ret;
 	unsigned long count = 0;
 	int i;
 
 	folio_batch_init(&fbatch);
-	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_lock_entries(mapping, &index, end_idx, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
@@ -618,9 +627,11 @@ static int folio_launder(struct address_space *mapping, struct folio *folio)
 int invalidate_inode_pages2_range(struct address_space *mapping,
 		pgoff_t start, pgoff_t end)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
 	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
-	pgoff_t index;
+	pgoff_t index, end_idx;
 	int i;
 	int ret = 0;
 	int ret2 = 0;
@@ -630,8 +641,9 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		return 0;
 
 	folio_batch_init(&fbatch);
-	index = start;
-	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
+	index = round_up(start, nrpages);
+	end_idx = round_down(end, nrpages);
+	while (find_get_entries(mapping, &index, end_idx, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
@@ -660,6 +672,8 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				continue;
 			}
 			VM_BUG_ON_FOLIO(!folio_contains(folio, indices[i]), folio);
+			VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
+			VM_BUG_ON_FOLIO(folio->index & (nrpages - 1), folio);
 			folio_wait_writeback(folio);
 
 			if (folio_mapped(folio))

From patchwork Fri Sep 15 18:38:45 2023

Subject: [RFC 20/23] mm: round down folio split requirements
Date: Fri, 15 Sep 2023 20:38:45 +0200
Message-Id: <20230915183848.1018717-21-kernel@pankajraghav.com>

From: Luis Chamberlain

When we truncate, we check whether a large folio can be split by
verifying that the number of userspace-mapped pages matches
folio_nr_pages() - 1. But if the filesystem or block device backing the
mapping has a minimum folio order, that order must be respected, and we
should only split down to the minimum order page requirements.

Signed-off-by: Luis Chamberlain
---
 mm/huge_memory.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f899b3500419..e608a805c79f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2617,16 +2617,24 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 bool can_split_folio(struct folio *folio, int *pextra_pins)
 {
 	int extra_pins;
+	unsigned int min_order = 0;
+	unsigned int nrpages;
 
 	/* Additional pins from page cache */
-	if (folio_test_anon(folio))
+	if (folio_test_anon(folio)) {
 		extra_pins = folio_test_swapcache(folio) ?
 				folio_nr_pages(folio) : 0;
-	else
+	} else {
 		extra_pins = folio_nr_pages(folio);
+		if (folio->mapping)
+			min_order = mapping_min_folio_order(folio->mapping);
+	}
+
+	nrpages = 1UL << min_order;
+
 	if (pextra_pins)
 		*pextra_pins = extra_pins;
-	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - 1;
+	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - nrpages;
 }

From patchwork Fri Sep 15 18:38:46 2023

Subject: [RFC 21/23] xfs: expose block size in stat
Date: Fri, 15 Sep 2023 20:38:46 +0200
Message-Id: <20230915183848.1018717-22-kernel@pankajraghav.com>

From: Dave Chinner

For block sizes larger than the page size, the unit of efficient IO is
the block size, not the page size. Leaving stat() to report PAGE_SIZE
as the block size causes test programs like fsx to issue illegal ranges
for operations that require block size alignment (e.g. fallocate()
insert range). Hence update the preferred IO size to reflect the block
size in this case.
Signed-off-by: Dave Chinner
[mcgrof: forward rebase in consideration for commit dd2d535e3fb29d
("xfs: cleanup calculating the stat optimal I/O size")]
Signed-off-by: Luis Chamberlain
---
 fs/xfs/xfs_iops.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index 2ededd3f6b8c..080a79a81c46 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -515,6 +515,8 @@ xfs_stat_blksize(
 	struct xfs_inode	*ip)
 {
 	struct xfs_mount	*mp = ip->i_mount;
+	unsigned long		default_size = max_t(unsigned long, PAGE_SIZE,
+						     mp->m_sb.sb_blocksize);
 
 	/*
 	 * If the file blocks are being allocated from a realtime volume, then
@@ -543,7 +545,7 @@ xfs_stat_blksize(
 		return 1U << mp->m_allocsize_log;
 	}
 
-	return PAGE_SIZE;
+	return default_size;
 }
 
 STATIC int

From patchwork Fri Sep 15 18:38:47 2023

Subject: [RFC 22/23] xfs: enable block size larger than page size support
Date: Fri, 15 Sep 2023 20:38:47 +0200
Message-Id: <20230915183848.1018717-23-kernel@pankajraghav.com>

From: Pankaj Raghav

Currently we don't support a blocksize that is twice the page size, due
to the limitation of having at least three pages in a large folio[1].
[1] https://lore.kernel.org/all/ZH0GvxAdw1RO2Shr@casper.infradead.org/

Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
---
 fs/xfs/xfs_mount.c | 9 +++++++--
 fs/xfs/xfs_super.c | 7 ++-----
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index aed5be5508fe..4272898c508a 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -131,11 +131,16 @@ xfs_sb_validate_fsb_count(
 	xfs_sb_t	*sbp,
 	uint64_t	nblocks)
 {
-	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
+	unsigned long	mapping_count;
+
+	if (sbp->sb_blocklog <= PAGE_SHIFT)
+		mapping_count = nblocks >> (PAGE_SHIFT - sbp->sb_blocklog);
+	else
+		mapping_count = nblocks << (sbp->sb_blocklog - PAGE_SHIFT);
 
 	/* Limited by ULONG_MAX of page cache index */
-	if (nblocks >> (PAGE_SHIFT - sbp->sb_blocklog) > ULONG_MAX)
+	if (mapping_count > ULONG_MAX)
 		return -EFBIG;
 	return 0;
 }
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 1f77014c6e1a..75bf4d23051c 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1651,13 +1651,10 @@ xfs_fs_fill_super(
 		goto out_free_sb;
 	}
 
-	/*
-	 * Until this is fixed only page-sized or smaller data blocks work.
-	 */
-	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
+	if (mp->m_sb.sb_blocksize == (2 * PAGE_SIZE)) {
 		xfs_warn(mp,
 "File system with blocksize %d bytes. "
-"Only pagesize (%ld) or less will currently work.",
+"Blocksize that is twice the pagesize %ld does not currently work.",
 			mp->m_sb.sb_blocksize, PAGE_SIZE);
 		error = -ENOSYS;
 		goto out_free_sb;

From patchwork Fri Sep 15 18:38:48 2023
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13387528
From: Pankaj Raghav
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: p.raghav@samsung.com, david@fromorbit.com, da.gomez@samsung.com,
 akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
 willy@infradead.org, djwong@kernel.org, linux-mm@kvack.org,
 chandan.babu@oracle.com, mcgrof@kernel.org, gost.dev@samsung.com
Subject: [RFC 23/23] xfs: set minimum order folio for page cache based on
 blocksize
Date: Fri, 15 Sep 2023 20:38:48 +0200
Message-Id: <20230915183848.1018717-24-kernel@pankajraghav.com>
In-Reply-To: <20230915183848.1018717-1-kernel@pankajraghav.com>
References: <20230915183848.1018717-1-kernel@pankajraghav.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Pankaj Raghav

Enabling a block size > PAGE_SIZE is only possible if we can ensure
that filesystem allocations for the block size are treated atomically,
and we do this with the minimum folio order requirement for the inode.
This allows the page cache to treat this inode atomically even if the
block layer may treat it separately. For instance, on x86 this enables
eventual usage of a block size > 4k, so long as a sector size of 4k is
used.
Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
---
 fs/xfs/xfs_icache.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index aacc7eec2497..81f07503f5ca 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -73,6 +73,7 @@ xfs_inode_alloc(
 	xfs_ino_t		ino)
 {
 	struct xfs_inode	*ip;
+	int			min_order = 0;
 
 	/*
 	 * XXX: If this didn't occur in transactions, we could drop GFP_NOFAIL
@@ -88,7 +89,8 @@ xfs_inode_alloc(
 	/* VFS doesn't initialise i_mode or i_state! */
 	VFS_I(ip)->i_mode = 0;
 	VFS_I(ip)->i_state = 0;
-	mapping_set_large_folios(VFS_I(ip)->i_mapping);
+	min_order = max(min_order, ilog2(mp->m_sb.sb_blocksize) - PAGE_SHIFT);
+	mapping_set_folio_orders(VFS_I(ip)->i_mapping, min_order, MAX_PAGECACHE_ORDER);
 
 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -313,6 +315,7 @@ xfs_reinit_inode(
 	dev_t			dev = inode->i_rdev;
 	kuid_t			uid = inode->i_uid;
 	kgid_t			gid = inode->i_gid;
+	int			min_order = 0;
 
 	error = inode_init_always(mp->m_super, inode);
 
@@ -323,7 +326,8 @@ xfs_reinit_inode(
 	inode->i_rdev = dev;
 	inode->i_uid = uid;
 	inode->i_gid = gid;
-	mapping_set_large_folios(inode->i_mapping);
+	min_order = max(min_order, ilog2(mp->m_sb.sb_blocksize) - PAGE_SHIFT);
+	mapping_set_folio_orders(inode->i_mapping, min_order, MAX_PAGECACHE_ORDER);
 
 	return error;
 }