[RFC,07/23] filemap: align the index to mapping_min_order in __filemap_add_folio()

Message ID 20230915183848.1018717-8-kernel@pankajraghav.com (mailing list archive)
State New, archived
Series: Enable block size > page size in XFS

Commit Message

Pankaj Raghav (Samsung) Sept. 15, 2023, 6:38 p.m. UTC
From: Luis Chamberlain <mcgrof@kernel.org>

Align the index to a multiple of mapping_min_order pages when
initializing the XA_STATE and when calling xas_set_order().

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 mm/filemap.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
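
As a worked example of what the new rounding does (a minimal userspace
sketch with a hypothetical min_order; round_down() is re-implemented
here to mirror the kernel macro for power-of-two alignments): with
min_order = 2 a folio covers at least 4 pages, so an index of 7 rounds
down to 4.

#include <assert.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's round_down(), valid when the
 * alignment is a power of two. */
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	unsigned int min_order = 2;	/* hypothetical: 16KiB min folio on 4KiB pages */
	unsigned long nr_of_pages = 1UL << min_order;	/* 4 pages */
	unsigned long index = 7;	/* page cache index passed in */
	unsigned long rounded_index = round_down(index, nr_of_pages);

	printf("index %lu -> rounded_index %lu\n", index, rounded_index);
	assert(rounded_index == 4);
	return 0;
}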

Comments

Matthew Wilcox Sept. 15, 2023, 7:48 p.m. UTC | #1
On Fri, Sep 15, 2023 at 08:38:32PM +0200, Pankaj Raghav wrote:
> From: Luis Chamberlain <mcgrof@kernel.org>
> 
> Align the index to a multiple of mapping_min_order pages when
> initializing the XA_STATE and when calling xas_set_order().

Not sure why this one's necessary either.  The index should already be
aligned to folio_order.

Some bits of it are clearly needed, like checking that folio_order() >=
min_order.
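
To see why (a standalone sketch with hypothetical values, not kernel
code): __filemap_add_folio() already asserts that the index is a
multiple of the folio size, so whenever folio_order() >= min_order
holds, rounding the index down to min_order pages cannot change it.

#include <assert.h>

int main(void)
{
	unsigned int folio_order = 4;	/* hypothetical 16-page folio */
	unsigned int min_order = 2;	/* hypothetical minimum folio order */
	unsigned long index = 48;	/* multiple of 1 << folio_order */

	/* The existing VM_BUG_ON_FOLIO() in __filemap_add_folio(),
	 * expressed as a plain assert. */
	assert((index & ((1UL << folio_order) - 1)) == 0);

	/* A multiple of the larger power-of-two size is automatically a
	 * multiple of the smaller one, so round_down() to min_order
	 * pages is a no-op as long as folio_order >= min_order. */
	assert(folio_order >= min_order);
	assert((index & ((1UL << min_order) - 1)) == 0);
	return 0;
}
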
Luis Chamberlain Sept. 18, 2023, 6:32 p.m. UTC | #2
On Fri, Sep 15, 2023 at 08:48:43PM +0100, Matthew Wilcox wrote:
> On Fri, Sep 15, 2023 at 08:38:32PM +0200, Pankaj Raghav wrote:
> > From: Luis Chamberlain <mcgrof@kernel.org>
> > 
> > Align the index to a multiple of mapping_min_order pages when
> > initializing the XA_STATE and when calling xas_set_order().
> 
> Not sure why this one's necessary either.  The index should already be
> aligned to folio_order.

Oh, that was not obvious. Would a VM_BUG_ON_FOLIO() be OK then?

> Some bits of it are clearly needed, like checking that folio_order() >=
> min_order.

Thanks,

  Luis
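
For reference, one possible shape for the assertion-only alternative
floated above (a sketch against the helpers introduced earlier in this
series, not what the posted patch does; the alignment check is an
assumption about the intended invariant):

	unsigned int min_order = mapping_min_folio_order(mapping);

	/* Assert the invariants instead of silently rounding. */
	VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
	VM_BUG_ON_FOLIO(index & ((1UL << min_order) - 1), folio);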

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 33de71bfa953..15bc810bfc89 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -859,7 +859,10 @@  EXPORT_SYMBOL_GPL(replace_page_cache_folio);
 noinline int __filemap_add_folio(struct address_space *mapping,
 		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
 {
-	XA_STATE(xas, &mapping->i_pages, index);
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nr_of_pages = (1U << min_order);
+	pgoff_t rounded_index = round_down(index, nr_of_pages);
+	XA_STATE(xas, &mapping->i_pages, rounded_index);
 	int huge = folio_test_hugetlb(folio);
 	bool charged = false;
 	long nr = 1;
@@ -875,8 +878,8 @@  noinline int __filemap_add_folio(struct address_space *mapping,
 		charged = true;
 	}
 
-	VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
-	xas_set_order(&xas, index, folio_order(folio));
+	VM_BUG_ON_FOLIO(rounded_index & (folio_nr_pages(folio) - 1), folio);
+	xas_set_order(&xas, rounded_index, folio_order(folio));
 	nr = folio_nr_pages(folio);
 
 	gfp &= GFP_RECLAIM_MASK;
@@ -913,6 +916,7 @@  noinline int __filemap_add_folio(struct address_space *mapping,
 			}
 		}
 
+		VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
 		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;