
[RFC,16/23] readahead: add folio with at least mapping_min_order in page_cache_ra_order

Message ID: 20230915183848.1018717-17-kernel@pankajraghav.com (mailing list archive)
State: New, archived
Series: Enable block size > page size in XFS

Commit Message

Pankaj Raghav (Samsung) Sept. 15, 2023, 6:38 p.m. UTC
From: Luis Chamberlain <mcgrof@kernel.org>

Set the folio order to at least mapping_min_order before calling
ra_alloc_folio(), so that readahead never adds folios smaller than the
minimum order the mapping requires. The EOF check is only allowed to
reduce the order down to mapping_min_order, and the result is clamped
up to mapping_min_order before allocation.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 mm/readahead.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

Patch

diff --git a/mm/readahead.c b/mm/readahead.c
index 838dd9ca8dad..fb5ff180c39e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -506,6 +506,7 @@  void page_cache_ra_order(struct readahead_control *ractl,
 {
 	struct address_space *mapping = ractl->mapping;
 	pgoff_t index = readahead_index(ractl);
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
 	int err = 0;
@@ -535,10 +536,16 @@  void page_cache_ra_order(struct readahead_control *ractl,
 				order = 0;
 		}
 		/* Don't allocate pages past EOF */
-		while (index + (1UL << order) - 1 > limit) {
+		while (order > min_order && index + (1UL << order) - 1 > limit) {
 			if (--order == 1)
 				order = 0;
 		}
+
+		if (order < min_order)
+			order = min_order;
+
+		VM_BUG_ON(index & ((1UL << order) - 1));
+
 		err = ra_alloc_folio(ractl, index, mark, order, gfp);
 		if (err)
 			break;
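
A minimal userspace model of the order selection this hunk changes may
make the near-EOF behaviour easier to see. clamp_order() and the values
in main() are illustrative only, not kernel code; limit stands for the
index of the last page of the file, as in page_cache_ra_order().

#include <stdio.h>

/*
 * Standalone model of the patched order-selection logic in
 * page_cache_ra_order(): shrink the order to stay under EOF, but
 * never below the mapping's minimum folio order.
 */
static unsigned int clamp_order(unsigned long index, unsigned int order,
				unsigned int min_order, unsigned long limit)
{
	/* Don't allocate pages past EOF, but stop shrinking at min_order. */
	while (order > min_order && index + (1UL << order) - 1 > limit) {
		if (--order == 1)
			order = 0;
	}

	/* Never hand the allocator an order below the mapping minimum. */
	if (order < min_order)
		order = min_order;

	return order;
}

int main(void)
{
	unsigned long limit = 16;	/* 17-page file: last index is 16 */

	/* Order 4 at index 12 would end at page 27; shrinks to order 2. */
	printf("%u\n", clamp_order(12, 4, 2, limit));	/* prints 2 */

	/*
	 * At index 16 even an order-2 folio ends at page 19, past EOF,
	 * but min_order wins and the folio straddles i_size.
	 */
	printf("%u\n", clamp_order(16, 4, 2, limit));	/* prints 2 */
	return 0;
}

Note the ordering of the two checks: because the clamp runs after the
EOF loop, a mapping_min_order folio near end-of-file may extend past
i_size, and the VM_BUG_ON() only asserts that index is naturally
aligned to the chosen order, not that the folio fits under limit.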