
Fixup patch for [PATCH 0/2] generic_file_buffered_read() refactoring & optimization

Message ID 20200630001233.GA39358@moria.home.lan (mailing list archive)
State New, archived
Series Fixup patch for [PATCH 0/2] generic_file_buffered_read() refactoring & optimization

Commit Message

Kent Overstreet June 30, 2020, 12:12 a.m. UTC
Andrew - here's a fixup patch; I got a bug report where we were trying to do an
order 7 allocation here:

-- >8 --
Subject: [PATCH] fixup! fs: generic_file_buffered_read() now uses
 find_get_pages_contig

We shouldn't try to pin too many pages at once; reads can be almost
arbitrarily big, so cap the number of pages we pin (and the size of the
pages array we allocate) at 512.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
---
 mm/filemap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index d8bd5e9647..b3a2aad1b7 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2220,8 +2220,9 @@  static ssize_t generic_file_buffered_read(struct kiocb *iocb,
 	struct inode *inode = mapping->host;
 	size_t orig_count = iov_iter_count(iter);
 	struct page *pages_onstack[8], **pages = NULL;
-	unsigned int nr_pages = ((iocb->ki_pos + iter->count + PAGE_SIZE - 1) >> PAGE_SHIFT) -
-		(iocb->ki_pos >> PAGE_SHIFT);
+	unsigned int nr_pages = min_t(unsigned int, 512,
+			((iocb->ki_pos + iter->count + PAGE_SIZE - 1) >> PAGE_SHIFT) -
+			(iocb->ki_pos >> PAGE_SHIFT));
 	int i, pg_nr, error = 0;
 	bool writably_mapped;
 	loff_t isize, end_offset;
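
For context, here is a rough sketch of the sizing arithmetic behind the bug
report and the 512-entry cap. This is illustrative userspace C, not part of
the patch; the 4 KiB page size and 8-byte pointer size are assumptions for a
typical 64-bit configuration, not taken from the mail.

/* Illustrative sketch, not kernel code: shows why an uncapped read size
 * blows up the pages[] allocation, and what the 512-entry cap buys.
 * Assumes 4 KiB pages and 8-byte pointers (64-bit). */
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define PAGE_SHIFT	12

/* Same rounding as the patch: pages spanned by [pos, pos + count) */
static unsigned long long nr_pages_for(unsigned long long pos,
				       unsigned long long count)
{
	return ((pos + count + PAGE_SIZE - 1) >> PAGE_SHIFT) -
		(pos >> PAGE_SHIFT);
}

int main(void)
{
	/* A 256 MiB read starting at offset 0 spans 65536 pages... */
	unsigned long long nr = nr_pages_for(0, 256ULL << 20);

	/* ...so the pages[] pointer array alone would be 512 KiB, i.e. an
	 * order-7 allocation (128 contiguous 4 KiB pages). */
	printf("uncapped: %llu pages, %llu KiB array\n",
	       nr, nr * sizeof(void *) >> 10);

	/* With the fixup's min_t(..., 512, ...) cap, the array tops out at
	 * 512 * 8 bytes = 4 KiB: a single page, order 0. */
	unsigned long long capped = nr < 512 ? nr : 512;
	printf("capped:   %llu pages, %llu KiB array\n",
	       capped, capped * sizeof(void *) >> 10);

	return 0;
}

In other words, with the cap the pages[] array never needs more than one
page's worth of pointers; reads larger than 512 pages are simply worked
through in batches rather than requiring a single high-order allocation up
front.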