Message ID | 20180509074830.16196-8-hch@lst.de (mailing list archive)
---|---
State | New, archived
On Wed, May 09, 2018 at 09:48:04AM +0200, Christoph Hellwig wrote:
> That way file systems don't have to go spotting for non-contiguous pages
> and work around them.  It also kicks off I/O earlier, allowing it to
> finish earlier and reduce latency.

Makes sense.

> +			/*
> +			 * Page already present?  Kick off the current batch of
> +			 * contiguous pages before continueing with the next

"continuing" (no 'e')
diff --git a/mm/readahead.c b/mm/readahead.c
index 16d0cb1e2616..3f608e00286d 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -177,8 +177,18 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
 		rcu_read_lock();
 		page = radix_tree_lookup(&mapping->i_pages, page_offset);
 		rcu_read_unlock();
-		if (page && !radix_tree_exceptional_entry(page))
+		if (page && !radix_tree_exceptional_entry(page)) {
+			/*
+			 * Page already present?  Kick off the current batch of
+			 * contiguous pages before continueing with the next
+			 * batch.
+			 */
+			if (nr_pages)
+				read_pages(mapping, filp, &page_pool, nr_pages,
+						gfp_mask);
+			nr_pages = 0;
 			continue;
+		}

 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
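For readers skimming the hunk out of context, here is a minimal user-space
sketch of the batching behaviour the patch introduces: as soon as an
already-cached page interrupts the contiguous run, the batch collected so far
is submitted instead of waiting until the end of the loop.  This is only an
illustration; submit_batch() stands in for read_pages() and the cached[]
array stands in for the radix-tree lookup, neither of which is kernel code.

/*
 * Stand-alone sketch (not kernel code): models how the patched loop
 * submits one I/O batch per contiguous run of missing pages.
 */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for read_pages(): submit one contiguous batch of nr pages. */
static void submit_batch(unsigned long start, unsigned long nr)
{
	if (nr)
		printf("submit %lu page(s) starting at index %lu\n", nr, start);
}

int main(void)
{
	/* true = page already in the cache, false = needs to be read */
	const bool cached[] = { false, false, true, false, true, true, false };
	const unsigned long total = sizeof(cached) / sizeof(cached[0]);
	unsigned long nr_pages = 0, batch_start = 0;

	for (unsigned long i = 0; i < total; i++) {
		if (cached[i]) {
			/*
			 * Already-present page: flush the batch collected so
			 * far, like the new "if (nr_pages)" hunk, so each
			 * submission stays contiguous and I/O starts early.
			 */
			submit_batch(batch_start, nr_pages);
			nr_pages = 0;
			continue;
		}
		if (!nr_pages)
			batch_start = i;
		nr_pages++;
	}
	/* Final batch, mirroring the read_pages() call after the loop. */
	submit_batch(batch_start, nr_pages);
	return 0;
}

With the sample cached[] above this prints three submissions (indices 0-1,
3, and 6), i.e. one per contiguous gap, rather than a single submission that
a filesystem would then have to split up itself.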
That way file systems don't have to go spotting for non-contiguous pages
and work around them.  It also kicks off I/O earlier, allowing it to
finish earlier and reduce latency.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/readahead.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)