
Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()"

Message ID 20241126145208.985-1-jack@suse.cz (mailing list archive)
State New
Series Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()"

Commit Message

Jan Kara Nov. 26, 2024, 2:52 p.m. UTC
This reverts commit 7c877586da3178974a8a94577b6045a48377ff25.

Anders and Philippe have reported that recent kernels occasionally hang
in the readahead code when used with NFS. The problem has been bisected
to 7c877586da3 ("readahead: properly shorten readahead when falling back
to do_page_cache_ra()"). The cause of the problem is that ra->size can
be shrunk by a read_pages() call, so we can end up calling
do_page_cache_ra() with a negative (read: huge positive) number of
pages. Let's revert 7c877586da3 for now until we find a proper way for
the logic in read_pages() and page_cache_ra_order() to coexist. This can
lead to reduced readahead throughput due to readahead window confusion,
but that's better than outright hangs.

Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
Reported-by: Philippe Troin <phil@fifi.org>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/readahead.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
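
To make the failure mode concrete, here is a minimal standalone C sketch
(with made-up values, not kernel code) of the unsigned wraparound described
in the commit message:

    #include <stdio.h>

    int main(void)
    {
            /* Made-up values mimicking the failure: read_pages() shrank
             * the window to 4 pages after 8 pages had already been
             * submitted (index advanced from 100 to 108). */
            unsigned long ra_size = 4;
            unsigned long start = 100, index = 108;

            /* The reverted fallback computed ra->size - (index - start).
             * Here that is 4 - 8, which on unsigned long wraps to a huge
             * positive page count instead of going negative. */
            unsigned long nr = ra_size - (index - start);

            printf("nr_to_read = %lu\n", nr); /* 18446744073709551612 on 64-bit */
            return 0;
    }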

Comments

Philippe Troin Nov. 26, 2024, 3:01 p.m. UTC | #1
On Tue, 2024-11-26 at 15:52 +0100, Jan Kara wrote:
> This reverts commit 7c877586da3178974a8a94577b6045a48377ff25.
> 
> Anders and Philippe have reported that recent kernels occasionally hang
> in the readahead code when used with NFS. The problem has been bisected
> to 7c877586da3 ("readahead: properly shorten readahead when falling back
> to do_page_cache_ra()"). The cause of the problem is that ra->size can
> be shrunk by a read_pages() call, so we can end up calling
> do_page_cache_ra() with a negative (read: huge positive) number of
> pages. Let's revert 7c877586da3 for now until we find a proper way for
> the logic in read_pages() and page_cache_ra_order() to coexist. This can
> lead to reduced readahead throughput due to readahead window confusion,
> but that's better than outright hangs.
> 
> Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
> Reported-by: Philippe Troin <phil@fifi.org>
> CC: stable@vger.kernel.org
> Signed-off-by: Jan Kara <jack@suse.cz>

You can add a
   Tested-by: Philippe Troin <phil@fifi.org>
tag, as I have tested and validated the fix with this revert applied on top of 6.11.10.

Phil.

Patch

diff --git a/mm/readahead.c b/mm/readahead.c
index 8f1cf599b572..ea650b8b02fb 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -458,8 +458,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t start = readahead_index(ractl);
-	pgoff_t index = start;
+	pgoff_t index = readahead_index(ractl);
 	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
@@ -522,7 +521,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }
 
 static unsigned long ractl_max_pages(struct readahead_control *ractl,
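
The commit message leaves open how the shortening logic and read_pages()
might eventually coexist. Purely as an illustration of the arithmetic
involved (a hypothetical standalone helper, not a proposed patch), clamping
the fallback count keeps the subtraction from wrapping:

    #include <stdio.h>

    /* Hypothetical, illustrative-only helper: how much of the readahead
     * window remains, treating a window shrunk below the number of pages
     * already submitted as fully consumed. */
    static unsigned long clamped_fallback_count(unsigned long ra_size,
                                                unsigned long start,
                                                unsigned long index)
    {
            unsigned long already = index - start; /* pages already submitted */

            return ra_size > already ? ra_size - already : 0;
    }

    int main(void)
    {
            /* Shrunk window: 4 - 8 would wrap; the clamp yields 0. */
            printf("%lu\n", clamped_fallback_count(4, 100, 108));
            /* Intact window: 32 - 8 leaves 24 pages to read. */
            printf("%lu\n", clamped_fallback_count(32, 100, 108));
            return 0;
    }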