
[2/3] mm: fix long time stall from mm_populate

Message ID: 20200212221614.215302-2-minchan@kernel.org
State: New, archived
Series: [1/3] mm: Don't bother dropping mmap_sem for zero size readahead

Commit Message

Minchan Kim Feb. 12, 2020, 10:16 p.m. UTC
Basically, the fault handler releases mmap_sem before requesting readahead
and is then supposed to retry the page cache lookup with FAULT_FLAG_TRIED
so that it avoids the livelock of infinite retries.
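
(For reference, the retry protocol on the page-fault side looks roughly
like the sketch below. This is condensed from the arch fault handlers of
this era, not the exact kernel code.)

	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
	vm_fault_t fault;
retry:
	down_read(&mm->mmap_sem);
	/* find_vma() and access checks omitted */
	fault = handle_mm_fault(vma, address, flags);
	if (fault & VM_FAULT_RETRY) {
		/*
		 * handle_mm_fault() already dropped mmap_sem, e.g. to wait
		 * for readahead I/O without holding the lock.  Retry once,
		 * but tell the handler not to drop mmap_sem again.
		 */
		flags &= ~FAULT_FLAG_ALLOW_RETRY;
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}
	up_read(&mm->mmap_sem);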

However, what happens if the fault handler finds a page in the page
cache, and the page has the readahead marker but is still under
writeback? Add one more condition: it happens under mm_populate,
which repeats faulting unless it encounters an error. So let's
assemble the conditions below.

__mm_populate
for (...)
  get_user_pages(faulty_address)
    handle_mm_fault
      filemap_fault
        find a page from page cache (PG_uptodate|PG_readahead|PG_writeback)
        it will return VM_FAULT_RETRY
  continue with faulty_address

IOW, it will repeat the fault retry logic until the page is finally
written back, which causes a big latency spike of several seconds.
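
The stall comes from the shape of the __mm_populate() loop: when
populate_vma_page_range() makes no progress because the fault bailed out
with VM_FAULT_RETRY, the next iteration starts at the very same address.
A simplified sketch of the pre-patch loop (condensed from mm/gup.c,
details elided):

	for (nstart = start; nstart < end; nstart = nend) {
		if (!locked) {
			/* the fault path dropped mmap_sem; take it again */
			locked = 1;
			down_read(&mm->mmap_sem);
			vma = NULL;
		}
		/* vma lookup and flag checks elided */
		ret = populate_vma_page_range(vma, nstart, nend, &locked);
		if (ret < 0) {
			if (ignore_errors) {
				ret = 0;
				continue;	/* continue at next VMA */
			}
			break;
		}
		/*
		 * ret == 0 means zero pages were faulted in, so
		 * nend == nstart and the next iteration retries exactly
		 * the same address -- for as long as the page stays
		 * under writeback.
		 */
		nend = nstart + ret * PAGE_SIZE;
		ret = 0;
	}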

This patch solves the issue by turning off the fault retry logic on
the second trial.
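
Passing NULL for 'locked' works because of the flag handling in
faultin_page(): FAULT_FLAG_ALLOW_RETRY is only set when the caller can
cope with mmap_sem being dropped. Roughly (condensed from the mm/gup.c
of this era; details may differ slightly):

	/* faultin_page(), condensed */
	unsigned int fault_flags = 0;

	if (locked)
		fault_flags |= FAULT_FLAG_ALLOW_RETRY;

	ret = handle_mm_fault(vma, address, fault_flags);
	if (ret & VM_FAULT_RETRY) {
		if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
			*locked = 0;
		return -EBUSY;
	}

So on the second trial the fault handler will not drop mmap_sem and
return VM_FAULT_RETRY; it waits for the page synchronously and makes
forward progress instead.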

Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/gup.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Comments

Andrew Morton April 21, 2020, 4:12 a.m. UTC | #1
On Wed, 12 Feb 2020 14:16:13 -0800 Minchan Kim <minchan@kernel.org> wrote:

> Basically, the fault handler releases mmap_sem before requesting readahead
> and is then supposed to retry the page cache lookup with FAULT_FLAG_TRIED
> so that it avoids the livelock of infinite retries.
> 
> However, what happens if the fault handler finds a page in the page
> cache, and the page has the readahead marker but is still under
> writeback? Add one more condition: it happens under mm_populate,
> which repeats faulting unless it encounters an error. So let's
> assemble the conditions below.
> 
> __mm_populate
> for (...)
>   get_user_pages(faulty_address)
>     handle_mm_fault
>       filemap_fault
>         find a page from page cache (PG_uptodate|PG_readahead|PG_writeback)
>         it will return VM_FAULT_RETRY
>   continue with faulty_address
> 
> IOW, it will repeat the fault retry logic until the page is finally
> written back, which causes a big latency spike of several seconds.
> 
> This patch solves the issue by turning off the fault retry logic on
> the second trial.

I guess a resend would be helpful, to refresh memories.

Please mention in the changelog whether this "long time stall" has been
observed in practice.  If so, under what conditions, how often and how
long was it?  etcetera.

Patch

diff --git a/mm/gup.c b/mm/gup.c
index 1b521e0ac1de..b3f825092abf 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1196,6 +1196,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 	struct vm_area_struct *vma = NULL;
 	int locked = 0;
 	long ret = 0;
+	bool tried = false;
 
 	end = start + len;
 
@@ -1226,14 +1227,18 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 		 * double checks the vma flags, so that it won't mlock pages
 		 * if the vma was already munlocked.
 		 */
-		ret = populate_vma_page_range(vma, nstart, nend, &locked);
+		ret = populate_vma_page_range(vma, nstart, nend,
+						tried ? NULL : &locked);
 		if (ret < 0) {
 			if (ignore_errors) {
 				ret = 0;
 				continue;	/* continue at next VMA */
 			}
 			break;
-		}
+		} else if (ret == 0)
+			tried = true;
+		else
+			tried = false;
 		nend = nstart + ret * PAGE_SIZE;
 		ret = 0;
 	}