
[PATCHv2,4/8] khugepaged: Drain LRU add pagevec after swapin

Message ID 20200403112928.19742-5-kirill.shutemov@linux.intel.com (mailing list archive)
State New, archived
Series thp/khugepaged improvements and CoW semantics

Commit Message

Kirill A. Shutemov April 3, 2020, 11:29 a.m. UTC
__collapse_huge_page_isolate() may fail due to extra pin in the LRU add
pagevec. It's petty common for swapin case: we swap in pages just to
fail due to the extra pin.

Drain LRU add pagevec on sucessfull swapin.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/khugepaged.c | 5 +++++
 1 file changed, 5 insertions(+)
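
For context, the failure mode described in the commit message comes from the strict reference-count check in __collapse_huge_page_isolate(); roughly (paraphrased from mm/khugepaged.c of this era, the exact form may differ between kernel versions):

        /*
         * The page must only be referenced by the scanned process
         * and page swap cache. A page still sitting in a per-CPU LRU
         * add pagevec carries one extra reference, so this check fails
         * and the collapse is aborted.
         */
        if (page_count(page) != 1 + PageSwapCache(page)) {
                unlock_page(page);
                result = SCAN_PAGE_COUNT;
                goto out;
        }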

Comments

Zi Yan April 6, 2020, 1:11 p.m. UTC | #1
On 3 Apr 2020, at 7:29, Kirill A. Shutemov wrote:

> __collapse_huge_page_isolate() may fail due to extra pin in the LRU add
> pagevec. It's petty common for swapin case: we swap in pages just to

s/petty/pretty

> fail due to the extra pin.
>
> Drain LRU add pagevec on sucessfull swapin.

s/sucessfull/successful

>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  mm/khugepaged.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index fdc10ffde1ca..57ff287caf6b 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>         }
>         vmf.pte--;
>         pte_unmap(vmf.pte);
> +
> +       /* Drain LRU add pagevec to remove extra pin on the swapped in pages */
> +       if (swapped_in)
> +               lru_add_drain();
> +
>         trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
>         return true;
>  }
> --
> 2.26.0


—
Best Regards,
Yan Zi
Yang Shi April 6, 2020, 6:29 p.m. UTC | #2
On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
> __collapse_huge_page_isolate() may fail due to extra pin in the LRU add
> pagevec. It's petty common for swapin case: we swap in pages just to
> fail due to the extra pin.
>
> Drain LRU add pagevec on sucessfull swapin.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>   mm/khugepaged.c | 5 +++++
>   1 file changed, 5 insertions(+)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index fdc10ffde1ca..57ff287caf6b 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>   	}
>   	vmf.pte--;
>   	pte_unmap(vmf.pte);
> +
> +	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
> +	if (swapped_in)
> +		lru_add_drain();

There is already lru_add_drain() called in swap readahead path, please 
see swap_vma_readahead() and swap_cluster_readahead().

The extra call to drain the lru seems unnecessary.

> +
>   	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
>   	return true;
>   }
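
The drain Yang Shi points to sits at the tail of both readahead helpers; abridged from mm/swap_state.c of this era (exact signatures may differ between kernel versions):

        /* End of swap_cluster_readahead(); swap_vma_readahead() ends the same way */
        blk_finish_plug(&plug);
        lru_add_drain();        /* Push any new pages onto the LRU now */
skip:
        return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll);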
Kirill A. Shutemov April 8, 2020, 1:05 p.m. UTC | #3
On Mon, Apr 06, 2020 at 11:29:11AM -0700, Yang Shi wrote:
> 
> 
> On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
> > __collapse_huge_page_isolate() may fail due to extra pin in the LRU add
> > pagevec. It's petty common for swapin case: we swap in pages just to
> > fail due to the extra pin.
> > 
> > Drain LRU add pagevec on sucessfull swapin.
> > 
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > ---
> >   mm/khugepaged.c | 5 +++++
> >   1 file changed, 5 insertions(+)
> > 
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index fdc10ffde1ca..57ff287caf6b 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
> >   	}
> >   	vmf.pte--;
> >   	pte_unmap(vmf.pte);
> > +
> > +	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
> > +	if (swapped_in)
> > +		lru_add_drain();
> 
> There is already lru_add_drain() called in swap readahead path, please see
> swap_vma_readahead() and swap_cluster_readahead().

But not for synchronous case. See SWP_SYNCHRONOUS_IO branch in
do_swap_page().

Maybe we should drain it in swap_readpage() or in do_swap_page() after
swap_readpage()? I donno.
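
For reference, the branch Kirill mentions looks roughly like this (abridged from do_swap_page() in mm/memory.c of this era; exact details vary between kernel versions):

        if (si->flags & SWP_SYNCHRONOUS_IO && __swap_count(entry) == 1) {
                /* skip swapcache: read the page synchronously */
                page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address);
                if (page) {
                        __SetPageLocked(page);
                        __SetPageSwapBacked(page);
                        set_page_private(page, entry.val);
                        /* the new page lands in the per-CPU LRU add pagevec... */
                        lru_cache_add_anon(page);
                        swap_readpage(page, true);
                }
        } else {
                /* ...whereas this path drains the pagevec via the readahead helpers */
                page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
                swapcache = page;
        }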
Yang Shi April 8, 2020, 6:42 p.m. UTC | #4
On 4/8/20 6:05 AM, Kirill A. Shutemov wrote:
> On Mon, Apr 06, 2020 at 11:29:11AM -0700, Yang Shi wrote:
>>
>> On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
>>> __collapse_huge_page_isolate() may fail due to extra pin in the LRU add
>>> pagevec. It's petty common for swapin case: we swap in pages just to
>>> fail due to the extra pin.
>>>
>>> Drain LRU add pagevec on sucessfull swapin.
>>>
>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>>> ---
>>>    mm/khugepaged.c | 5 +++++
>>>    1 file changed, 5 insertions(+)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index fdc10ffde1ca..57ff287caf6b 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>>>    	}
>>>    	vmf.pte--;
>>>    	pte_unmap(vmf.pte);
>>> +
>>> +	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
>>> +	if (swapped_in)
>>> +		lru_add_drain();
>> There is already lru_add_drain() called in swap readahead path, please see
>> swap_vma_readahead() and swap_cluster_readahead().
> But not for synchronous case. See SWP_SYNCHRONOUS_IO branch in
> do_swap_page().

Aha, yes. I missed the synchronous case.

>
> Maybe we should drain it in swap_readpage() or in do_swap_page() after
> swap_readpage()? I donno.

It may be better to keep it as is. Draining the lru for every page in the 
synchronous case in the do_swap_page() path doesn't sound very productive. 
Doing it in khugepaged seems acceptable. We just drain the lru cache again 
for the non-synchronous case, but the cache may already be empty, so it 
should take very little time since there is nothing to drain.

>
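
Yang Shi's cost argument matches how the drain itself is structured; abridged from mm/swap.c of this era (the per-CPU pagevec names changed in later kernels):

        void lru_add_drain_cpu(int cpu)
        {
                struct pagevec *pvec = &per_cpu(lru_add_pvec, cpu);

                /* Only take the LRU lock if something is actually queued */
                if (pagevec_count(pvec))
                        __pagevec_lru_add(pvec);

                /* ... the rotate/deactivate/lazyfree pagevecs follow the same pattern ... */
        }

        void lru_add_drain(void)
        {
                lru_add_drain_cpu(get_cpu());
                put_cpu();
        }

When every pagevec is already empty, the call amounts to a handful of per-CPU counter reads.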

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fdc10ffde1ca..57ff287caf6b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -940,6 +940,11 @@  static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	}
 	vmf.pte--;
 	pte_unmap(vmf.pte);
+
+	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
+	if (swapped_in)
+		lru_add_drain();
+
 	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
 	return true;
 }