
[linux-next,2/2] mm: khugepaged: don't have to put being freed page back to lru

Message ID: 1588200982-69492-2-git-send-email-yang.shi@linux.alibaba.com
State: New, archived
Series: [linux-next,1/2] mm: khugepaged: add exceed_max_ptes_* helpers

Commit Message

Yang Shi April 29, 2020, 10:56 p.m. UTC
After khugepaged has successfully isolated a base page and copied its
data into the collapsed THP, the base page is about to be freed.
Putting it back on the lru is not productive: vmscan might isolate the
page again, but it could never reclaim it since the page cannot be
unmapped by try_to_unmap() at all.

Since khugepaged is the last user of this page, it can be freed
directly: instead of calling putback_lru_page(), clear the active and
unevictable flags, unlock the page, and drop the refcount taken at
isolation.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/khugepaged.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

Comments

Yang Shi April 30, 2020, 12:41 a.m. UTC | #1
On 4/29/20 3:56 PM, Yang Shi wrote:
> After khugepaged has successfully isolated a base page and copied its
> data into the collapsed THP, the base page is about to be freed.
> Putting it back on the lru is not productive: vmscan might isolate the
> page again, but it could never reclaim it since the page cannot be
> unmapped by try_to_unmap() at all.
>
> Since khugepaged is the last user of this page, it can be freed
> directly: instead of calling putback_lru_page(), clear the active and
> unevictable flags, unlock the page, and drop the refcount taken at
> isolation.

Please disregard this patch. I just remembered that Kirill added
support for collapsing shared pages. If the pages are shared, they have
to be put back on the lru since they may still be mapped by other
processes. So we need to check the mapcount if we want to skip the lru.
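Something along these lines, perhaps (untested sketch, not the actual
patch; compound pages may need total_mapcount() rather than
page_mapcount()):

static void release_pte_page(struct page *page)
{
	mod_node_page_state(page_pgdat(page),
			NR_ISOLATED_ANON + page_is_file_lru(page),
			-compound_nr(page));
	/*
	 * Untested sketch: a page still mapped by another process must
	 * go back on the lru; only a page khugepaged is the last user
	 * of may be freed directly.
	 */
	if (page_mapcount(page) > 1) {
		unlock_page(page);
		putback_lru_page(page);
		return;
	}
	ClearPageActive(page);
	ClearPageUnevictable(page);
	unlock_page(page);
	/* Drop refcount from isolate */
	put_page(page);
}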

And I spotted another issue: release_pte_page() calls
mod_node_page_state() unconditionally. That was fine before, but due to
the support for collapsing shared pages we need to check whether the
last mapcount is gone.

Andrew, would you please remove this patch from the -mm tree? I will
send one or two corrected patches. Sorry for the inconvenience.

>
> [quoted patch snipped; see the Patch section below]
Yang Shi April 30, 2020, 12:47 a.m. UTC | #2
On 4/29/20 5:41 PM, Yang Shi wrote:
> On 4/29/20 3:56 PM, Yang Shi wrote:
>> [...]
>
> Please disregard this patch. I just remembered that Kirill added
> support for collapsing shared pages. If the pages are shared, they have
> to be put back on the lru since they may still be mapped by other
> processes. So we need to check the mapcount if we want to skip the lru.
>
> And I spotted another issue: release_pte_page() calls
> mod_node_page_state() unconditionally. That was fine before, but due to
> the support for collapsing shared pages we need to check whether the
> last mapcount is gone.

Hmm... the second point is false. I mixed up NR_ISOLATED_ANON with
NR_ANON_MAPPED: release_pte_page() is undoing the NR_ISOLATED_ANON
increment taken when the page was isolated, and that counter does not
depend on the mapcount, so the unconditional mod_node_page_state() is
fine after all.

>
> Andrew, would you please remove this patch from -mm tree? I will send 
> one or two rectified patches. Sorry for the inconvenience.
>
>> [quoted patch snipped; see the Patch section below]

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0c8d30b..c131a90 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -559,6 +559,17 @@ void __khugepaged_exit(struct mm_struct *mm)
 static void release_pte_page(struct page *page)
 {
 	mod_node_page_state(page_pgdat(page),
+		NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
+	ClearPageActive(page);
+	ClearPageUnevictable(page);
+	unlock_page(page);
+	/* Drop refcount from isolate */
+	put_page(page);
+}
+
+static void release_pte_page_to_lru(struct page *page)
+{
+	mod_node_page_state(page_pgdat(page),
 			NR_ISOLATED_ANON + page_is_file_lru(page),
 			-compound_nr(page));
 	unlock_page(page);
@@ -576,12 +587,12 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 		page = pte_page(pteval);
 		if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval)) &&
 				!PageCompound(page))
-			release_pte_page(page);
+			release_pte_page_to_lru(page);
 	}
 
 	list_for_each_entry_safe(page, tmp, compound_pagelist, lru) {
 		list_del(&page->lru);
-		release_pte_page(page);
+		release_pte_page_to_lru(page);
 	}
 }
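
For readability, the two helpers as they would look with this patch
applied (reconstructed from the hunks above; the tail of
release_pte_page_to_lru() falls outside the diff context and is assumed
to keep the pre-patch putback_lru_page() call):

static void release_pte_page(struct page *page)
{
	mod_node_page_state(page_pgdat(page),
		NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
	ClearPageActive(page);
	ClearPageUnevictable(page);
	unlock_page(page);
	/* Drop refcount from isolate */
	put_page(page);
}

static void release_pte_page_to_lru(struct page *page)
{
	mod_node_page_state(page_pgdat(page),
			NR_ISOLATED_ANON + page_is_file_lru(page),
			-compound_nr(page));
	unlock_page(page);
	/* Assumed from the pre-patch body (outside the hunk context) */
	putback_lru_page(page);
}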