
mm, hwpoison, hugetlb: Free hwpoison huge page to list tail and dissolve hwpoison huge page first

Message ID 20220802100711.2425644-1-luofei@unicloud.com (mailing list archive)
State New
Series mm, hwpoison, hugetlb: Free hwpoison huge page to list tail and dissolve hwpoison huge page first

Commit Message

luofei Aug. 2, 2022, 10:07 a.m. UTC
If hwpoison huge pages are freed to the tail of hugepage_freelists,
the dequeue loop can exit as soon as a hwpoison huge page is
encountered, which effectively reduces repeated traversal of hwpoison
huge pages. Meanwhile, when free huge pages are released to the lower
level allocators, releasing the hwpoison ones first improves the
effective utilization of huge pages.

Signed-off-by: luofei <luofei@unicloud.com>
---
 mm/hugetlb.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
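
The ordering idea can be illustrated outside the kernel. Below is a
minimal userspace toy (all names such as toy_page, enqueue() and
dequeue() are made up for illustration; this is not kernel code): it
keeps poisoned entries at the tail of a circular doubly linked list so
a forward scan can stop at the first poisoned entry instead of skipping
over it on every allocation.

#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	bool hwpoison;
	struct toy_page *prev, *next;
};

/* Circular list head, analogous to one hugepage_freelists[] entry. */
static struct toy_page free_list = { .prev = &free_list, .next = &free_list };

/* Healthy pages go to the head, poisoned pages to the tail. */
static void enqueue(struct toy_page *p)
{
	struct toy_page *at = p->hwpoison ? free_list.prev : &free_list;

	p->prev = at;
	p->next = at->next;
	at->next->prev = p;
	at->next = p;
}

/* The first poisoned entry means no healthy page remains: stop there. */
static struct toy_page *dequeue(void)
{
	struct toy_page *p;

	for (p = free_list.next; p != &free_list; p = p->next) {
		if (p->hwpoison)
			break;
		p->prev->next = p->next;
		p->next->prev = p->prev;
		return p;
	}
	return NULL;
}

int main(void)
{
	struct toy_page good = { .hwpoison = false };
	struct toy_page bad = { .hwpoison = true };

	enqueue(&bad);
	enqueue(&good);
	printf("dequeued healthy page: %d\n", dequeue() == &good);
	printf("only poisoned page left: %d\n", dequeue() == NULL);
	return 0;
}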

Comments

Mike Kravetz Aug. 2, 2022, 4:58 p.m. UTC | #1
On 08/02/22 06:07, luofei wrote:
> If hwpoison huge pages are freed to the tail of hugepage_freelists,
> the dequeue loop can exit as soon as a hwpoison huge page is
> encountered, which effectively reduces repeated traversal of hwpoison
> huge pages. Meanwhile, when free huge pages are released to the lower
> level allocators, releasing the hwpoison ones first improves the
> effective utilization of huge pages.

In general, I think this is a good idea.  Although it seems that with
recent changes to the hugetlb poisoning code, we are even less likely
to have a poisoned page on the hugetlb free lists.

Adding Naoya and Miaohe as they have been looking at page poison of hugetlb
pages recently.

> Signed-off-by: luofei <luofei@unicloud.com>
> ---
>  mm/hugetlb.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 28516881a1b2..ca72220eedd9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1116,7 +1116,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
>  	lockdep_assert_held(&hugetlb_lock);
>  	VM_BUG_ON_PAGE(page_count(page), page);
>  
> -	list_move(&page->lru, &h->hugepage_freelists[nid]);
> +	if (unlikely(PageHWPoison(page)))
> +		list_move_tail(&page->lru, &h->hugepage_freelists[nid]);
> +	else
> +		list_move(&page->lru, &h->hugepage_freelists[nid]);
>  	h->free_huge_pages++;
>  	h->free_huge_pages_node[nid]++;
>  	SetHPageFreed(page);
> @@ -1133,7 +1136,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>  			continue;
>  
>  		if (PageHWPoison(page))
> -			continue;
> +			break;

IIRC, it is 'possible' to unpoison a page via the debug/testing interfaces.
If so, then we could end up with free unpoisoned page(s) at the end of
the list that would never be used because we quit when encountering a
poisoned page.

Naoya and Miaohe would know for sure.

Same possible issue in demote_pool_huge_page().
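
A hypothetical sequence showing how that could happen (P1/P2 are
illustrative names, '*' marks PageHWPoison):

	P1 poisoned, moved to tail of the free list    ..., P1*
	P2 poisoned, enqueued after it                 ..., P1*, P2*
	P2 unpoisoned via the debug interface          ..., P1*, P2

	dequeue_huge_page_node_exact() walks from the head, hits P1* and
	'break's, so the now-healthy P2 is never handed out.
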
Miaohe Lin Aug. 4, 2022, 2:08 a.m. UTC | #2
On 2022/8/3 0:58, Mike Kravetz wrote:
> On 08/02/22 06:07, luofei wrote:
>> If hwpoison huge pages are freed to the tail of hugepage_freelists,
>> the dequeue loop can exit as soon as a hwpoison huge page is
>> encountered, which effectively reduces repeated traversal of hwpoison
>> huge pages. Meanwhile, when free huge pages are released to the lower
>> level allocators, releasing the hwpoison ones first improves the
>> effective utilization of huge pages.
> 
> In general, I think this is a good idea.  Although it seems that with
> recent changes to the hugetlb poisoning code, we are even less likely
> to have a poisoned page on the hugetlb free lists.
> 
> Adding Naoya and Miaohe as they have been looking at page poison of hugetlb
> pages recently.
> 
>> Signed-off-by: luofei <luofei@unicloud.com>
>> ---
>>  mm/hugetlb.c | 13 ++++++++-----
>>  1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 28516881a1b2..ca72220eedd9 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1116,7 +1116,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
>>  	lockdep_assert_held(&hugetlb_lock);
>>  	VM_BUG_ON_PAGE(page_count(page), page);
>>  
>> -	list_move(&page->lru, &h->hugepage_freelists[nid]);
>> +	if (unlikely(PageHWPoison(page)))
>> +		list_move_tail(&page->lru, &h->hugepage_freelists[nid]);
>> +	else
>> +		list_move(&page->lru, &h->hugepage_freelists[nid]);
>>  	h->free_huge_pages++;
>>  	h->free_huge_pages_node[nid]++;
>>  	SetHPageFreed(page);
>> @@ -1133,7 +1136,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>>  			continue;
>>  
>>  		if (PageHWPoison(page))
>> -			continue;
>> +			break;
> 
> IIRC, it is 'possible' to unpoison a page via the debug/testing interfaces.
> If so, then we could end up with free unpoisoned page(s) at the end of
> the list that would never be used because we quit when encountering a
> poisoned page.

IIUC, the above scenario can actually happen. What's more, dissolve_free_huge_page() might fail after a hugetlb
page is hwpoisoned, e.g. because all hugetlb pages are reserved. In that case, the hwpoisoned hugetlb page
is not moved to the tail of hugepage_freelists and thus causes problems.

Thanks both.

> 
> Naoya and Miaohe would know for sure.
> 
> Same possible issue in demote_pool_huge_page().
>
luofei Aug. 4, 2022, 5:32 a.m. UTC | #3
>>> Signed-off-by: luofei <luofei@unicloud.com>
>>> ---
>>>  mm/hugetlb.c | 13 ++++++++-----
>>>  1 file changed, 8 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 28516881a1b2..ca72220eedd9 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -1116,7 +1116,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
>>>       lockdep_assert_held(&hugetlb_lock);
>>>       VM_BUG_ON_PAGE(page_count(page), page);
>>>
>>> -    list_move(&page->lru, &h->hugepage_freelists[nid]);
>>> +    if (unlikely(PageHWPoison(page)))
>>> +            list_move_tail(&page->lru, &h->hugepage_freelists[nid]);
>>> +    else
>>> +            list_move(&page->lru, &h->hugepage_freelists[nid]);
>>>       h->free_huge_pages++;
>>>       h->free_huge_pages_node[nid]++;
>>>       SetHPageFreed(page);
>>> @@ -1133,7 +1136,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>>>                       continue;
>>>
>>>               if (PageHWPoison(page))
>>> -                    continue;
>>> +                    break;
>>
>> IIRC, it is 'possible' to unpoison a page via the debug/testing interfaces.
>> If so, then we could end up with free unpoisoned page(s) at the end of
>> the list that would never be used because we quit when encountering a
>> poisoned page.
>
>IIUC, the above scenario can actually happen. What's more, dissolve_free_huge_page() might fail after a hugetlb
>page is hwpoisoned, e.g. because all hugetlb pages are reserved. In that case, the hwpoisoned hugetlb page
>is not moved to the tail of hugepage_freelists and thus causes problems.

Yes, both cases could happen.
I think the key problem is the case where the huge page is already on
the free list and its hwpoison flag changes there (e.g. via unpoison or
an MCE event). Once that handling is done, if the huge page is still on
the free list, it should be deleted and reinserted at the correct
position, as in the sketch below.
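
A minimal sketch of that re-enqueue idea, assuming it would be called
from the poison/unpoison paths under hugetlb_lock (the helper name and
the call sites are hypothetical, not part of this patch):

/*
 * Hypothetical helper: reposition a free huge page after its hwpoison
 * state changed while it was sitting on hugepage_freelists.
 */
static void requeue_free_huge_page(struct hstate *h, struct page *page)
{
	int nid = page_to_nid(page);

	lockdep_assert_held(&hugetlb_lock);

	if (!HPageFreed(page))
		return;		/* only pages already on a free list */

	list_del(&page->lru);
	if (PageHWPoison(page))
		list_add_tail(&page->lru, &h->hugepage_freelists[nid]);
	else
		list_add(&page->lru, &h->hugepage_freelists[nid]);
}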

If a hwpoisoned huge page can still end up on the free list after the
recent changes to the hugetlb poisoning code, I will resubmit a new
patch.

Thanks.

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 28516881a1b2..ca72220eedd9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1116,7 +1116,10 @@  static void enqueue_huge_page(struct hstate *h, struct page *page)
 	lockdep_assert_held(&hugetlb_lock);
 	VM_BUG_ON_PAGE(page_count(page), page);
 
-	list_move(&page->lru, &h->hugepage_freelists[nid]);
+	if (unlikely(PageHWPoison(page)))
+		list_move_tail(&page->lru, &h->hugepage_freelists[nid]);
+	else
+		list_move(&page->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
 	SetHPageFreed(page);
@@ -1133,7 +1136,7 @@  static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 			continue;
 
 		if (PageHWPoison(page))
-			continue;
+			break;
 
 		list_move(&page->lru, &h->hugepage_activelist);
 		set_page_refcounted(page);
@@ -2045,7 +2048,7 @@  static struct page *remove_pool_huge_page(struct hstate *h,
 		 */
 		if ((!acct_surplus || h->surplus_huge_pages_node[node]) &&
 		    !list_empty(&h->hugepage_freelists[node])) {
-			page = list_entry(h->hugepage_freelists[node].next,
+			page = list_entry(h->hugepage_freelists[node].prev,
 					  struct page, lru);
 			remove_hugetlb_page(h, page, acct_surplus);
 			break;
@@ -3210,7 +3213,7 @@  static void try_to_free_low(struct hstate *h, unsigned long count,
 	for_each_node_mask(i, *nodes_allowed) {
 		struct page *page, *next;
 		struct list_head *freel = &h->hugepage_freelists[i];
-		list_for_each_entry_safe(page, next, freel, lru) {
+		list_for_each_entry_safe_reverse(page, next, freel, lru) {
 			if (count >= h->nr_huge_pages)
 				goto out;
 			if (PageHighMem(page))
@@ -3494,7 +3497,7 @@  static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
 		list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
 			if (PageHWPoison(page))
-				continue;
+				break;
 
 			return demote_free_huge_page(h, page);
 		}