
[08/10] mm/hugetlb: return non-isolated page in the loop instead of break and check

Message ID 20200807091251.12129-9-richard.weiyang@linux.alibaba.com (mailing list archive)
State New, archived
Series mm/hugetlb: code refine and simplification

Commit Message

Wei Yang Aug. 7, 2020, 9:12 a.m. UTC
Function dequeue_huge_page_node_exact() iterates the free list and
returns the first non-isolated page it finds.

Instead of breaking out of the loop and then checking the loop variable,
we can return from inside the loop directly. This avoids a redundant
check.

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 mm/hugetlb.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

Comments

Baoquan He Aug. 7, 2020, 1:09 p.m. UTC | #1
On 08/07/20 at 05:12pm, Wei Yang wrote:
> Function dequeue_huge_page_node_exact() iterates the free list and
> returns the first non-isolated page it finds.
> 
> Instead of breaking out of the loop and then checking the loop variable,
> we can return from inside the loop directly. This avoids a redundant
> check.
> 
> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> ---
>  mm/hugetlb.c | 26 ++++++++++++--------------
>  1 file changed, 12 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b8e844911b5a..9473eb6800e9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1035,20 +1035,18 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>  {
>  	struct page *page;
>  
> -	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
> -		if (!PageHWPoison(page))
> -			break;

I don't see how it can reduce a redundant check; it just looks like two
different styles.

> -	/*
> -	 * if 'non-isolated free hugepage' not found on the list,
> -	 * the allocation fails.

But the above code comment seems stale: the code checks for HWPoison pages
directly, not the old isolated-page condition.

> -	 */
> -	if (&h->hugepage_freelists[nid] == &page->lru)
> -		return NULL;
> -	list_move(&page->lru, &h->hugepage_activelist);
> -	set_page_refcounted(page);
> -	h->free_huge_pages--;
> -	h->free_huge_pages_node[nid]--;
> -	return page;
> +	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> +		if (PageHWPoison(page))
> +			continue;
> +
> +		list_move(&page->lru, &h->hugepage_activelist);
> +		set_page_refcounted(page);
> +		h->free_huge_pages--;
> +		h->free_huge_pages_node[nid]--;
> +		return page;
> +	}
> +
> +	return NULL;
>  }
>  
>  static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask, int nid,
> -- 
> 2.20.1 (Apple Git-117)
> 
>
Wei Yang Aug. 7, 2020, 2:32 p.m. UTC | #2
On Fri, Aug 07, 2020 at 09:09:31PM +0800, Baoquan He wrote:
>On 08/07/20 at 05:12pm, Wei Yang wrote:
>> Function dequeue_huge_page_node_exact() iterates the free list and
>> returns the first non-isolated page it finds.
>> 
>> Instead of breaking out of the loop and then checking the loop variable,
>> we can return from inside the loop directly. This avoids a redundant
>> check.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>> ---
>>  mm/hugetlb.c | 26 ++++++++++++--------------
>>  1 file changed, 12 insertions(+), 14 deletions(-)
>> 
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index b8e844911b5a..9473eb6800e9 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1035,20 +1035,18 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>>  {
>>  	struct page *page;
>>  
>> -	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
>> -		if (!PageHWPoison(page))
>> -			break;
>
>I don't see how it can reduce a redundant check; it just looks like two
>different styles.
>

list_for_each_entry() stops by checking whether the iterator has reached the list head.
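
For reference, list_for_each_entry() in include/linux/list.h expands to
roughly the following (the helper macros vary a bit across kernel versions):

#define list_for_each_entry(pos, head, member)				\
	for (pos = list_first_entry(head, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = list_next_entry(pos, member))

So when the loop runs to completion without a break, 'pos' ends up aliasing
the list head rather than pointing at a real entry.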

>> -	/*
>> -	 * if 'non-isolated free hugepage' not found on the list,
>> -	 * the allocation fails.
>
>But the above code comment seems stale: the code checks for HWPoison pages
>directly, not the old isolated-page condition.
>
>> -	 */
>> -	if (&h->hugepage_freelists[nid] == &page->lru)
>> -		return NULL;

And here the check is effectively done again when we iterate the whole list
without finding a usable page.

By moving the code into the loop body, we can avoid this extra check; a
stand-alone sketch of the two styles follows below.
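
For illustration only, here is a minimal stand-alone sketch of the two
styles (not kernel code: the struct and function names are made up, and a
tiny hand-rolled circular list stands in for <linux/list.h>):

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next; };

struct node {
	int poisoned;		/* stands in for PageHWPoison() */
	struct list_head lru;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Old style: break out of the loop, then compare the cursor to the head. */
static struct node *dequeue_break_style(struct list_head *head)
{
	struct list_head *pos;
	struct node *n = NULL;

	for (pos = head->next; pos != head; pos = pos->next) {
		n = container_of(pos, struct node, lru);
		if (!n->poisoned)
			break;
	}
	/* If the loop ran to completion, pos points back at the head, so  */
	/* this sentinel check is needed -- it is what the patch removes.  */
	if (pos == head)
		return NULL;
	return n;
}

/* New style: return the entry from inside the loop; no sentinel check. */
static struct node *dequeue_return_style(struct list_head *head)
{
	struct list_head *pos;

	for (pos = head->next; pos != head; pos = pos->next) {
		struct node *n = container_of(pos, struct node, lru);

		if (n->poisoned)
			continue;
		return n;
	}
	return NULL;
}

int main(void)
{
	struct node a = { .poisoned = 1 }, b = { .poisoned = 0 };
	struct list_head head;

	head.next = &a.lru;	/* head -> a (poisoned) -> b -> head */
	a.lru.next = &b.lru;
	b.lru.next = &head;

	printf("break style : found %p\n", (void *)dequeue_break_style(&head));
	printf("return style: found %p\n", (void *)dequeue_return_style(&head));
	return 0;
}

Both helpers find the same non-poisoned entry; the only difference is that
the second one never has to re-test the cursor after the loop.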

>> -	list_move(&page->lru, &h->hugepage_activelist);
>> -	set_page_refcounted(page);
>> -	h->free_huge_pages--;
>> -	h->free_huge_pages_node[nid]--;
>> -	return page;
>> +	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
>> +		if (PageHWPoison(page))
>> +			continue;
>> +
>> +		list_move(&page->lru, &h->hugepage_activelist);
>> +		set_page_refcounted(page);
>> +		h->free_huge_pages--;
>> +		h->free_huge_pages_node[nid]--;
>> +		return page;
>> +	}
>> +
>> +	return NULL;
>>  }
>>  
>>  static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask, int nid,
>> -- 
>> 2.20.1 (Apple Git-117)
>> 
>>
Mike Kravetz Aug. 10, 2020, 10:55 p.m. UTC | #3
On 8/7/20 2:12 AM, Wei Yang wrote:
> Function dequeue_huge_page_node_exact() iterates the free list and
> returns the first non-isolated page it finds.
> 
> Instead of breaking out of the loop and then checking the loop variable,
> we can return from inside the loop directly. This avoids a redundant
> check.
> 
> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>

I agree with Baoquan He in that this is more of a style change.  Certainly
there is the potential to avoid an extra check and that is always good.

The real value in this patch (IMO) is removal of the stale comment.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b8e844911b5a..9473eb6800e9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1035,20 +1035,18 @@  static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
 
-	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
-		if (!PageHWPoison(page))
-			break;
-	/*
-	 * if 'non-isolated free hugepage' not found on the list,
-	 * the allocation fails.
-	 */
-	if (&h->hugepage_freelists[nid] == &page->lru)
-		return NULL;
-	list_move(&page->lru, &h->hugepage_activelist);
-	set_page_refcounted(page);
-	h->free_huge_pages--;
-	h->free_huge_pages_node[nid]--;
-	return page;
+	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
+		if (PageHWPoison(page))
+			continue;
+
+		list_move(&page->lru, &h->hugepage_activelist);
+		set_page_refcounted(page);
+		h->free_huge_pages--;
+		h->free_huge_pages_node[nid]--;
+		return page;
+	}
+
+	return NULL;
 }
 
 static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask, int nid,