
mm/huge_memory: remove unneeded local variable follflags

Message ID 20220310131253.30970-1-linmiaohe@huawei.com (mailing list archive)
State New
Series mm/huge_memory: remove unneeded local variable follflags

Commit Message

Miaohe Lin March 10, 2022, 1:12 p.m. UTC
We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
the code a bit.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Comments

Anshuman Khandual March 11, 2022, 4:51 a.m. UTC | #1
Hi Miaohe,

On 3/10/22 18:42, Miaohe Lin wrote:
> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
> the code a bit.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/huge_memory.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3557aabe86fe..418d077da246 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  	 */
>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>  		struct vm_area_struct *vma = find_vma(mm, addr);
> -		unsigned int follflags;
>  		struct page *page;
>  
>  		if (!vma || addr < vma->vm_start)
> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  		}
>  
>  		/* FOLL_DUMP to ignore special (like zero) pages */
> -		follflags = FOLL_GET | FOLL_DUMP;
> -		page = follow_page(vma, addr, follflags);
> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>  
>  		if (IS_ERR(page))
>  			continue;

LGTM, but there is another similar instance in add_page_for_migration()
inside mm/migrate.c, requiring this exact clean up.

Hence with that change in place.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
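For reference, the analogous cleanup in add_page_for_migration() would look roughly like the following. This is a sketch against the v5.17-era mm/migrate.c, not the actual submitted patch:

```diff
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
-	unsigned int follflags;
 	struct page *page;
 ...
 	/* FOLL_DUMP to ignore special (like zero) pages */
-	follflags = FOLL_GET | FOLL_DUMP;
-	page = follow_page(vma, addr, follflags);
+	page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
```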
Miaohe Lin March 11, 2022, 6:26 a.m. UTC | #2
On 2022/3/11 12:51, Anshuman Khandual wrote:
> Hi Miaohe,
> 
> On 3/10/22 18:42, Miaohe Lin wrote:
>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>> the code a bit.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/huge_memory.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 3557aabe86fe..418d077da246 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>  	 */
>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>> -		unsigned int follflags;
>>  		struct page *page;
>>  
>>  		if (!vma || addr < vma->vm_start)
>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>  		}
>>  
>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>> -		follflags = FOLL_GET | FOLL_DUMP;
>> -		page = follow_page(vma, addr, follflags);
>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>  
>>  		if (IS_ERR(page))
>>  			continue;
> 
> LGTM, but there is another similar instance in add_page_for_migration()
> inside mm/migrate.c, requiring this exact clean up.
> 

Thanks for the comment. That similar case is handled in my previous patch series[1]
aimed at migration cleanup and fixup. It might be more suitable to do that
cleanup in that specialized series?

[1]:https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@huawei.com/

> Hence with that change in place.
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

Thanks again.

Anshuman Khandual March 11, 2022, 6:39 a.m. UTC | #3
On 3/11/22 11:56, Miaohe Lin wrote:
> On 2022/3/11 12:51, Anshuman Khandual wrote:
>> Hi Miaohe,
>>
>> On 3/10/22 18:42, Miaohe Lin wrote:
>>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>>> the code a bit.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>>  mm/huge_memory.c | 4 +---
>>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 3557aabe86fe..418d077da246 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>  	 */
>>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>>> -		unsigned int follflags;
>>>  		struct page *page;
>>>  
>>>  		if (!vma || addr < vma->vm_start)
>>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>  		}
>>>  
>>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>>> -		follflags = FOLL_GET | FOLL_DUMP;
>>> -		page = follow_page(vma, addr, follflags);
>>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>>  
>>>  		if (IS_ERR(page))
>>>  			continue;
>>
>> LGTM, but there is another similar instance in add_page_for_migration()
>> inside mm/migrate.c, requiring this exact clean up.
>>
> 
> Thanks for the comment. That similar case is handled in my previous patch series[1]
> aimed at migration cleanup and fixup. It might be more suitable to do that
> cleanup in that specialized series?

Both of these similar scenarios, i.e. the one proposed here and the other one in
the migration series, should be folded into a single separate patch, either here
or in the series itself.

> 
> [1]:https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@huawei.com/
> 
>> Hence with that change in place.
>>
>> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> 
> Thanks again.
> 
Miaohe Lin March 11, 2022, 7:01 a.m. UTC | #4
On 2022/3/11 14:39, Anshuman Khandual wrote:
> 
> 
> On 3/11/22 11:56, Miaohe Lin wrote:
>> On 2022/3/11 12:51, Anshuman Khandual wrote:
>>> Hi Miaohe,
>>>
>>> On 3/10/22 18:42, Miaohe Lin wrote:
>>>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>>>> the code a bit.
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  mm/huge_memory.c | 4 +---
>>>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 3557aabe86fe..418d077da246 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>>  	 */
>>>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>>>> -		unsigned int follflags;
>>>>  		struct page *page;
>>>>  
>>>>  		if (!vma || addr < vma->vm_start)
>>>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>>  		}
>>>>  
>>>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>>>> -		follflags = FOLL_GET | FOLL_DUMP;
>>>> -		page = follow_page(vma, addr, follflags);
>>>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>>>  
>>>>  		if (IS_ERR(page))
>>>>  			continue;
>>>
>>> LGTM, but there is another similar instance in add_page_for_migration()
>>> inside mm/migrate.c, requiring this exact clean up.
>>>
>>
>> Thanks for the comment. That similar case is handled in my previous patch series[1]
>> aimed at migration cleanup and fixup. It might be more suitable to do that
>> cleanup in that specialized series?
> 
> Both of these similar scenarios, i.e. the one proposed here and the other one in
> the migration series, should be folded into a single separate patch, either here
> or in the series itself.

Looks fine to me. Will do. Thanks.

> 
>>
>> [1]:https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@huawei.com/
>>
>>> Hence with that change in place.
>>>
>>> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
>>
>> Thanks again.
>>
David Hildenbrand March 11, 2022, 9:56 a.m. UTC | #5
On 10.03.22 14:12, Miaohe Lin wrote:
> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
> the code a bit.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/huge_memory.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3557aabe86fe..418d077da246 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  	 */
>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>  		struct vm_area_struct *vma = find_vma(mm, addr);
> -		unsigned int follflags;
>  		struct page *page;
>  
>  		if (!vma || addr < vma->vm_start)
> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  		}
>  
>  		/* FOLL_DUMP to ignore special (like zero) pages */
> -		follflags = FOLL_GET | FOLL_DUMP;
> -		page = follow_page(vma, addr, follflags);
> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>  
>  		if (IS_ERR(page))
>  			continue;

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3557aabe86fe..418d077da246 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2838,7 +2838,6 @@  static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	 */
 	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
 		struct vm_area_struct *vma = find_vma(mm, addr);
-		unsigned int follflags;
 		struct page *page;
 
 		if (!vma || addr < vma->vm_start)
@@ -2851,8 +2850,7 @@  static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		}
 
 		/* FOLL_DUMP to ignore special (like zero) pages */
-		follflags = FOLL_GET | FOLL_DUMP;
-		page = follow_page(vma, addr, follflags);
+		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
 
 		if (IS_ERR(page))
 			continue;