
[v2] mm/compaction: remove unnecessary detection code.

Message ID 20241114065720.3665-1-liuq131@chinatelecom.cn (mailing list archive)
State New
Series [v2] mm/compaction: remove unnecessary detection code.

Commit Message

Qiang Liu Nov. 14, 2024, 6:57 a.m. UTC
It is impossible for the situation where blockpfn > end_pfn to arise.
The if statement here is not only unnecessary, but may also lead to
a misunderstanding that blockpfn > end_pfn could potentially happen,
so this unnecessary check should be removed.

Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>
---
 mm/compaction.c | 6 ------
 1 file changed, 6 deletions(-)

Comments

Vlastimil Babka Nov. 14, 2024, 7:44 a.m. UTC | #1
On 11/14/24 07:57, Qiang Liu wrote:
> It is impossible for the situation where blockpfn > end_pfn to arise,
> The if statement here is not only unnecessary, but may also lead to
> a misunderstanding that blockpfn > end_pfn could potentially happen.
> so these unnecessary checking code should be removed.
> 
> Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>

I see that's since 3da0272a4c7d ("mm/compaction: correctly return failure
with bogus compound_order in strict mode")

I think that commit introduced a risk of overflow due to a bogus order
(which we read in a racy way), and once blockpfn overflows it will satisfy
<= end_pfn and might e.g. end up scanning a completely different zone?

                        if (blockpfn + (1UL << order) <= end_pfn) {

                                blockpfn += (1UL << order) - 1;
                                page += (1UL << order) - 1;
                                nr_scanned += (1UL << order) - 1;
                        }

We should better add back the MAX_ORDER sanity check?
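
Something along these lines (untested, just to illustrate; MAX_PAGE_ORDER is
the current spelling of that limit), so a racy, bogus compound_order() read is
never trusted for the skip regardless of how the addition behaves:

                        if (likely(order <= MAX_PAGE_ORDER) &&
                            blockpfn + (1UL << order) <= end_pfn) {

                                blockpfn += (1UL << order) - 1;
                                page += (1UL << order) - 1;
                                nr_scanned += (1UL << order) - 1;
                        }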

> ---
>  mm/compaction.c | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index a2b16b08cbbf..baeda7132252 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -682,12 +682,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  	if (locked)
>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>  
> -	/*
> -	 * Be careful to not go outside of the pageblock.
> -	 */
> -	if (unlikely(blockpfn > end_pfn))
> -		blockpfn = end_pfn;
> -
>  	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
>  					nr_scanned, total_isolated);
>
Vlastimil Babka Nov. 14, 2024, 7:52 a.m. UTC | #2
On 11/14/24 08:44, Vlastimil Babka wrote:
> On 11/14/24 07:57, Qiang Liu wrote:
>> It is impossible for the situation where blockpfn > end_pfn to arise,
>> The if statement here is not only unnecessary, but may also lead to
>> a misunderstanding that blockpfn > end_pfn could potentially happen.
>> so these unnecessary checking code should be removed.
>> 
>> Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>
> 
> I see that's since 3da0272a4c7d ("mm/compaction: correctly return failure
> with bogus compound_order in strict mode")

Hm but we still have:

for (; blockpfn < end_pfn; blockpfn += stride, page += stride) {

and this advance by stride can mix up with advance by isolated, initial pfn
might not be aligned... I don't see any guarantee that the for loop will
exit with exactly blockpfn == end_pfn, it may easily advance beyond end_pfn
so we shouldn't remove the check?

> I think that commit introduced a risk of overflow due to a bogus order
> (which we read in a racy way), and once blockpfn overflows it will satisfy
> <= end_pfn and might e.g. end up scanning a completely different zone?
> 
>                         if (blockpfn + (1UL << order) <= end_pfn) {
> 
>                                 blockpfn += (1UL << order) - 1;
>                                 page += (1UL << order) - 1;
>                                 nr_scanned += (1UL << order) - 1;
>                         }
> 
> We should better add back the MAX_ORDER sanity check?
> 
>> ---
>>  mm/compaction.c | 6 ------
>>  1 file changed, 6 deletions(-)
>> 
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index a2b16b08cbbf..baeda7132252 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -682,12 +682,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>  	if (locked)
>>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>>  
>> -	/*
>> -	 * Be careful to not go outside of the pageblock.
>> -	 */
>> -	if (unlikely(blockpfn > end_pfn))
>> -		blockpfn = end_pfn;
>> -
>>  	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
>>  					nr_scanned, total_isolated);
>>  
> 
>
Kemeng Shi Nov. 14, 2024, 9:21 a.m. UTC | #3
Hello
on 11/14/2024 3:44 PM, Vlastimil Babka wrote:
> On 11/14/24 07:57, Qiang Liu wrote:
>> It is impossible for the situation where blockpfn > end_pfn to arise,
>> The if statement here is not only unnecessary, but may also lead to
>> a misunderstanding that blockpfn > end_pfn could potentially happen.
>> so these unnecessary checking code should be removed.
>>
>> Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>
> 
As stride could be 32, if isolate_freepages_range() is called with start_pfn not
aligned to 32, we could bail out of the loop with blockpfn > end_pfn in
isolate_freepages_block(). Please correct me if I missed something.
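
A tiny userspace illustration of just that loop arithmetic (not kernel code,
only the stride math with an unaligned start; the concrete numbers are made up):

#include <stdio.h>

int main(void)
{
	unsigned long blockpfn = 100;	/* start_pfn not aligned to 32 */
	unsigned long end_pfn = 128;
	unsigned long stride = 32;

	/* mirrors: for (; blockpfn < end_pfn; blockpfn += stride, page += stride) */
	for (; blockpfn < end_pfn; blockpfn += stride)
		;

	/* prints blockpfn=132 end_pfn=128, i.e. blockpfn > end_pfn on exit */
	printf("blockpfn=%lu end_pfn=%lu\n", blockpfn, end_pfn);
	return 0;
}
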
> I see that's since 3da0272a4c7d ("mm/compaction: correctly return failure
> with bogus compound_order in strict mode")
> 
> I think that commit introduced a risk of overflow due to a bogus order
> (which we read in a racy way), and once blockpfn overflows it will satisfy
> <= end_pfn and might e.g. end up scanning a completely different zone?
> 
>                         if (blockpfn + (1UL << order) <= end_pfn) {
> 
>                                 blockpfn += (1UL << order) - 1;
>                                 page += (1UL << order) - 1;
>                                 nr_scanned += (1UL << order) - 1;
>                         }
> 
> We should better add back the MAX_ORDER sanity check?
As the order of a pageblock is <= MAX_ORDER, if a bogus order is > MAX_ORDER, then
blockpfn + (1UL << order) must be > end_pfn, so I think the sanity check is
not needed.

Thanks.
Kemeng
> 
>> ---
>>  mm/compaction.c | 6 ------
>>  1 file changed, 6 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index a2b16b08cbbf..baeda7132252 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -682,12 +682,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>  	if (locked)
>>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>>  
>> -	/*
>> -	 * Be careful to not go outside of the pageblock.
>> -	 */
>> -	if (unlikely(blockpfn > end_pfn))
>> -		blockpfn = end_pfn;
>> -
>>  	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
>>  					nr_scanned, total_isolated);
>>  
>
Vlastimil Babka Nov. 14, 2024, 9:37 a.m. UTC | #4
On 11/14/24 10:21, Kemeng Shi wrote:
> 
> Hello
> on 11/14/2024 3:44 PM, Vlastimil Babka wrote:
>> On 11/14/24 07:57, Qiang Liu wrote:
>>> It is impossible for the situation where blockpfn > end_pfn to arise,
>>> The if statement here is not only unnecessary, but may also lead to
>>> a misunderstanding that blockpfn > end_pfn could potentially happen.
>>> so these unnecessary checking code should be removed.
>>>
>>> Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>
>> 
> As stride could 32, if isolate_freepages_range() is called with start_pfn not
> aligned with 32, we could bail out look with blockpfn > end_pfn in
> isolate_freepages_block(). Please correct if I miss something.
>> I see that's since 3da0272a4c7d ("mm/compaction: correctly return failure
>> with bogus compound_order in strict mode")
>> 
>> I think that commit introduced a risk of overflow due to a bogus order
>> (which we read in a racy way), and once blockpfn overflows it will satisfy
>> <= end_pfn and might e.g. end up scanning a completely different zone?
>> 
>>                         if (blockpfn + (1UL << order) <= end_pfn) {
>> 
>>                                 blockpfn += (1UL << order) - 1;
>>                                 page += (1UL << order) - 1;
>>                                 nr_scanned += (1UL << order) - 1;
>>                         }
>> 
>> We should better add back the MAX_ORDER sanity check?
> As order of pageblock is <= MAX_ORDER, if bogus order is > MAX_ORDER, then
> blockpfn + (1UL << order) must be > end_pfn, I think the sanity check is
> not needed.
Hm I guess we could only overflow with blockpfn being initially >= 1UL << 63
and reading a bogus order of 63.
So it can't realistically happen.
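
Spelling out the wrap-around arithmetic (userspace illustration, assuming a
64-bit unsigned long; the values are deliberately extreme):

#include <stdio.h>

int main(void)
{
	unsigned long end_pfn = 0x100000;	/* an ordinary pageblock end */
	unsigned long blockpfn = 1UL << 63;	/* no real machine has such a pfn */
	unsigned int order = 63;		/* bogus value from the racy read */

	/* 2^63 + 2^63 wraps to 0, which is <= end_pfn, so the skip is taken */
	if (blockpfn + (1UL << order) <= end_pfn)
		printf("wrapped: bogus skip would be taken\n");
	return 0;
}

Anything less extreme than that cannot wrap, so the overflow concern stays
theoretical.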

> Thanks.
> Kemeng
>> 
>>> ---
>>>  mm/compaction.c | 6 ------
>>>  1 file changed, 6 deletions(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index a2b16b08cbbf..baeda7132252 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -682,12 +682,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>>  	if (locked)
>>>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>>>  
>>> -	/*
>>> -	 * Be careful to not go outside of the pageblock.
>>> -	 */
>>> -	if (unlikely(blockpfn > end_pfn))
>>> -		blockpfn = end_pfn;
>>> -
>>>  	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
>>>  					nr_scanned, total_isolated);
>>>  
>> 
>
Baolin Wang Nov. 14, 2024, 10:06 a.m. UTC | #5
On 2024/11/14 15:52, Vlastimil Babka wrote:
> On 11/14/24 08:44, Vlastimil Babka wrote:
>> On 11/14/24 07:57, Qiang Liu wrote:
>>> It is impossible for the situation where blockpfn > end_pfn to arise,
>>> The if statement here is not only unnecessary, but may also lead to
>>> a misunderstanding that blockpfn > end_pfn could potentially happen.
>>> so these unnecessary checking code should be removed.
>>>
>>> Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>
>>
>> I see that's since 3da0272a4c7d ("mm/compaction: correctly return failure
>> with bogus compound_order in strict mode")
> 
> Hm but we still have:
> 
> for (; blockpfn < end_pfn; blockpfn += stride, page += stride) {
> 
> and this advance by stride can mix up with advance by isolated, initial pfn
> might not be aligned... I don't see any guarantee that the for loop will
> exit with exactly blockpfn == end_pfn, it may easily advance beyond end_pfn
> so we shouldn't remove the check?

Agreed.

>> I think that commit introduced a risk of overflow due to a bogus order
>> (which we read in a racy way), and once blockpfn overflows it will satisfy
>> <= end_pfn and might e.g. end up scanning a completely different zone?
>>
>>                          if (blockpfn + (1UL << order) <= end_pfn) {
>>
>>                                  blockpfn += (1UL << order) - 1;
>>                                  page += (1UL << order) - 1;
>>                                  nr_scanned += (1UL << order) - 1;
>>                          }
>>
>> We should better add back the MAX_ORDER sanity check?
>>
>>> ---
>>>   mm/compaction.c | 6 ------
>>>   1 file changed, 6 deletions(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index a2b16b08cbbf..baeda7132252 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -682,12 +682,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>>   	if (locked)
>>>   		spin_unlock_irqrestore(&cc->zone->lock, flags);
>>>   
>>> -	/*
>>> -	 * Be careful to not go outside of the pageblock.
>>> -	 */
>>> -	if (unlikely(blockpfn > end_pfn))
>>> -		blockpfn = end_pfn;
>>> -
>>>   	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
>>>   					nr_scanned, total_isolated);
>>>   
>>
>>

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index a2b16b08cbbf..baeda7132252 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -682,12 +682,6 @@  static unsigned long isolate_freepages_block(struct compact_control *cc,
 	if (locked)
 		spin_unlock_irqrestore(&cc->zone->lock, flags);
 
-	/*
-	 * Be careful to not go outside of the pageblock.
-	 */
-	if (unlikely(blockpfn > end_pfn))
-		blockpfn = end_pfn;
-
 	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
 					nr_scanned, total_isolated);