[v3,0/6] Fixes and cleanups to compaction

Message ID 20230901155141.249860-1-shikemeng@huaweicloud.com

Message

Kemeng Shi Sept. 1, 2023, 3:51 p.m. UTC
Hi all, this is another series of fixes and cleanups to compaction.
Patch 1-2 fix and clean up the freepage list operations.
Patch 3-4 fix and clean up the isolation of freepages.
Patch 5 improves the comment of is_via_compact_memory.
Patch 6 factors out the code that checks whether compaction should run
for the target allocation order (a rough sketch of the resulting helper
follows the diffstat below).
More details can be found in the respective patches. Thanks!

v2->v3:
-Collect RVB and ACK from Baolin and Mel
-Avoid moving blockpfn outside the pageblock in the original
 likely(order <= MAX_ORDER) block in patch 3
-Move comment into __reset_isolation_suitable in patch 4
-Improve indentation in patch 6

v1->v2:
-Collect RVB from Baolin.
-Keep pfn inside the pageblock in patch 3.
-Only improve the comment of is_via_compact_memory in patch 6.
-Squash patch 8 and patch 9 into patch 7 and use ALLOC_WMARK_MIN
instead of magic number 0.

Kemeng Shi (6):
  mm/compaction: use correct list in move_freelist_{head}/{tail}
  mm/compaction: call list_is_{first}/{last} more intuitively in
    move_freelist_{head}/{tail}
  mm/compaction: correctly return failure with bogus compound_order in
    strict mode
  mm/compaction: remove repeat compact_blockskip_flush check in
    reset_isolation_suitable
  mm/compaction: improve comment of is_via_compact_memory
  mm/compaction: factor out code to test if we should run compaction for
    target order

 mm/compaction.c | 91 ++++++++++++++++++++++++++++---------------------
 1 file changed, 52 insertions(+), 39 deletions(-)
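
For context on patch 6: in mainline mm/compaction.c the factored-out
helper is named compaction_suit_allocation_order(). The sketch below is
a reconstruction from the cover letter and the v1->v2 note about
ALLOC_WMARK_MIN, not a quote from the series, so details may differ
from the patch as posted:

static enum compact_result
compaction_suit_allocation_order(struct zone *zone, unsigned int order,
				 int highest_zoneidx, unsigned int alloc_flags)
{
	unsigned long watermark;

	/* The allocation would already succeed; nothing to compact for. */
	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
	if (zone_watermark_ok(zone, order, watermark,
			      highest_zoneidx, alloc_flags))
		return COMPACT_SUCCESS;

	/* Not enough free base pages for compaction to make progress. */
	if (!compaction_suitable(zone, order, highest_zoneidx))
		return COMPACT_SKIPPED;

	return COMPACT_CONTINUE;
}

Callers such as kcompactd would then pass ALLOC_WMARK_MIN rather than a
bare 0, per the v1->v2 changelog above.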

Comments

Mel Gorman Sept. 1, 2023, 9:17 a.m. UTC | #1
On Fri, Sep 01, 2023 at 11:51:38PM +0800, Kemeng Shi wrote:
> In strict mode, we should return 0 if there is any hole in the
> pageblock. If we successfully isolate pages at the beginning of the
> pageblock and then read a bogus compound_order() from the next page
> that reaches outside the pageblock, we will abort the scan loop with
> blockpfn > end_pfn. Although we limit blockpfn to end_pfn, we still
> treat it as a successful isolation in strict mode, since blockpfn is
> not < end_pfn, and return the partially isolated pages. Then
> isolate_freepages_range() may succeed unexpectedly on a range that
> contains a hole.
> 
> Fixes: 9fcd6d2e052e ("mm, compaction: skip compound pages by order in free scanner")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/compaction.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index a40550a33aee..9ecbfbc695e5 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -626,11 +626,12 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  		if (PageCompound(page)) {
>  			const unsigned int order = compound_order(page);
>  
> -			if (likely(order <= MAX_ORDER)) {
> +			if (blockpfn + (1UL << order) <= end_pfn) {
>  				blockpfn += (1UL << order) - 1;
>  				page += (1UL << order) - 1;
>  				nr_scanned += (1UL << order) - 1;
>  			}
> +
>  			goto isolate_fail;
>  		}
>  
> @@ -678,8 +679,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>  
>  	/*
> -	 * There is a tiny chance that we have read bogus compound_order(),
> -	 * so be careful to not go outside of the pageblock.
> +	 * Be careful to not go outside of the pageblock.
>  	 */
>  	if (unlikely(blockpfn > end_pfn))
>  		blockpfn = end_pfn;

Is this check still necessary after the first hunk?
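
For context on why partial isolation is harmful here:
isolate_freepages_range() relies on the strict-mode contract that
isolate_freepages_block() returns 0 for any block containing a hole,
and releases everything on failure. A simplified sketch of that caller,
abridged from mm/compaction.c with alignment handling and statistics
elided:

unsigned long
isolate_freepages_range(struct compact_control *cc,
			unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long isolated, pfn = start_pfn;
	LIST_HEAD(freelist);

	for (; pfn < end_pfn; pfn += isolated) {
		unsigned long isolate_start_pfn = pfn;
		unsigned long block_end_pfn =
			min(pageblock_end_pfn(pfn), end_pfn);

		/* strict == true: any hole must yield isolated == 0. */
		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
						   block_end_pfn, &freelist,
						   0, true);
		if (!isolated)
			break;
	}

	if (pfn < end_pfn) {
		/* A block failed: undo the partial walk. */
		release_freepages(&freelist);
		return 0;
	}

	/* The whole range was isolated without holes. */
	return pfn;
}

Before the fix, a bogus compound_order() could push blockpfn past
end_pfn; after the clamp, blockpfn == end_pfn, so the strict
"blockpfn < end_pfn" failure test inside isolate_freepages_block() did
not fire and a partial count leaked out as apparent success.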
Kemeng Shi Sept. 1, 2023, 9:32 a.m. UTC | #2
on 9/1/2023 5:17 PM, Mel Gorman wrote:
> On Fri, Sep 01, 2023 at 11:51:38PM +0800, Kemeng Shi wrote:
>> In strict mode, we should return 0 if there is any hole in the
>> pageblock. If we successfully isolate pages at the beginning of the
>> pageblock and then read a bogus compound_order() from the next page
>> that reaches outside the pageblock, we will abort the scan loop with
>> blockpfn > end_pfn. Although we limit blockpfn to end_pfn, we still
>> treat it as a successful isolation in strict mode, since blockpfn is
>> not < end_pfn, and return the partially isolated pages. Then
>> isolate_freepages_range() may succeed unexpectedly on a range that
>> contains a hole.
>>
>> Fixes: 9fcd6d2e052e ("mm, compaction: skip compound pages by order in free scanner")
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/compaction.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index a40550a33aee..9ecbfbc695e5 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -626,11 +626,12 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>  		if (PageCompound(page)) {
>>  			const unsigned int order = compound_order(page);
>>  
>> -			if (likely(order <= MAX_ORDER)) {
>> +			if (blockpfn + (1UL << order) <= end_pfn) {
>>  				blockpfn += (1UL << order) - 1;
>>  				page += (1UL << order) - 1;
>>  				nr_scanned += (1UL << order) - 1;
>>  			}
>> +
>>  			goto isolate_fail;
>>  		}
>>  
>> @@ -678,8 +679,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
>>  
>>  	/*
>> -	 * There is a tiny chance that we have read bogus compound_order(),
>> -	 * so be careful to not go outside of the pageblock.
>> +	 * Be careful to not go outside of the pageblock.
>>  	 */
>>  	if (unlikely(blockpfn > end_pfn))
>>  		blockpfn = end_pfn;
> 
> Is this check still necessary after the first hunk?
> 
Actually, I removed this check in the first version, but Baolin thought the check
is cheap and removing it is not worth it. More discussion can be found in [1]. Thanks!

[1] https://lore.kernel.org/all/a8edac8d-8e22-89cf-2c8c-217a54608d27@linux.alibaba.com/
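
To make the "redundant but defensive" point concrete, here is a
paraphrased view of the relevant parts of isolate_freepages_block()
after this patch (abridged, not the literal kernel source):

	for (; blockpfn < end_pfn; blockpfn++, page++) {
		...
		if (PageCompound(page)) {
			const unsigned int order = compound_order(page);

			/*
			 * compound_order() is read racily and may be
			 * bogus. Only skip ahead when the whole compound
			 * page fits within [blockpfn, end_pfn); a bogus
			 * large order now fails this bound check instead
			 * of pushing blockpfn outside the pageblock.
			 */
			if (blockpfn + (1UL << order) <= end_pfn) {
				blockpfn += (1UL << order) - 1;
				page += (1UL << order) - 1;
				nr_scanned += (1UL << order) - 1;
			}

			goto isolate_fail;
		}
		...
	}
	...
	/*
	 * Defensive clamp, kept on Baolin's suggestion: believed
	 * unreachable after the bound check above, but cheap and
	 * well off any fast path.
	 */
	if (unlikely(blockpfn > end_pfn))
		blockpfn = end_pfn;

	/* In strict mode, any hole (blockpfn < end_pfn) must fail. */
	if (strict && blockpfn < end_pfn)
		total_isolated = 0;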
Mel Gorman Sept. 1, 2023, 10:01 a.m. UTC | #3
On Fri, Sep 01, 2023 at 05:32:49PM +0800, Kemeng Shi wrote:
> 
> 
> on 9/1/2023 5:17 PM, Mel Gorman wrote:
> > On Fri, Sep 01, 2023 at 11:51:38PM +0800, Kemeng Shi wrote:
> >> In strict mode, we should return 0 if there is any hole in the
> >> pageblock. If we successfully isolate pages at the beginning of the
> >> pageblock and then read a bogus compound_order() from the next page
> >> that reaches outside the pageblock, we will abort the scan loop with
> >> blockpfn > end_pfn. Although we limit blockpfn to end_pfn, we still
> >> treat it as a successful isolation in strict mode, since blockpfn is
> >> not < end_pfn, and return the partially isolated pages. Then
> >> isolate_freepages_range() may succeed unexpectedly on a range that
> >> contains a hole.
> >>
> >> Fixes: 9fcd6d2e052e ("mm, compaction: skip compound pages by order in free scanner")
> >> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> >> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> >> ---
> >>  mm/compaction.c | 6 +++---
> >>  1 file changed, 3 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/mm/compaction.c b/mm/compaction.c
> >> index a40550a33aee..9ecbfbc695e5 100644
> >> --- a/mm/compaction.c
> >> +++ b/mm/compaction.c
> >> @@ -626,11 +626,12 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >>  		if (PageCompound(page)) {
> >>  			const unsigned int order = compound_order(page);
> >>  
> >> -			if (likely(order <= MAX_ORDER)) {
> >> +			if (blockpfn + (1UL << order) <= end_pfn) {
> >>  				blockpfn += (1UL << order) - 1;
> >>  				page += (1UL << order) - 1;
> >>  				nr_scanned += (1UL << order) - 1;
> >>  			}
> >> +
> >>  			goto isolate_fail;
> >>  		}
> >>  
> >> @@ -678,8 +679,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >>  		spin_unlock_irqrestore(&cc->zone->lock, flags);
> >>  
> >>  	/*
> >> -	 * There is a tiny chance that we have read bogus compound_order(),
> >> -	 * so be careful to not go outside of the pageblock.
> >> +	 * Be careful to not go outside of the pageblock.
> >>  	 */
> >>  	if (unlikely(blockpfn > end_pfn))
> >>  		blockpfn = end_pfn;
> > 
> > Is this check still necessary after the first hunk?
> > 
> Actually, I removed this check in the first version, but Baolin thought the check
> is cheap and removing it is not worth it. More discussion can be found in [1]. Thanks!
> 

Ok, fair enough. While I think the check is redundant right now, it's a
reasonable defensive check and this is not a fast path so

Acked-by: Mel Gorman <mgorman@techsingularity.net>