[2/8] mm/compaction: correct last_migrated_pfn update in compact_zone

Message ID 20230728171037.2219226-3-shikemeng@huaweicloud.com (mailing list archive)
State New
Series Fixes and cleanups to compaction

Commit Message

Kemeng Shi July 28, 2023, 5:10 p.m. UTC
We record the start pfn of the last isolated page block in last_migrated_pfn,
and then:
1. We check whether the page block was marked skip for exclusive access in
isolate_migratepages_block by testing whether the next migrate pfn is still
within the last isolated page block. If so, we set finish_pageblock to do a
rescan.
2. We check whether a full cc->order block has been scanned by testing whether
the last scanned range passes the cc->order block boundary. If so, we flush
the pages that were freed (sketched below).
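
For reference, check 2 corresponds to the flush logic under the check_drain
label in compact_zone(). A simplified, paraphrased sketch of that check (not
the exact upstream code):

	/* Has the scanner left the cc->order aligned block we last migrated from? */
	if (cc->order > 0 && last_migrated_pfn) {
		unsigned long current_block_start =
			block_start_pfn(cc->migrate_pfn, cc->order);

		if (last_migrated_pfn < current_block_start) {
			/* Flush freed pages so they can merge and be found. */
			lru_add_drain_cpu_zone(cc->zone);
			/* No more flushing until we migrate again. */
			last_migrated_pfn = 0;
		}
	}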

We currently treat cc->migrate_pfn as it was before isolate_migratepages as
the start pfn of the last isolated page range. However, migrate_pfn is always
aligned to a page block, or moved to another page block, by
fast_find_migrateblock or by the linear forward scan in isolate_migratepages,
before any pages are isolated in isolate_migratepages_block.

Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1) after
the scan to correctly record the start pfn of the last isolated page range.
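
As a standalone illustration of the difference (made-up pfn values, not kernel
code; it assumes 4K pages and pageblock_order = 9, i.e. 512 pages per
pageblock):

	#include <stdio.h>

	#define PAGEBLOCK_NR_PAGES 512UL

	/* Same arithmetic as pageblock_start_pfn(): round down to the block start. */
	static unsigned long pageblock_start(unsigned long pfn)
	{
		return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
	}

	int main(void)
	{
		/* migrate_pfn before isolate_migratepages; may sit in an older block. */
		unsigned long pre_scan_pfn = 1024;
		/* migrate_pfn after the scan; fast_find_migrateblock jumped ahead. */
		unsigned long post_scan_pfn = 4096;

		/* Old behaviour: pre-scan pfn taken as start of the last isolated range. */
		printf("old last_migrated_pfn: %lu\n", pre_scan_pfn);	/* 1024 */

		/* Fixed behaviour: start of the pageblock that was actually scanned. */
		printf("new last_migrated_pfn: %lu\n",
		       pageblock_start(post_scan_pfn - 1));		/* 3584 */
		return 0;
	}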

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Baolin Wang Aug. 1, 2023, 2:09 a.m. UTC | #1
On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> We record the start pfn of the last isolated page block in last_migrated_pfn,
> and then:
> 1. We check whether the page block was marked skip for exclusive access in
> isolate_migratepages_block by testing whether the next migrate pfn is still
> within the last isolated page block. If so, we set finish_pageblock to do a
> rescan.
> 2. We check whether a full cc->order block has been scanned by testing whether
> the last scanned range passes the cc->order block boundary. If so, we flush
> the pages that were freed.
> 
> We currently treat cc->migrate_pfn as it was before isolate_migratepages as
> the start pfn of the last isolated page range. However, migrate_pfn is always
> aligned to a page block, or moved to another page block, by
> fast_find_migrateblock or by the linear forward scan in isolate_migratepages,
> before any pages are isolated in isolate_migratepages_block.

Right. But can you describe in detail the impact if last_migrated_pfn is not 
set correctly? For example, this could result in the rescan not being 
triggered correctly, so a pageblock's rescan is missed.

Otherwise looks good to me.

> Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1) after
> the scan to correctly record the start pfn of the last isolated page range.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ce7841363b12..fb250c6b2b6e 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2482,7 +2482,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>   			goto check_drain;
>   		case ISOLATE_SUCCESS:
>   			update_cached = false;
> -			last_migrated_pfn = iteration_start_pfn;
> +			last_migrated_pfn = max(cc->zone->zone_start_pfn,
> +				pageblock_start_pfn(cc->migrate_pfn - 1));
>   		}
>   
>   		err = migrate_pages(&cc->migratepages, compaction_alloc,
Kemeng Shi Aug. 1, 2023, 2:19 a.m. UTC | #2
On 8/1/2023 10:09 AM, Baolin Wang wrote:
> 
> 
> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>> We record the start pfn of the last isolated page block in last_migrated_pfn,
>> and then:
>> 1. We check whether the page block was marked skip for exclusive access in
>> isolate_migratepages_block by testing whether the next migrate pfn is still
>> within the last isolated page block. If so, we set finish_pageblock to do a
>> rescan.
>> 2. We check whether a full cc->order block has been scanned by testing whether
>> the last scanned range passes the cc->order block boundary. If so, we flush
>> the pages that were freed.
>>
>> We currently treat cc->migrate_pfn as it was before isolate_migratepages as
>> the start pfn of the last isolated page range. However, migrate_pfn is always
>> aligned to a page block, or moved to another page block, by
>> fast_find_migrateblock or by the linear forward scan in isolate_migratepages,
>> before any pages are isolated in isolate_migratepages_block.
> 
> Right. But can you describe in detail the impact if last_migrated_pfn is not set correctly? For example, this could result in the rescan not being triggered correctly, so a pageblock's rescan is missed.
> 
> Otherwise looks good to me.
> 
Sure, the impact will be added in the next version. Thanks!
>> Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1) after
>> the scan to correctly record the start pfn of the last isolated page range.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>>   mm/compaction.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index ce7841363b12..fb250c6b2b6e 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -2482,7 +2482,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>>               goto check_drain;
>>           case ISOLATE_SUCCESS:
>>               update_cached = false;
>> -            last_migrated_pfn = iteration_start_pfn;
>> +            last_migrated_pfn = max(cc->zone->zone_start_pfn,
>> +                pageblock_start_pfn(cc->migrate_pfn - 1));
>>           }
>>
>>           err = migrate_pages(&cc->migratepages, compaction_alloc,
>

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index ce7841363b12..fb250c6b2b6e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2482,7 +2482,8 @@  compact_zone(struct compact_control *cc, struct capture_control *capc)
 			goto check_drain;
 		case ISOLATE_SUCCESS:
 			update_cached = false;
-			last_migrated_pfn = iteration_start_pfn;
+			last_migrated_pfn = max(cc->zone->zone_start_pfn,
+				pageblock_start_pfn(cc->migrate_pfn - 1));
 		}
 
 		err = migrate_pages(&cc->migratepages, compaction_alloc,