[17/25] mm, compaction: Keep cached migration PFNs synced for unusable pageblocks

Message ID 20190104125011.16071-18-mgorman@techsingularity.net (mailing list archive)
State New, archived
Series Increase success rates and reduce latency of compaction v2

Commit Message

Mel Gorman Jan. 4, 2019, 12:50 p.m. UTC
Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
basis that some migrations will fail in ASYNC mode. However, if the cached
PFNs match at the start of scanning and pageblocks are skipped due to
having no isolation candidates, then the sync state does not matter.
This patch keeps matching cached PFNs in sync until a pageblock with
isolation candidates is found.

The actual benefit is marginal given that the sync scanner following the
async scanner will often skip a number of pageblocks anyway, but it is
useless work nonetheless. Any benefit depends heavily on whether the
scanners restarted recently, so overall the reduction in scan rates is a
mere 2.8%, which is borderline noise.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/compaction.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

Comments

Vlastimil Babka Jan. 17, 2019, 5:17 p.m. UTC | #1
On 1/4/19 1:50 PM, Mel Gorman wrote:
> Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
> basis that some migrations will fail in ASYNC mode. However, if the cached
> PFNs match at the start of scanning and pageblocks are skipped due to
> having no isolation candidates, then the sync state does not matter.
> This patch keeps matching cached PFNs in sync until a pageblock with
> isolation candidates is found.
> 
> The actual benefit is marginal given that the sync scanner following the
> async scanner will often skip a number of pageblocks anyway, but it is
> useless work nonetheless. Any benefit depends heavily on whether the
> scanners restarted recently, so overall the reduction in scan rates is a
> mere 2.8%, which is borderline noise.
> 
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

My earlier suggestion to check more thoroughly whether pages can be migrated
(which depends on the mode) before isolating them wouldn't play nice with this :)
Mel Gorman Jan. 17, 2019, 5:37 p.m. UTC | #2
On Thu, Jan 17, 2019 at 06:17:28PM +0100, Vlastimil Babka wrote:
> On 1/4/19 1:50 PM, Mel Gorman wrote:
> > Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
> > basis that some migrations will fail in ASYNC mode. However, if the cached
> > PFNs match at the start of scanning and pageblocks are skipped due to
> > having no isolation candidates, then the sync state does not matter.
> > This patch keeps matching cached PFNs in sync until a pageblock with
> > isolation candidates is found.
> > 
> > The actual benefit is marginal given that the sync scanner following the
> > async scanner will often skip a number of pageblocks anyway, but it is
> > useless work nonetheless. Any benefit depends heavily on whether the
> > scanners restarted recently, so overall the reduction in scan rates is a
> > mere 2.8%, which is borderline noise.
> > 
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> My earlier suggestion to check more thoroughly whether pages can be migrated
> (which depends on the mode) before isolating them wouldn't play nice with this :)
> 

No, unfortunately it wouldn't. I did find, though, that sync_light often
ran very quickly after async when compaction was having trouble
succeeding. The time window was short enough that states like
Dirty/Writeback were highly unlikely to have cleared. It might have played
nice when fragmentation was very low, but any benefit then would be very
difficult to detect.

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index 921720f7a416..be27e4fa1b40 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1967,6 +1967,7 @@  static enum compact_result compact_zone(struct compact_control *cc)
 	unsigned long end_pfn = zone_end_pfn(cc->zone);
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;
+	bool update_cached;
 
 	cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask);
@@ -2019,6 +2020,17 @@  static enum compact_result compact_zone(struct compact_control *cc)
 
 	last_migrated_pfn = 0;
 
+	/*
+	 * Migrate has separate cached PFNs for ASYNC and SYNC* migration on
+	 * the basis that some migrations will fail in ASYNC mode. However,
+	 * if the cached PFNs match and pageblocks are skipped due to having
+	 * no isolation candidates, then the sync state does not matter.
+	 * Until a pageblock with isolation candidates is found, keep the
+	 * cached PFNs in sync to avoid revisiting the same blocks.
+	 */
+	update_cached = !sync &&
+		cc->zone->compact_cached_migrate_pfn[0] == cc->zone->compact_cached_migrate_pfn[1];
+
 	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn,
 				cc->free_pfn, end_pfn, sync);
 
@@ -2050,6 +2062,11 @@  static enum compact_result compact_zone(struct compact_control *cc)
 			last_migrated_pfn = 0;
 			goto out;
 		case ISOLATE_NONE:
+			if (update_cached) {
+				cc->zone->compact_cached_migrate_pfn[1] =
+					cc->zone->compact_cached_migrate_pfn[0];
+			}
+
 			/*
 			 * We haven't isolated and migrated anything, but
 			 * there might still be unflushed migrations from
@@ -2057,6 +2074,7 @@  static enum compact_result compact_zone(struct compact_control *cc)
 			 */
 			goto check_drain;
 		case ISOLATE_SUCCESS:
+			update_cached = false;
 			last_migrated_pfn = start_pfn;
 		}