
[v9,4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush()

Message ID 20240417071847.29584-5-byungchul@sk.com
State New
Series Reduce tlb and interrupt numbers over 90% by improving folio migration

Commit Message

Byungchul Park April 17, 2024, 7:18 a.m. UTC
This is preparation for the migrc mechanism, which needs to avoid
redundant tlb flushes by manipulating the tlb batch's arch data.  To
achieve that, the part that clears the tlb batch's arch data has to be
separated out of arch_tlbbatch_flush().

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/x86/mm/tlb.c | 2 --
 mm/rmap.c         | 1 +
 2 files changed, 1 insertion(+), 2 deletions(-)
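
Note that the hunks below only show the call sites: the definition of
arch_tlbbatch_clear() itself is not part of this excerpt.  Presumably it
is a small inline helper added next to the other arch_tlbbatch_*()
helpers (e.g. in arch/x86/include/asm/tlbflush.h), taking over the
cpumask_clear() that this patch removes from arch_tlbbatch_flush().  A
minimal sketch of what such a helper would look like:

	/*
	 * Sketch of the presumed helper, not shown in this excerpt: it
	 * absorbs the cpumask_clear() removed from arch_tlbbatch_flush()
	 * below, so a caller can reset a batch's arch data independently
	 * of flushing it.
	 */
	static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
	{
		cpumask_clear(&batch->cpumask);
	}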

Patch

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
-	cpumask_clear(&batch->cpumask);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 2542bfe1a947..d8671d0dc416 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -673,6 +673,7 @@ void try_to_unmap_flush(void)
 		return;
 
 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
 }
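
Why split the clear out at all?  With the cpumask no longer wiped inside
arch_tlbbatch_flush(), a caller can inspect a batch's arch data after a
flush (or before one) and decide whether another flush is actually
needed.  The snippet below is a purely hypothetical illustration of that
idea, not the actual migrc logic from the rest of the series; the helper
name is made up:

	/*
	 * Hypothetical illustration, not part of this series: an
	 * already-flushed (but not yet cleared) batch still records which
	 * CPUs were flushed, so a pending batch whose CPUs form a subset
	 * of those could, given suitable ordering guarantees, skip its
	 * own flush.
	 */
	static bool flush_would_be_redundant(const struct arch_tlbflush_unmap_batch *flushed,
					     const struct arch_tlbflush_unmap_batch *pending)
	{
		return cpumask_subset(&pending->cpumask, &flushed->cpumask);
	}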