
[RFC,1/4] mm: move tlb_table_flush to tlb_flush_mmu_free

Message ID 20180725140641.30372-2-npiggin@gmail.com (mailing list archive)
State New, archived
Series mm: mmu_gather changes to support explicit paging

Commit Message

Nicholas Piggin July 25, 2018, 2:06 p.m. UTC
There is no need to call tlb_table_flush() from tlb_flush_mmu_tlbonly();
it logically belongs with tlb_flush_mmu_free(). Moving it allows some
code consolidation with a subsequent fix.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 7206a634270b..bc053d5e9d41 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -245,9 +245,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb_table_flush(tlb);
-#endif
 	__tlb_reset_range(tlb);
 }
 
@@ -255,6 +252,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb_table_flush(tlb);
+#endif
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
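
For reference, a condensed sketch of how the two functions read once the
hunks above are applied. This is a reconstruction from the diff plus its
surrounding context in mm/memory.c around v4.18; the lines marked
"assumed" are not shown in the hunks and appear only to make the sketch
self-contained:

    static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
    {
    	if (!tlb->end)		/* assumed surrounding context */
    		return;

    	tlb_flush(tlb);
    	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
    	/* tlb_table_flush() is no longer called here */
    	__tlb_reset_range(tlb);
    }

    static void tlb_flush_mmu_free(struct mmu_gather *tlb)
    {
    	struct mmu_gather_batch *batch;

    #ifdef CONFIG_HAVE_RCU_TABLE_FREE
    	/* moved here: flush batched page-table pages alongside
    	 * the other deferred frees */
    	tlb_table_flush(tlb);
    #endif
    	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
    		free_pages_and_swap_cache(batch->pages, batch->nr);
    		batch->nr = 0;
    	}
    	tlb->active = &tlb->local;	/* assumed surrounding context */
    }

Since tlb_flush_mmu() calls tlb_flush_mmu_tlbonly() before
tlb_flush_mmu_free(), the TLB invalidation still happens before the
batched page-table pages are flushed and freed; the patch changes where
the call lives, not its ordering on that path.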