diff mbox series

[RFC,3/4] mm: zap_pte_range optimise fullmm handling for dirty shared pages

Message ID 20180725155246.1085-4-npiggin@gmail.com (mailing list archive)
State New, archived
Series: possibilities for improving invalidations

Commit Message

Nicholas Piggin July 25, 2018, 3:52 p.m. UTC
Shared dirty pages do not need to be flushed under the page table lock
in the fullmm case (the entire address space is being torn down, as at
exit), because no thread can make a subsequent access to them through
stale TLB entries.
---
 mm/memory.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 1161ed3f1d0b..490689909186 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1322,8 +1322,18 @@  static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 			if (!PageAnon(page)) {
 				if (pte_dirty(ptent)) {
-					force_flush = 1;
-					locked_flush = 1;
+					/*
+					 * Page must be flushed from TLBs
+					 * before releasing PTL to synchronize
+					 * with page_mkclean and avoid another
+					 * thread writing to the page through
+					 * the old TLB after it was marked
+					 * clean.
+					 */
+					if (!tlb->fullmm) {
+						force_flush = 1;
+						locked_flush = 1;
+					}
 					set_page_dirty(page);
 				}
 				if (pte_young(ptent) &&