
[4/4] mm: delay page_remove_rmap() until after the TLB has been flushed

Message ID 20221109203051.1835763-4-torvalds@linux-foundation.org (mailing list archive)
State New
Series [1/4] mm: introduce 'encoded' page pointers with embedded extra bits

Commit Message

Linus Torvalds Nov. 9, 2022, 8:30 p.m. UTC
When we remove a page table entry, we are very careful to only free the
page after we have flushed the TLB, because other CPUs could still be
using the page through stale TLB entries until after the flush.

However, we have removed the rmap entry for that page early, which means
that functions like folio_mkclean() would end up not serializing with
the page table lock because the page had already been made invisible to
rmap.

And that is a problem, because while the TLB entry exists, we could end
up with the following situation:

 (a) one CPU could come in and clean it, never seeing our mapping of the
     page

 (b) another CPU could continue to use the stale and dirty TLB entry and
     continue to write to said page

resulting in a page that has been dirtied, but then marked clean again,
all while another CPU might have dirtied it some more.

End result: possibly lost dirty data.

This extends our current TLB gather infrastructure to optionally track a
"should I do a delayed page_remove_rmap() for this page after flushing
the TLB" flag.  It uses the newly introduced 'encoded page pointer' to
do that without having to keep separate data around.
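
As a rough sketch of the idea (simplified from the actual encoded-page
helpers introduced earlier in this series, which also do build-time
checking of the flag range): 'struct page' pointers are always at least
word-aligned, so the low bits are free to carry a small per-page flag
such as "rmap removal still pending":

	/* Simplified sketch of the encoded-page helpers, not the exact code */
	#define ENCODED_PAGE_BITS	1ul	/* low bit(s) free for flags */

	static inline struct encoded_page *encode_page(struct page *page, unsigned long flags)
	{
		/* the real helper also build-time checks that 'flags' fits here */
		return (struct encoded_page *)(flags | (unsigned long)page);
	}

	static inline unsigned long encoded_page_flags(struct encoded_page *page)
	{
		return ENCODED_PAGE_BITS & (unsigned long)page;
	}

	static inline struct page *encoded_page_ptr(struct encoded_page *page)
	{
		return (struct page *)(~ENCODED_PAGE_BITS & (unsigned long)page);
	}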

Note, this is complicated by a couple of issues:

 - we want to delay the rmap removal, but not past the page table lock,
   because that simplifies the memcg accounting

 - only SMP configurations want to delay TLB flushing, since on UP
   there are obviously no remote TLBs to worry about, and the page
   table lock means there are no preemption issues either

 - s390 has its own mmu_gather model that doesn't delay TLB flushing,
   and as a result also does not want the delayed rmap. As such, we can
   treat s390 like the UP case and use a common fallback for the "no
   delays" case.

 - we can track an enormous number of pages in our mmu_gather structure,
   with MAX_GATHER_BATCH_COUNT batches of MAX_GATHER_BATCH pages each,
   all set up to be approximately 10k pending pages.

   We do not want to have a huge number of batched pages that we then
   need to check for delayed rmap handling inside the page table lock.

Particularly that last point results in a noteworthy detail, where the
normal page batch gathering is limited once we have delayed rmaps
pending, in such a way that only the last batch (the so-called "active
batch") in the mmu_gather structure can have any delayed entries.

NOTE! While the "possibly lost dirty data" sounds catastrophic, for this
all to happen you need to have a user thread doing either madvise() with
MADV_DONTNEED or a full re-mmap() of the area concurrently with another
thread continuing to use said mapping.

So arguably this is about user space doing crazy things, but from a VM
consistency standpoint it's better if we track the dirty bit properly
even when user space goes off the rails.

Reported-and-tested-by: Nadav Amit <nadav.amit@gmail.com>
Link: https://lore.kernel.org/all/B88D3073-440A-41C7-95F4-895D3F657EF2@gmail.com/
Cc: Will Deacon <will@kernel.org>
Cc: Aneesh Kumar <aneesh.kumar@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> # s390
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/s390/include/asm/tlb.h |  3 +++
 include/asm-generic/tlb.h   | 31 +++++++++++++++++++++++++++++--
 mm/memory.c                 | 23 +++++++++++++++++------
 mm/mmu_gather.c             | 31 +++++++++++++++++++++++++++++++
 4 files changed, 80 insertions(+), 8 deletions(-)

Comments

Linus Torvalds Nov. 9, 2022, 8:48 p.m. UTC | #1
Bah, in carefully removing all the "let's send it as a reply to the
previous thread" command line flags, I cleverly also skipped adding a
cover letter, so this updated series got sent out without one.

I need more coffee.

But hey, it's not like the people cc'd haven't seen it before, and if
you want to see *all* the patches (I didn't want to patch-bomb people
with the prep-work), at least 'b4' is happy so you can get it all with
just

   b4 am 20221109203051.1835763-1-torvalds@linux-foundation.org

this time.

The main changes to the previously posted series are

 (a) try to move the s390 changes to generic code

 (b) build-time checking for the value range of the flags passed to
     encode_page()

 (c) added comments both to code and commit messages

I'm sure I messed something up in the process, not just the lack of a
cover letter, which has now turned into this "tail letter" instead.

                 Linus
Linus Torvalds Nov. 9, 2022, 9:04 p.m. UTC | #2
On Wed, Nov 9, 2022 at 12:48 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> I'm sure I messed something up in the process [...]

I hate being right.

The UP build requires a

  #ifdef CONFIG_SMP
  ..
  #endif

around the tlb_flush_rmaps() implementation in mm/mmu_gather.c, since
the UP case now shares the empty "do nothing" implementation with
s390.
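
IOW, something like this on top of the mm/mmu_gather.c change below
(just a sketch of the guard, the function body itself is unchanged):

	#ifdef CONFIG_SMP
	void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
	{
		/* ... walk tlb->active and do the delayed page_remove_rmap() calls ... */
	}
	#endif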

I'm not going to re-send the series for that trivial fix, since nobody
is likely to actually care about UP anyway, but since I noticed it
(after sending things out, sorry), I'll just mention it here.

And I was so happy about sharing the s390 and UP case, and avoiding
any code being specific to s390. Which is what introduced this thing.

Oh well. Easy fix. Just egg on my face. Again.

                Linus
Alexander Gordeev Nov. 16, 2022, 7:47 a.m. UTC | #3
On Wed, Nov 09, 2022 at 01:04:07PM -0800, Linus Torvalds wrote:

Hi Linus,

[...]

> And I was so happy about sharing the s390 and UP case, and avoiding
> any code being specific to s390. Which is what introduced this thing.

Which actually raises the question of whether CONFIG_MMU_GATHER_NO_GATHER
mode could be beneficial for UP?

But anyway, please find a follow-up series on top of mm-unstable,
with patches 1,2 aimed at avoiding the delayed_rmap flag on s390/UP and
patches 3,4 hopefully cleaning things up a bit (not so sure).

>                 Linus

Thanks!
Linus Torvalds Nov. 16, 2022, 5:39 p.m. UTC | #4
On Tue, Nov 15, 2022 at 11:48 PM Alexander Gordeev
<agordeev@linux.ibm.com> wrote:
>
> Which actually raises the question of whether CONFIG_MMU_GATHER_NO_GATHER
> mode could be beneficial for UP?

No, the NO_GATHER case wouldn't work for UP in general, because once
we drop the page table lock, even on UP we end up possibly
re-scheduling due to preemption (and even without actual kernel
preemption, we have that explicit "cond_resched()" there).

And if we schedule to another thread that shares the same VM, that
other thread will continue to use the existing TLB entries.

And if those TLB entries then point to pages that were already free'd...

So the NO_GATHER case requires that you flush the TLB early even on
UP, although the requirements are a _bit_ less strict than on SMP.

And TLB flushes are not necessarily cheap, even on UP.

Now, we could possibly optimize this to the point where it *would* be
quite possible - instead of actually flushing the TLB, just set the
bit to "flush on next thread switch". So I think the UP case *could*
be made to be non-gathering.
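
Roughly this kind of thing, purely as an illustration (the flag and
helper names here are made up, no architecture actually does this):

	/* UP-only idea: note that a flush is due instead of doing it now */
	static inline void tlb_defer_local_flush(struct mm_struct *mm)
	{
		mm->tlb_flush_deferred = 1;	/* hypothetical per-mm flag */
	}

	/*
	 * ... and resolve it from the context-switch path, before another
	 * thread sharing the mm can run on stale entries.
	 */
	static inline void tlb_resolve_deferred_flush(struct mm_struct *mm)
	{
		if (mm->tlb_flush_deferred) {
			mm->tlb_flush_deferred = 0;
			local_flush_tlb();	/* local TLB flush only */
		}
	}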

But I don't think anybody cares about - or tests - UP enough for it to
make sense to worry about it.

           Linus

Patch

diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 05142226d65d..b91f4a9b044c 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -41,6 +41,9 @@  static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
  * Release the page cache reference for a pte removed by
  * tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
  * has already been freed, so just do free_page_and_swap_cache.
+ *
+ * s390 doesn't delay rmap removal, so there is nothing encoded in
+ * the page pointer.
  */
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
 					  struct encoded_page *page,
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index e5cd34393372..154c774d6307 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -259,6 +259,28 @@  struct mmu_gather_batch {
 extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
 				   struct encoded_page *page,
 				   int page_size);
+
+#ifdef CONFIG_SMP
+/*
+ * This both sets 'delayed_rmap', and returns true. It would be an inline
+ * function, except we define it before the 'struct mmu_gather'.
+ */
+#define tlb_delay_rmap(tlb) (((tlb)->delayed_rmap = 1), true)
+extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
+#endif
+
+#endif
+
+/*
+ * We have a no-op version of the rmap removal that doesn't
+ * delay anything. That is used on S390, which flushes remote
+ * TLBs synchronously, and on UP, which doesn't have any
+ * remote TLBs to flush and is not preemptible due to this
+ * all happening under the page table lock.
+ */
+#ifndef tlb_delay_rmap
+#define tlb_delay_rmap(tlb) (false)
+static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 #endif
 
 /*
@@ -291,6 +313,11 @@  struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;
 
+	/*
+	 * Do we have pending delayed rmap removals?
+	 */
+	unsigned int		delayed_rmap : 1;
+
 	/*
 	 * at which levels have we cleared entries?
 	 */
@@ -436,9 +463,9 @@  static inline void tlb_remove_page_size(struct mmu_gather *tlb,
 		tlb_flush_mmu(tlb);
 }
 
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
+static __always_inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page, unsigned int flags)
 {
-	return __tlb_remove_page_size(tlb, encode_page(page, 0), PAGE_SIZE);
+	return __tlb_remove_page_size(tlb, encode_page(page, flags), PAGE_SIZE);
 }
 
 /* tlb_remove_page
diff --git a/mm/memory.c b/mm/memory.c
index f88c351aecd4..60a0f44f6e72 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1432,6 +1432,8 @@  static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			break;
 
 		if (pte_present(ptent)) {
+			unsigned int delay_rmap;
+
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
@@ -1443,20 +1445,26 @@  static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (unlikely(!page))
 				continue;
 
+			delay_rmap = 0;
 			if (!PageAnon(page)) {
 				if (pte_dirty(ptent)) {
-					force_flush = 1;
 					set_page_dirty(page);
+					if (tlb_delay_rmap(tlb)) {
+						delay_rmap = 1;
+						force_flush = 1;
+					}
 				}
 				if (pte_young(ptent) &&
 				    likely(!(vma->vm_flags & VM_SEQ_READ)))
 					mark_page_accessed(page);
 			}
 			rss[mm_counter(page)]--;
-			page_remove_rmap(page, vma, false);
-			if (unlikely(page_mapcount(page) < 0))
-				print_bad_pte(vma, addr, ptent, page);
-			if (unlikely(__tlb_remove_page(tlb, page))) {
+			if (!delay_rmap) {
+				page_remove_rmap(page, vma, false);
+				if (unlikely(page_mapcount(page) < 0))
+					print_bad_pte(vma, addr, ptent, page);
+			}
+			if (unlikely(__tlb_remove_page(tlb, page, delay_rmap))) {
 				force_flush = 1;
 				addr += PAGE_SIZE;
 				break;
@@ -1513,8 +1521,11 @@  static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_leave_lazy_mmu_mode();
 
 	/* Do the actual TLB flush before dropping ptl */
-	if (force_flush)
+	if (force_flush) {
 		tlb_flush_mmu_tlbonly(tlb);
+		if (tlb->delayed_rmap)
+			tlb_flush_rmaps(tlb, vma);
+	}
 	pte_unmap_unlock(start_pte, ptl);
 
 	/*
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index f44cc8a5b581..38592fba3826 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -9,6 +9,7 @@ 
 #include <linux/rcupdate.h>
 #include <linux/smp.h>
 #include <linux/swap.h>
+#include <linux/rmap.h>
 
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
@@ -19,6 +20,10 @@  static bool tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
+	/* No more batching if we have delayed rmaps pending */
+	if (tlb->delayed_rmap)
+		return false;
+
 	batch = tlb->active;
 	if (batch->next) {
 		tlb->active = batch->next;
@@ -43,6 +48,31 @@  static bool tlb_next_batch(struct mmu_gather *tlb)
 	return true;
 }
 
+/**
+ * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
+ * @tlb: the current mmu_gather
+ *
+ * Note that because of how tlb_next_batch() above works, we will
+ * never start new batches with pending delayed rmaps, so we only
+ * need to walk through the current active batch.
+ */
+void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+	struct mmu_gather_batch *batch;
+
+	batch = tlb->active;
+	for (int i = 0; i < batch->nr; i++) {
+		struct encoded_page *enc = batch->encoded_pages[i];
+
+		if (encoded_page_flags(enc)) {
+			struct page *page = encoded_page_ptr(enc);
+			page_remove_rmap(page, vma, false);
+		}
+	}
+
+	tlb->delayed_rmap = 0;
+}
+
 static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
@@ -286,6 +316,7 @@  static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	tlb->active     = &tlb->local;
 	tlb->batch_count = 0;
 #endif
+	tlb->delayed_rmap = 0;
 
 	tlb_table_init(tlb);
 #ifdef CONFIG_MMU_GATHER_PAGE_SIZE