
[v2] x86/mm/tlb: ignore f->new_tlb_gen when zero

Message ID 20220710232837.3618-1-namit@vmware.com (mailing list archive)
State New
Headers show
Series [v2] x86/mm/tlb: ignore f->new_tlb_gen when zero

Commit Message

Nadav Amit July 10, 2022, 11:28 p.m. UTC
From: Nadav Amit <namit@vmware.com>

Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when
possible") introduced an optimization to skip the flush if the TLB
generation that is to be flushed (as provided in flush_tlb_info) has
already been flushed.

However, arch_tlbbatch_flush() does not provide any generation in
flush_tlb_info. As a result, try_to_unmap_one() would not perform any
TLB flushes.

Fix it by checking whether f->new_tlb_gen is nonzero; zero is an
invalid generation value anyhow. To avoid future confusion, introduce
a TLB_GENERATION_INVALID constant and use it properly. Add some
assertions to check that no partial flushes are done with
TLB_GENERATION_INVALID or when f->mm is NULL, since that does not make
any sense.

In addition, add the missing unlikely().

Fixes: aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when possible")
Reported-by: Hugh Dickins <hughd@google.com>
Tested-by: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>

---

v1 -> v2:
* Introduce TLB_GENERATION_INVALID to clarify intent.
* Leave the early return and do not "goto out".
* Add some assertions to check and document in code the relationship
  between TLB_GENERATION_INVALID and TLB_FLUSH_ALL.
---
 arch/x86/include/asm/tlbflush.h |  1 +
 arch/x86/mm/tlb.c               | 15 ++++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

Comments

Nadav Amit July 11, 2022, 5:39 p.m. UTC | #1
On Jul 10, 2022, at 4:28 PM, Nadav Amit <nadav.amit@gmail.com> wrote:

> From: Nadav Amit <namit@vmware.com>
> 
> Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when
> possible") introduced an optimization of skipping the flush if the TLB
> generation that is flushed (as provided in flush_tlb_info) was already
> flushed.

Dave,

Can you please review this patch today?

I feel bad (for a good reason) for breaking swap/migration.

Thanks,
Nadav
Nadav Amit July 13, 2022, 12:33 a.m. UTC | #2
On Jul 11, 2022, at 10:39 AM, Nadav Amit <namit@vmware.com> wrote:

> On Jul 10, 2022, at 4:28 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
> 
>> From: Nadav Amit <namit@vmware.com>
>> 
>> Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when
>> possible") introduced an optimization of skipping the flush if the TLB
>> generation that is flushed (as provided in flush_tlb_info) was already
>> flushed.
> 
> Dave,
> 
> Can you please review this patch today?
> 
> I feel bad (for a good reason) for breaking swap/migration.
> 
> Thanks,
> Nadav

Ping?

As you know, this really must go into 5.19, or otherwise aa44284960d5
must be reverted.
Hugh Dickins July 13, 2022, 12:49 a.m. UTC | #3
On Wed, 13 Jul 2022, Nadav Amit wrote:
> On Jul 11, 2022, at 10:39 AM, Nadav Amit <namit@vmware.com> wrote:
> > On Jul 10, 2022, at 4:28 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
> >> From: Nadav Amit <namit@vmware.com>
> >> 
> >> Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when
> >> possible") introduced an optimization of skipping the flush if the TLB
> >> generation that is flushed (as provided in flush_tlb_info) was already
> >> flushed.
> > 
> > Dave,
> > 
> > Can you please review this patch today?
> > 
> > I feel bad (for a good reason) for breaking swap/migration.
> > 
> > Thanks,
> > Nadav
> 
> Ping?
> 
> As you know, this really must go into 5.19, or otherwise aa44284960d5
> must be reverted.

No, aa44284960d5 is not in 5.19-rc: it's in linux-next heading for 5.20.

Hugh
Nadav Amit July 13, 2022, 12:50 a.m. UTC | #4
On Jul 12, 2022, at 5:49 PM, Hugh Dickins <hughd@google.com> wrote:

> On Wed, 13 Jul 2022, Nadav Amit wrote:
>> On Jul 11, 2022, at 10:39 AM, Nadav Amit <namit@vmware.com> wrote:
>>> On Jul 10, 2022, at 4:28 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
>>>> From: Nadav Amit <namit@vmware.com>
>>>> 
>>>> Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when
>>>> possible") introduced an optimization of skipping the flush if the TLB
>>>> generation that is flushed (as provided in flush_tlb_info) was already
>>>> flushed.
>>> 
>>> Dave,
>>> 
>>> Can you please review this patch today?
>>> 
>>> I feel bad (for a good reason) for breaking swap/migration.
>>> 
>>> Thanks,
>>> Nadav
>> 
>> Ping?
>> 
>> As you know, this really must go into 5.19, or otherwise aa44284960d5
>> must be reverted.
> 
> No, aa44284960d5 is not in 5.19-rc: it's in linux-next heading for 5.20.

Oh.. My bad. Thanks for clarifying, Hugh.
Dave Hansen July 19, 2022, 4:13 p.m. UTC | #5
On 7/10/22 16:28, Nadav Amit wrote:
> From: Nadav Amit <namit@vmware.com>
> 
> Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when
> possible") introduced an optimization of skipping the flush if the TLB
> generation that is flushed (as provided in flush_tlb_info) was already
> flushed.
> 
> However, arch_tlbbatch_flush() does not provide any generation in
> flush_tlb_info. As a result, try_to_unmap_one() would not perform any
> TLB flushes.
> 
> Fix it by checking whether f->new_tlb_gen is nonzero; zero is an
> invalid generation value anyhow. To avoid future confusion, introduce
> a TLB_GENERATION_INVALID constant and use it properly. Add some
> assertions to check that no partial flushes are done with
> TLB_GENERATION_INVALID or when f->mm is NULL, since that does not make
> any sense.
> 
> In addition, add the missing unlikely().

I've applied this:

> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?h=x86/mm&id=8f1d56f64f8d6b80dea2d1978d10071132a695c5

Please double-check that my rewording of the commit message looks good
to you.  I also replaced the VM_BUG_ON()s with warnings.  Screwing up
TLB flushing isn't great, but it's also not worth killing the system.

Patch

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 4af5579c7ef7..cda3118f3b27 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -16,6 +16,7 @@ 
 void __flush_tlb_all(void);
 
 #define TLB_FLUSH_ALL	-1UL
+#define TLB_GENERATION_INVALID	0
 
 void cr4_update_irqsoff(unsigned long set, unsigned long clear);
 unsigned long cr4_read_shadow(void);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d9314cc8b81f..0f346c51dd99 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -771,7 +771,8 @@  static void flush_tlb_func(void *info)
 		return;
 	}
 
-	if (f->new_tlb_gen <= local_tlb_gen) {
+	if (unlikely(f->new_tlb_gen != TLB_GENERATION_INVALID &&
+		     f->new_tlb_gen <= local_tlb_gen)) {
 		/*
 		 * The TLB is already up to date in respect to f->new_tlb_gen.
 		 * While the core might be still behind mm_tlb_gen, checking
@@ -843,6 +844,12 @@  static void flush_tlb_func(void *info)
 		/* Partial flush */
 		unsigned long addr = f->start;
 
+		/* Partial flush cannot have invalid generations */
+		VM_BUG_ON(f->new_tlb_gen == TLB_GENERATION_INVALID);
+
+		/* Partial flush must have valid mm */
+		VM_BUG_ON(f->mm == NULL);
+
 		nr_invalidate = (f->end - f->start) >> f->stride_shift;
 
 		while (addr < f->end) {
@@ -1045,7 +1052,8 @@  void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		struct flush_tlb_info *info;
 
 		preempt_disable();
-		info = get_flush_tlb_info(NULL, start, end, 0, false, 0);
+		info = get_flush_tlb_info(NULL, start, end, 0, false,
+					  TLB_GENERATION_INVALID);
 
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
@@ -1214,7 +1222,8 @@  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 
 	int cpu = get_cpu();
 
-	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false, 0);
+	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
+				  TLB_GENERATION_INVALID);
 	/*
 	 * flush_tlb_multi() is not optimized for the common case in which only
 	 * a local TLB flush is needed. Optimize this use-case by calling