
x86/mm: In the PTE swapout page reclaim case clear the accessed bit instead of flushing the TLB

Message ID 1539059570-9043-1-git-send-email-amhetre@nvidia.com (mailing list archive)
State New, archived
Series x86/mm: In the PTE swapout page reclaim case clear the accessed bit instead of flushing the TLB

Commit Message

Ashish Mhetre Oct. 9, 2018, 4:32 a.m. UTC
From: Shaohua Li <shli@kernel.org>

We use the accessed bit to age a page at page reclaim time,
and currently we also flush the TLB when doing so.

But in some workloads the TLB flush overhead is very heavy. In my
simple multithreaded app with a lot of swap to several PCIe
SSDs, removing the TLB flush gives about a 20%-30% swapout
speedup.

Fortunately just removing the TLB flush is a valid optimization:
on x86 CPUs, clearing the accessed bit without a TLB flush
doesn't cause data corruption.

It could cause incorrect page aging and the (mistaken) reclaim of
hot pages, but the chance of that should be relatively low.

So as a performance optimization, don't flush the TLB when
clearing the accessed bit; it will eventually be flushed by
a context switch or a VM operation anyway. [ In the rare
event of it not getting flushed for a long time, the delay
shouldn't really matter because there's no real memory
pressure for swapout to react to. ]

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20140408075809.GA1764@kernel.org
[ Rewrote the changelog and the code comments. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/mm/pgtable.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

Comments

Peter Zijlstra Oct. 9, 2018, 7:16 a.m. UTC | #1
On Tue, Oct 09, 2018 at 10:02:50AM +0530, Ashish Mhetre wrote:
> From: Shaohua Li <shli@kernel.org>
> 
> [...]
> 
> So as a performance optimization don't flush the TLB when
> clearing the accessed bit, it will eventually be flushed by
> a context switch or a VM operation anyway. [ In the rare
> event of it not getting flushed for a long time the delay
> shouldn't really matter because there's no real memory
> pressure for swapout to react to. ]

Note that context switches (and here I'm talking about switch_mm(), not
the cheaper switch_to()) do not unconditionally imply a TLB invalidation
these days (on PCID enabled hardware).

So in that regard, the changelog (and the comment) is a little
misleading.

I don't see anything fundamentally wrong with the patch though; just the
wording.
Nadav Amit Oct. 9, 2018, 7:20 a.m. UTC | #2
at 12:16 AM, Peter Zijlstra <peterz@infradead.org> wrote:

> On Tue, Oct 09, 2018 at 10:02:50AM +0530, Ashish Mhetre wrote:
>> [...]
> 
> Note that context switches (and here I'm talking about switch_mm(), not
> the cheaper switch_to()) do not unconditionally imply a TLB invalidation
> these days (on PCID enabled hardware).
> 
> So in that regard, the changelog (and the comment) is a little
> misleading.
> 
> I don't see anything fundamentally wrong with the patch though; just the
> wording.

What am I missing? This is a patch from 2014, no? b13b1d2d8692b?
Ashish Mhetre Oct. 9, 2018, 7:25 a.m. UTC | #3
I am really sorry for sending this patch out to an unintended audience.
This patch is already present in the kernel.
We were referencing it for internal use, and by mistake the original
reviewers got added in CC.
I apologize for that. Please ignore this patch.

Thanks,
Ashish Mhetre


On Tuesday 09 October 2018 12:46 PM, Peter Zijlstra wrote:
> On Tue, Oct 09, 2018 at 10:02:50AM +0530, Ashish Mhetre wrote:
>> [...]
> Note that context switches (and here I'm talking about switch_mm(), not
> the cheaper switch_to()) do not unconditionally imply a TLB invalidation
> these days (on PCID enabled hardware).
>
> So in that regard, the changelog (and the comment) is a little
> misleading.
>
> I don't see anything fundamentally wrong with the patch though; just the
> wording.
Peter Zijlstra Oct. 9, 2018, 7:47 a.m. UTC | #4
On Tue, Oct 09, 2018 at 12:20:58AM -0700, Nadav Amit wrote:
> What am I missing? This is a patch from 2014, no? b13b1d2d8692b ?

Ha! Clearly you're more awake than me ;-)

I'll go grab more of the morning juice...

Patch

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index c96314a..0004ac7 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -399,13 +399,20 @@  int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 int ptep_clear_flush_young(struct vm_area_struct *vma,
 			   unsigned long address, pte_t *ptep)
 {
-	int young;
-
-	young = ptep_test_and_clear_young(vma, address, ptep);
-	if (young)
-		flush_tlb_page(vma, address);
-
-	return young;
+	/*
+	 * On x86 CPUs, clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE