Message ID | 20191217071713.93399-1-aneesh.kumar@linux.ibm.com (mailing list archive) |
---|---|
State | New, archived |
Series | [RFC,1/2] mm/mmu_gather: Invalidate TLB correctly on batch allocation failure and flush |
On Tue, Dec 17, 2019 at 12:47:12PM +0530, Aneesh Kumar K.V wrote:
> Architectures for which we have hardware walkers of Linux page table should
> flush TLB on mmu gather batch allocation failures and batch flush. Some
> architectures like POWER supports multiple translation modes (hash and radix)
> and in the case of POWER only radix translation mode needs the above TLBI.
> This is because for hash translation mode kernel wants to avoid this extra
> flush since there are no hardware walkers of linux page table. With radix
> translation, the hardware also walks linux page table and with that, kernel
> needs to make sure to TLB invalidate page walk cache before page table pages are
> freed.

> Based on changes from Peter Zijlstra <peterz@infradead.org>

AFAICT it is all my patch ;-)

Anyway, this commit:

> More details in
> commit: d86564a2f085 ("mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE")

states that you do an explicit invalidate in __p*_free_tlb(), which, if
I'm not mistaken is still there:

  arch/powerpc/include/asm/nohash/pgalloc.h:      tlb_flush_pgtable(tlb, address);

Or am I reading this wrong? I'm thinking you can remove that now.

> diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
> index b2c0be93929d..feea1a09bbce 100644
> --- a/arch/powerpc/include/asm/tlb.h
> +++ b/arch/powerpc/include/asm/tlb.h
> @@ -27,6 +27,10 @@
>  #define tlb_flush tlb_flush
>  extern void tlb_flush(struct mmu_gather *tlb);
>
> +#ifdef CONFIG_HAVE_RCU_TABLE_FREE

/*
 * PPC-Hash does not use the linux page-tables, so we can avoid
 * the TLBI for page-table freeing, PPC-Radix otoh does use the
 * page-tables and needs the TLBI.
 */

> +#define tlb_needs_table_invalidate() radix_enabled()
> +#endif

Also, are you really sure about the !SMP case? Esp. on Radix I'm
thinking that the PWC (page-walk-cache) can give trouble even on UP,
when we get preempted in the middle of mmu_gather. Hmm?

>  /* Get the generic bits... */
>  #include <asm-generic/tlb.h>
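The __p*_free_tlb() hook referenced above looks roughly like the sketch below (paraphrased from arch/powerpc/include/asm/nohash/pgalloc.h of that era, not a verbatim copy): the nohash path issues its own invalidate before handing the page-table page to the mmu_gather machinery.

/*
 * Paraphrased sketch of the nohash __pte_free_tlb() path under discussion:
 * the architecture issues its own page-walk/TLB invalidate before the
 * page-table page is queued for freeing.
 */
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
				  unsigned long address)
{
	tlb_flush_pgtable(tlb, address);	/* explicit invalidate */
	pgtable_free_tlb(tlb, table, 0);	/* defer the actual free */
}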
On 12/17/19 2:39 PM, Peter Zijlstra wrote:
> On Tue, Dec 17, 2019 at 12:47:12PM +0530, Aneesh Kumar K.V wrote:
>> Architectures for which we have hardware walkers of Linux page table should
>> flush TLB on mmu gather batch allocation failures and batch flush. Some
>> architectures like POWER supports multiple translation modes (hash and radix)
>> and in the case of POWER only radix translation mode needs the above TLBI.
>> This is because for hash translation mode kernel wants to avoid this extra
>> flush since there are no hardware walkers of linux page table. With radix
>> translation, the hardware also walks linux page table and with that, kernel
>> needs to make sure to TLB invalidate page walk cache before page table pages are
>> freed.
>
>> Based on changes from Peter Zijlstra <peterz@infradead.org>
>
> AFAICT it is all my patch ;-)

Yes. I moved the changes you had to upstream. I can update the From: in
the next version if you are ok with that?

>
> Anyway, this commit:
>
>> More details in
>> commit: d86564a2f085 ("mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE")
>
> states that you do an explicit invalidate in __p*_free_tlb(), which, if
> I'm not mistaken is still there:
>
>   arch/powerpc/include/asm/nohash/pgalloc.h:      tlb_flush_pgtable(tlb, address);
>

nohash is not really radix. So we still do the tlb flush from the
pte_free_tlb for nohash and for PPC-radix, we let tlb_table_invalidate
to flush that.

> Or am I reading this wrong? I'm thinking you can remove that now.
>
>> diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
>> index b2c0be93929d..feea1a09bbce 100644
>> --- a/arch/powerpc/include/asm/tlb.h
>> +++ b/arch/powerpc/include/asm/tlb.h
>> @@ -27,6 +27,10 @@
>>  #define tlb_flush tlb_flush
>>  extern void tlb_flush(struct mmu_gather *tlb);
>>
>> +#ifdef CONFIG_HAVE_RCU_TABLE_FREE
> /*
>  * PPC-Hash does not use the linux page-tables, so we can avoid
>  * the TLBI for page-table freeing, PPC-Radix otoh does use the
>  * page-tables and needs the TLBI.
>  */
>> +#define tlb_needs_table_invalidate() radix_enabled()
>> +#endif
>
> Also, are you really sure about the !SMP case? Esp. on Radix I'm
> thinking that the PWC (page-walk-cache) can give trouble even on UP,
> when we get preempted in the middle of mmu_gather. Hmm?
>

Yes, looking at !SMP I guess we do have issue there. we do free the
pagetable pages directly in __p*_free_tlb() with the current code. That
will definitely not work. Are you suggesting we enable
HAVE_RCU_TABLE_FREE even for !SMP?

>>  /* Get the generic bits... */
>>  #include <asm-generic/tlb.h>
>

-aneesh
On Tue, Dec 17, 2019 at 04:18:40PM +0530, Aneesh Kumar K.V wrote:
> On 12/17/19 2:39 PM, Peter Zijlstra wrote:
> > On Tue, Dec 17, 2019 at 12:47:12PM +0530, Aneesh Kumar K.V wrote:
> > > Architectures for which we have hardware walkers of Linux page table should
> > > flush TLB on mmu gather batch allocation failures and batch flush. Some
> > > architectures like POWER supports multiple translation modes (hash and radix)
> > > and in the case of POWER only radix translation mode needs the above TLBI.
> > > This is because for hash translation mode kernel wants to avoid this extra
> > > flush since there are no hardware walkers of linux page table. With radix
> > > translation, the hardware also walks linux page table and with that, kernel
> > > needs to make sure to TLB invalidate page walk cache before page table pages are
> > > freed.
> >
> > > Based on changes from Peter Zijlstra <peterz@infradead.org>
> >
> > AFAICT it is all my patch ;-)
>
> Yes. I moved the changes you had to upstream. I can update the From: in the
> next version if you are ok with that?

Well, since PPC isn't broken per finding the invalidate in
__p*_free_tlb(), lets do these things on top of the patches I proposed
here. Also, you might want to run benchmarks to see if the movement of
that TLBI actually helps (I'm thinking the cost of the PTESYNC might
add up).
Peter Zijlstra <peterz@infradead.org> writes:

> On Tue, Dec 17, 2019 at 04:18:40PM +0530, Aneesh Kumar K.V wrote:
>> On 12/17/19 2:39 PM, Peter Zijlstra wrote:
>> > On Tue, Dec 17, 2019 at 12:47:12PM +0530, Aneesh Kumar K.V wrote:
>> > > Architectures for which we have hardware walkers of Linux page table should
>> > > flush TLB on mmu gather batch allocation failures and batch flush. Some
>> > > architectures like POWER supports multiple translation modes (hash and radix)
>> > > and in the case of POWER only radix translation mode needs the above TLBI.
>> > > This is because for hash translation mode kernel wants to avoid this extra
>> > > flush since there are no hardware walkers of linux page table. With radix
>> > > translation, the hardware also walks linux page table and with that, kernel
>> > > needs to make sure to TLB invalidate page walk cache before page table pages are
>> > > freed.
>> >
>> > > Based on changes from Peter Zijlstra <peterz@infradead.org>
>> >
>> > AFAICT it is all my patch ;-)
>>
>> Yes. I moved the changes you had to upstream. I can update the From: in the
>> next version if you are ok with that?
>
> Well, since PPC isn't broken per finding the invalidate in
> __p*_free_tlb(), lets do these things on top of the patches I proposed
> here. Also, you might want to run benchmarks to see if the movement of
> that TLBI actually helps (I'm thinking the cost of the PTESYNC might add
> up).

Upstream ppc64 is broken after the commit: a46cc7a90fd8
("powerpc/mm/radix: Improve TLB/PWC flushes").

Also the patches are not adding any extra TLBI on either radix or hash.

Considering we need to backport this to stable and other distributions,
how about we do these patches early in your series, before the Kconfig
rename? This should enable stable to pick them up with less
dependencies.

-aneesh
On Wed, Dec 18, 2019 at 10:52:53AM +0530, Aneesh Kumar K.V wrote:
> Upstream ppc64 is broken after the commit: a46cc7a90fd8
> ("powerpc/mm/radix: Improve TLB/PWC flushes").
>
> Also the patches are not adding any extra TLBI on either radix or hash.
>
> Considering we need to backport this to stable and other distributions,
> how about we do these patches early in your series, before the Kconfig
> rename? This should enable stable to pick them up with less
> dependencies.

OK I suppose. Will you send a new series?
diff --git a/arch/Kconfig b/arch/Kconfig
index 48b5e103bdb0..208aad121630 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -396,9 +396,6 @@ config HAVE_ARCH_JUMP_LABEL_RELATIVE
 config HAVE_RCU_TABLE_FREE
 	bool
 
-config HAVE_RCU_TABLE_NO_INVALIDATE
-	bool
-
 config HAVE_MMU_GATHER_PAGE_SIZE
 	bool
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1ec34e16ed65..a15f5584b0de 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -223,7 +223,6 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index b2c0be93929d..feea1a09bbce 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -27,6 +27,10 @@
 #define tlb_flush tlb_flush
 extern void tlb_flush(struct mmu_gather *tlb);
 
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate() radix_enabled()
+#endif
+
 /* Get the generic bits... */
 #include <asm-generic/tlb.h>
 
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index eb24cb1afc11..18e9fb6fcf1b 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -65,7 +65,6 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h
index a2f3fa61ee36..8cb8f3833239 100644
--- a/arch/sparc/include/asm/tlb_64.h
+++ b/arch/sparc/include/asm/tlb_64.h
@@ -28,6 +28,15 @@ void flush_tlb_pending(void);
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 #define tlb_flush(tlb)	flush_tlb_pending()
 
+/*
+ * SPARC64's hardware TLB fill does not use the Linux page-tables
+ * and therefore we don't need a TLBI when freeing page-table pages.
+ */
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate()	(false)
+#endif
+
 #include <asm-generic/tlb.h>
 
 #endif /* _SPARC64_TLB_H */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2b10036fefd0..dcdf13fc0a0b 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -137,13 +137,6 @@
  * When used, an architecture is expected to provide __tlb_remove_table()
  * which does the actual freeing of these pages.
  *
- * HAVE_RCU_TABLE_NO_INVALIDATE
- *
- * This makes HAVE_RCU_TABLE_FREE avoid calling tlb_flush_mmu_tlbonly() before
- * freeing the page-table pages. This can be avoided if you use
- * HAVE_RCU_TABLE_FREE and your architecture does _NOT_ use the Linux
- * page-tables natively.
- *
  * MMU_GATHER_NO_RANGE
  *
  * Use this if your architecture lacks an efficient flush_tlb_range().
@@ -189,8 +182,23 @@ struct mmu_table_batch {
 
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
+/*
+ * This allows an architecture that does not use the linux page-tables for
+ * hardware to skip the TLBI when freeing page tables.
+ */
+#ifndef tlb_needs_table_invalidate
+#define tlb_needs_table_invalidate() (true)
+#endif
+
+#else
+
+#ifdef tlb_needs_table_invalidate
+#error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
 #endif
 
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
+
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 /*
  * If we can't allocate a page to make a big batch of page pointers
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7d70e5c78f97..7c1b8f67af7b 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -102,14 +102,14 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
-	/*
-	 * Invalidate page-table caches used by hardware walkers. Then we still
-	 * need to RCU-sched wait while freeing the pages because software
-	 * walkers can still be in-flight.
-	 */
-	tlb_flush_mmu_tlbonly(tlb);
-#endif
+	if (tlb_needs_table_invalidate()) {
+		/*
+		 * Invalidate page-table caches used by hardware walkers. Then
+		 * we still need to RCU-sched wait while freeing the pages
+		 * because software walkers can still be in-flight.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 static void tlb_remove_table_smp_sync(void *arg)
Architectures for which we have hardware walkers of Linux page table should
flush TLB on mmu gather batch allocation failures and batch flush. Some
architectures like POWER supports multiple translation modes (hash and radix)
and in the case of POWER only radix translation mode needs the above TLBI.
This is because for hash translation mode kernel wants to avoid this extra
flush since there are no hardware walkers of linux page table. With radix
translation, the hardware also walks linux page table and with that, kernel
needs to make sure to TLB invalidate page walk cache before page table pages
are freed.

More details in
commit: d86564a2f085 ("mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE")

Based on changes from Peter Zijlstra <peterz@infradead.org>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/Kconfig                    |  3 ---
 arch/powerpc/Kconfig            |  1 -
 arch/powerpc/include/asm/tlb.h  |  4 ++++
 arch/sparc/Kconfig              |  1 -
 arch/sparc/include/asm/tlb_64.h |  9 +++++++++
 include/asm-generic/tlb.h       | 22 +++++++++++++-------
 mm/mmu_gather.c                 | 16 ++++++++--------
 7 files changed, 36 insertions(+), 20 deletions(-)
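For context on the "mmu gather batch allocation failures" case named above, the caller of tlb_table_invalidate() in mm/mmu_gather.c of this era looks roughly like the sketch below (paraphrased, not a verbatim copy of the kernel source): when the batch page cannot be allocated, the table is freed through the synchronous one-off fallback, so the page-walk caches have to be invalidated right there rather than at the eventual batch flush.

/*
 * Paraphrased sketch of tlb_remove_table(): the GFP_NOWAIT batch
 * allocation can fail, and that failure path must invalidate the
 * hardware page-walk caches before the page-table page is freed.
 */
void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		*batch = (struct mmu_table_batch *)
			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			tlb_table_invalidate(tlb);	/* flush PWC/TLB first */
			tlb_remove_table_one(table);	/* synchronous fallback */
			return;
		}
		(*batch)->nr = 0;
	}

	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);	/* invalidate, then RCU-free the batch */
}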