
[6/8] x86: Provide API for local kernel TLB flushing

Message ID 20200616074934.1600036-7-keescook@chromium.org (mailing list archive)
State New, archived
Series: seccomp: Implement constant action bitmaps

Commit Message

Kees Cook June 16, 2020, 7:49 a.m. UTC
The seccomp constant action bitmap filter evaluation routine depends
on being able to quickly clear the PTE "accessed" bit for a temporary
allocation. Provide access to the existing CPU-local kernel memory TLB
flushing routines.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/tlbflush.h |  2 ++
 arch/x86/mm/tlb.c               | 12 +++++++++---
 2 files changed, 11 insertions(+), 3 deletions(-)
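
As background for the API below: the series uses the PTE "accessed" bit as a cheap "was this page touched?" probe, which only works if the now-stale TLB entry on the current CPU is flushed after the bit is cleared. A minimal sketch of that pattern (illustrative only; probe_page_accessed() and the probe site are not code from this series, though lookup_address(), pte_mkold() and pte_young() are existing x86 helpers):

/* Sketch: detect whether a temporary kernel page was touched. */
#include <linux/mm.h>
#include <asm/tlbflush.h>

static bool probe_page_accessed(unsigned long addr)
{
	unsigned int level;
	pte_t *ptep = lookup_address(addr, &level);

	if (!ptep)
		return false;

	/* Clear the accessed bit... */
	set_pte(ptep, pte_mkold(*ptep));
	/* ...and drop the stale translation on this CPU only. */
	local_flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* ... run the code under observation here ... */

	/* The CPU re-sets the accessed bit on the next access. */
	return pte_young(*ptep);
}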

Comments

Andy Lutomirski June 16, 2020, 4:59 p.m. UTC | #1
On Tue, Jun 16, 2020 at 12:49 AM Kees Cook <keescook@chromium.org> wrote:
>
> The seccomp constant action bitmap filter evaluation routine depends
> on being able to quickly clear the PTE "accessed" bit for a temporary
> allocation. Provide access to the existing CPU-local kernel memory TLB
> flushing routines.

Can you write a better justification?  Also, unless I'm just
incompetent this morning, I can't find anyone calling this in the
series.
Kees Cook June 16, 2020, 6:37 p.m. UTC | #2
On Tue, Jun 16, 2020 at 09:59:29AM -0700, Andy Lutomirski wrote:
> On Tue, Jun 16, 2020 at 12:49 AM Kees Cook <keescook@chromium.org> wrote:
> >
> > The seccomp constant action bitmap filter evaluation routine depends
> > on being able to quickly clear the PTE "accessed" bit for a temporary
> > allocation. Provide access to the existing CPU-local kernel memory TLB
> > flushing routines.
> 
> Can you write a better justification?  Also, unless I'm just

Er, dunno? That's the entire reason this series needs it.

> incompetent this morning, I can't find anyone calling this in the
> series.

It's in patch 4, seccomp_update_bitmap():
https://lore.kernel.org/lkml/20200616074934.1600036-5-keescook@chromium.org/
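
For readers following along, the shape of that caller is roughly the following. This is a loose sketch with hypothetical names (sketch_update_bitmap(), pte_clear_young_local(), run_filter(), page_was_accessed() and the constant_allow field are illustrations, not the patch-4 code): for each syscall number, clear the accessed bit on the page holding the syscall-argument words, run the filter, and if the filter returns ALLOW without having touched that page, its verdict cannot depend on the arguments and can be cached in a per-arch bitmap.

/*
 * Loose sketch (hypothetical names). The real series arranges
 * struct seccomp_data so that reading "nr" and "arch" does not
 * trip the accessed-bit check.
 */
static void sketch_update_bitmap(struct seccomp_filter *filter,
				 struct seccomp_data *sd,
				 unsigned long args_page)
{
	int nr;

	for (nr = 0; nr < NR_syscalls; nr++) {
		sd->nr = nr;

		pte_clear_young_local(args_page);	/* hypothetical */
		local_flush_tlb_kernel_range(args_page,
					     args_page + PAGE_SIZE);

		if (run_filter(filter, sd) == SECCOMP_RET_ALLOW &&
		    !page_was_accessed(args_page))	/* hypothetical */
			set_bit(nr, filter->constant_allow);
	}
}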

Patch

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 8c87a2e0b660..ae853e77d6bc 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -228,6 +228,8 @@ extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables);
+extern void local_flush_tlb_kernel_range(unsigned long start,
+					 unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 1a3569b43aa5..ffcf2bd0ce1c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -959,16 +959,22 @@ void flush_tlb_all(void)
 	on_each_cpu(do_flush_tlb_all, NULL, 1);
 }
 
-static void do_kernel_range_flush(void *info)
+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	struct flush_tlb_info *f = info;
 	unsigned long addr;
 
 	/* flush range by one by one 'invlpg' */
-	for (addr = f->start; addr < f->end; addr += PAGE_SIZE)
+	for (addr = start; addr < end; addr += PAGE_SIZE)
 		flush_tlb_one_kernel(addr);
 }
 
+static void do_kernel_range_flush(void *info)
+{
+	struct flush_tlb_info *f = info;
+
+	local_flush_tlb_kernel_range(f->start, f->end);
+}
+
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
 	/* Balance as user space task's flush, a bit conservative */
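
The refactor is behavior-preserving for existing callers: flush_tlb_kernel_range() continues (past the truncation above) to dispatch do_kernel_range_flush() to every CPU via on_each_cpu(), while the new export lets a caller that never leaves the current CPU skip the IPI broadcast entirely. A usage sketch of that design choice (flush_local_probe_page() is illustrative, not from the series):

/*
 * Sketch: a caller that stays on one CPU can use the local variant
 * and avoid the cross-CPU IPIs of flush_tlb_kernel_range().
 */
static void flush_local_probe_page(unsigned long addr)
{
	preempt_disable();	/* stay on this CPU across the flush */
	local_flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* ... use the mapping on this CPU ... */

	preempt_enable();
}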