
riscv: mm: synchronize MMU after pte change

Message ID CALoQrwfqpaTQ=9F7CrLHKo-fJ7oEt45g3tiFG3E5jyAr5zT2Zw@mail.gmail.com (mailing list archive)
State New, archived
Series riscv: mm: synchronize MMU after pte change

Commit Message

ShihPo Hung June 15, 2019, 6:47 a.m. UTC
Because RISC-V-compliant implementations can cache invalid entries in the TLB,
an SFENCE.VMA is necessary after changes to the page table.
This patch adds an SFENCE.VMA in the vmalloc_fault path.

Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: linux-riscv@lists.infradead.org

---
 arch/riscv/mm/fault.c | 4 ++++
 1 file changed, 4 insertions(+)

--
2.7.4
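
For context, local_flush_tlb_page() on riscv is a thin wrapper that issues an
address-scoped SFENCE.VMA; a minimal sketch, paraphrasing the
arch/riscv/include/asm/tlbflush.h of this era (the exact in-tree definition may
differ):

static inline void local_flush_tlb_page(unsigned long addr)
{
	/*
	 * Fence a single virtual address: order the preceding PTE write
	 * before any subsequent implicit translation of addr on this hart.
	 */
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}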

Comments

Palmer Dabbelt June 17, 2019, 2:40 a.m. UTC | #1
On Fri, 14 Jun 2019 23:47:24 PDT (-0700), shihpo.hung@sifive.com wrote:
> Because RISC-V-compliant implementations can cache invalid entries in the TLB,
> an SFENCE.VMA is necessary after changes to the page table.
> This patch adds an SFENCE.VMA in the vmalloc_fault path.
>
> Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Albert Ou <aou@eecs.berkeley.edu>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: linux-riscv@lists.infradead.org
>
> ---
>  arch/riscv/mm/fault.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 88401d5..3d8fa95 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -29,6 +29,7 @@
>
>  #include <asm/pgalloc.h>
>  #include <asm/ptrace.h>
> +#include <asm/tlbflush.h>
>
>  /*
>   * This routine handles page faults.  It determines the address and the
> @@ -281,6 +282,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>         pte_k = pte_offset_kernel(pmd_k, addr);
>         if (!pte_present(*pte_k))
>             goto no_context;
> +
> +       local_flush_tlb_page(addr);
> +
>         return;
>     }
>  }

This needs a comment.  The rationale is essentially the same as
update_mmu_cache().  In this case I don't think we want to directly call
update_mmu_cache(): if we ever decide that the eager fence over there is
worse for performance (i.e., on an implementation that doesn't cache invalid
entries), we could drop that fence, but we'll still need this one, as I don't
see anything else that fixes spurious faults for the vmalloc region.

This should also CC stable.
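
For reference, the update_mmu_cache() rationale Palmer mentions is an eager
per-address fence after a PTE update; a sketch paraphrasing riscv's
arch/riscv/include/asm/pgtable.h of this era (treat the exact body as
approximate):

static inline void update_mmu_cache(struct vm_area_struct *vma,
	unsigned long address, pte_t *ptep)
{
	/*
	 * The kernel assumes that TLBs don't cache invalid entries, but
	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
	 * cache flush; it is necessary even after writing invalid entries.
	 */
	local_flush_tlb_page(address);
}
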
ShihPo Hung June 17, 2019, 4:22 a.m. UTC | #2
On Mon, Jun 17, 2019 at 10:40 AM Palmer Dabbelt <palmer@sifive.com> wrote:
>
> On Fri, 14 Jun 2019 23:47:24 PDT (-0700), shihpo.hung@sifive.com wrote:
> > [quoted patch trimmed; see the original message above]
>
> This needs a comment.  The rationale is essentially the same as
> update_mmu_cache().  In this case I don't think we want to directly call
> update_mmu_cache(): if we ever decide that the eager fence over there is
> worse for performance (i.e., on an implementation that doesn't cache invalid
> entries), we could drop that fence, but we'll still need this one, as I don't
> see anything else that fixes spurious faults for the vmalloc region.
>
> This should also CC stable.

Thanks, I will send a v2 that adds a comment like the one in
update_mmu_cache() and CCs stable.
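
A v2 along those lines would presumably carry the rationale as a comment next
to the new fence; a hypothetical sketch of the commented hunk (the actual v2
wording may differ):

+	/*
+	 * The kernel assumes that TLBs don't cache invalid entries, but
+	 * in RISC-V, SFENCE.VMA specifies an ordering constraint, not a
+	 * cache flush; it is necessary even after writing invalid entries.
+	 */
+	local_flush_tlb_page(addr);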

Patch

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 88401d5..3d8fa95 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -29,6 +29,7 @@ 

 #include <asm/pgalloc.h>
 #include <asm/ptrace.h>
+#include <asm/tlbflush.h>

 /*
  * This routine handles page faults.  It determines the address and the
@@ -281,6 +282,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
        pte_k = pte_offset_kernel(pmd_k, addr);
        if (!pte_present(*pte_k))
            goto no_context;
+
+       local_flush_tlb_page(addr);
+
        return;
    }
 }