diff mbox series

riscv: pgtable: Enhance set_pte to prevent OoO risk

Message ID 20231208151036.2458921-1-guoren@kernel.org (mailing list archive)
State Rejected, archived
Headers show
Series riscv: pgtable: Enhance set_pte to prevent OoO risk | expand

Checks

Context Check Description
conchuod/vmtest-for-next-PR fail PR summary
conchuod/patch-1-test-1 success .github/scripts/patches/build_rv32_defconfig.sh
conchuod/patch-1-test-2 success .github/scripts/patches/build_rv64_clang_allmodconfig.sh
conchuod/patch-1-test-3 success .github/scripts/patches/build_rv64_gcc_allmodconfig.sh
conchuod/patch-1-test-4 success .github/scripts/patches/build_rv64_nommu_k210_defconfig.sh
conchuod/patch-1-test-5 success .github/scripts/patches/build_rv64_nommu_virt_defconfig.sh
conchuod/patch-1-test-6 fail .github/scripts/patches/checkpatch.sh
conchuod/patch-1-test-7 success .github/scripts/patches/dtb_warn_rv64.sh
conchuod/patch-1-test-8 success .github/scripts/patches/header_inline.sh
conchuod/patch-1-test-9 success .github/scripts/patches/kdoc.sh
conchuod/patch-1-test-10 success .github/scripts/patches/module_param.sh
conchuod/patch-1-test-11 success .github/scripts/patches/verify_fixes.sh
conchuod/patch-1-test-12 success .github/scripts/patches/verify_signedoff.sh

Commit Message

Guo Ren Dec. 8, 2023, 3:10 p.m. UTC
From: Guo Ren <guoren@linux.alibaba.com>

When changing from an invalid pte to a valid one for a kernel page,
there is no need for a tlb_flush. That is okay for the TSO memory model,
but there is an OoO risk for the weak one, e.g.:

sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
...
ld t1, (a1) // a1 = va of above pte

If the ld instruction is executed speculatively before the sd
instruction, it would bring an invalid entry into the TLB, and when the
ld instruction retires, a spurious page fault occurs. Because the
vmemmap is ignored by vmalloc_fault, the spurious page fault would
cause a kernel panic.

This patch was inspired by commit 7f0b1bf04511 ("arm64: Fix barriers
used for page table modifications"). For RISC-V, the spec does not
require that all TLB entries be valid, nor does it require the PTW to
filter out invalid entries. Of course, a micro-arch could provide a more
robust design, but here a software fence is used as the guarantee.
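
For reference, RISCV_FENCE() used by the patch is expected to expand to
a plain fence instruction, roughly as sketched here (the actual
definition lives in arch/riscv/include/asm/fence.h and may differ in
detail):

/* Approximate sketch of the fence helper, not copied from the tree. */
#define RISCV_FENCE(p, s) \
	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")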

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
---
 arch/riscv/include/asm/pgtable.h | 7 +++++++
 1 file changed, 7 insertions(+)

Comments

Guo Ren Dec. 8, 2023, 3:17 p.m. UTC | #1
On Fri, Dec 8, 2023 at 11:10 PM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> When changing from an invalid pte to a valid one for a kernel page,
> there is no need for tlb_flush. It's okay for the TSO memory model, but
> there is an OoO risk for the Weak one. eg:
Sorry, TSO is not immune either. The above sentence should be rewritten
as:

When changing from an invalid pte to a valid one for a kernel page,
there is no guarantee of a tlb_flush in Linux. So there is an OoO risk
for riscv, e.g.:

>
> sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> ...
> ld t1, (a1) // a1 = va of above pte
>
> If the ld instruction is executed speculatively before the sd
> instruction. Then it would bring an invalid entry into the TLB, and when
> the ld instruction retired, a spurious page fault occurred. Because the
> vmemmap has been ignored by vmalloc_fault, the spurious page fault would
> cause kernel panic.
>
> This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
> used for page table modifications"). For RISC-V, there is no requirement
> in the spec to guarantee all tlb entries are valid and no requirement to
> PTW filter out invalid entries. Of course, micro-arch could give a more
> robust design, but here, use a software fence to guarantee.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/include/asm/pgtable.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 294044429e8e..2fae5a5438e0 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
>  static inline void set_pte(pte_t *ptep, pte_t pteval)
>  {
>         *ptep = pteval;
> +
> +       /*
> +        * Only if the new pte is present and kernel, otherwise TLB
> +        * maintenance or update_mmu_cache() have the necessary barriers.
> +        */
> +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> +               RISCV_FENCE(rw,rw);
>  }
>
>  void flush_icache_pte(pte_t pte);
> --
> 2.40.1
>
Alexandre Ghiti Dec. 11, 2023, 5:52 a.m. UTC | #2
Hi Guo,

On Fri, Dec 8, 2023 at 4:10 PM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> When changing from an invalid pte to a valid one for a kernel page,
> there is no need for tlb_flush. It's okay for the TSO memory model, but
> there is an OoO risk for the Weak one. eg:
>
> sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> ...
> ld t1, (a1) // a1 = va of above pte
>
> If the ld instruction is executed speculatively before the sd
> instruction. Then it would bring an invalid entry into the TLB, and when
> the ld instruction retired, a spurious page fault occurred. Because the
> vmemmap has been ignored by vmalloc_fault, the spurious page fault would
> cause kernel panic.
>
> This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
> used for page table modifications"). For RISC-V, there is no requirement
> in the spec to guarantee all tlb entries are valid and no requirement to
> PTW filter out invalid entries. Of course, micro-arch could give a more
> robust design, but here, use a software fence to guarantee.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> ---
>  arch/riscv/include/asm/pgtable.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 294044429e8e..2fae5a5438e0 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
>  static inline void set_pte(pte_t *ptep, pte_t pteval)
>  {
>         *ptep = pteval;
> +
> +       /*
> +        * Only if the new pte is present and kernel, otherwise TLB
> +        * maintenance or update_mmu_cache() have the necessary barriers.
> +        */
> +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> +               RISCV_FENCE(rw,rw);

Only a sfence.vma can guarantee that the PTW actually sees a new
mapping, a fence is not enough. That being said, new kernel mappings
(vmalloc ones) are correctly handled in the kernel by using
flush_cache_vmap(). Did you observe something that this patch fixes?

Thanks,

Alex

>  }
>
>  void flush_icache_pte(pte_t pte);
> --
> 2.40.1
>
Guo Ren Dec. 11, 2023, 8:41 a.m. UTC | #3
On Mon, Dec 11, 2023 at 1:52 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
>
> Hi Guo,
>
> On Fri, Dec 8, 2023 at 4:10 PM <guoren@kernel.org> wrote:
> >
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > When changing from an invalid pte to a valid one for a kernel page,
> > there is no need for tlb_flush. It's okay for the TSO memory model, but
> > there is an OoO risk for the Weak one. eg:
> >
> > sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> > ...
> > ld t1, (a1) // a1 = va of above pte
> >
> > If the ld instruction is executed speculatively before the sd
> > instruction. Then it would bring an invalid entry into the TLB, and when
> > the ld instruction retired, a spurious page fault occurred. Because the
> > vmemmap has been ignored by vmalloc_fault, the spurious page fault would
> > cause kernel panic.
> >
> > This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
> > used for page table modifications"). For RISC-V, there is no requirement
> > in the spec to guarantee all tlb entries are valid and no requirement to
> > PTW filter out invalid entries. Of course, micro-arch could give a more
> > robust design, but here, use a software fence to guarantee.
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > ---
> >  arch/riscv/include/asm/pgtable.h | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> > index 294044429e8e..2fae5a5438e0 100644
> > --- a/arch/riscv/include/asm/pgtable.h
> > +++ b/arch/riscv/include/asm/pgtable.h
> > @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
> >  static inline void set_pte(pte_t *ptep, pte_t pteval)
> >  {
> >         *ptep = pteval;
> > +
> > +       /*
> > +        * Only if the new pte is present and kernel, otherwise TLB
> > +        * maintenance or update_mmu_cache() have the necessary barriers.
> > +        */
> > +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> > +               RISCV_FENCE(rw,rw);
>
> Only a sfence.vma can guarantee that the PTW actually sees a new
> mapping, a fence is not enough. That being said, new kernel mappings
> (vmalloc ones) are correctly handled in the kernel by using
> flush_cache_vmap(). Did you observe something that this patch fixes?
Thx for the reply!

The sfence.vma is too expensive, so the situation is tricky. See the
arm64 commit 7f0b1bf04511 ("arm64: Fix barriers used for page table
modifications"), which is similar. That is, Linux assumes an invalid pte
won't get into the TLB. Think about memory hotplug:

mm/sparse.c: sparse_add_section() {
...
        memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
        if (IS_ERR(memmap))
                return PTR_ERR(memmap);

        /*
         * Poison uninitialized struct pages in order to catch invalid flags
         * combinations.
         */
        page_init_poison(memmap, sizeof(struct page) * nr_pages);
...
}
section_activate() uses set_pte() to set up the vmemmap, and
page_init_poison() then accesses those pages' struct page.

That means:
sd t0, (a0) // a0 = struct page's pte address, pteval is changed from
invalid to valid
 ...
lw/sw t1, (a1) // a1 = va of struct page

If the lw/sw instruction is executed speculatively before the set_pte,
we need a fence to prevent this.
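
A minimal sketch of that window, with a made-up wrapper name just for
illustration (the real flow is section_activate() -> set_pte() ...
page_init_poison()):

/* Illustration only: vmemmap_map_then_poison() is a hypothetical name. */
static void vmemmap_map_then_poison(pte_t *ptep, pte_t pteval,
				    struct page *memmap, unsigned long nr_pages)
{
	set_pte(ptep, pteval);		/* sd: pte goes invalid -> valid */

	/*
	 * Without a fence in set_pte(), the stores below may be issued
	 * before the pte store above and pull a stale, invalid
	 * translation into the TLB.
	 */
	page_init_poison(memmap, sizeof(struct page) * nr_pages);
}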

>
> Thanks,
>
> Alex
>
> >  }
> >
> >  void flush_icache_pte(pte_t pte);
> > --
> > 2.40.1
> >
Alexandre Ghiti Dec. 11, 2023, 9:04 a.m. UTC | #4
On Mon, Dec 11, 2023 at 9:41 AM Guo Ren <guoren@kernel.org> wrote:
>
> On Mon, Dec 11, 2023 at 1:52 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
> >
> > Hi Guo,
> >
> > On Fri, Dec 8, 2023 at 4:10 PM <guoren@kernel.org> wrote:
> > >
> > > From: Guo Ren <guoren@linux.alibaba.com>
> > >
> > > When changing from an invalid pte to a valid one for a kernel page,
> > > there is no need for tlb_flush. It's okay for the TSO memory model, but
> > > there is an OoO risk for the Weak one. eg:
> > >
> > > sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> > > ...
> > > ld t1, (a1) // a1 = va of above pte
> > >
> > > If the ld instruction is executed speculatively before the sd
> > > instruction. Then it would bring an invalid entry into the TLB, and when
> > > the ld instruction retired, a spurious page fault occurred. Because the
> > > vmemmap has been ignored by vmalloc_fault, the spurious page fault would
> > > cause kernel panic.
> > >
> > > This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
> > > used for page table modifications"). For RISC-V, there is no requirement
> > > in the spec to guarantee all tlb entries are valid and no requirement to
> > > PTW filter out invalid entries. Of course, micro-arch could give a more
> > > robust design, but here, use a software fence to guarantee.
> > >
> > > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > > Signed-off-by: Guo Ren <guoren@kernel.org>
> > > ---
> > >  arch/riscv/include/asm/pgtable.h | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> > > index 294044429e8e..2fae5a5438e0 100644
> > > --- a/arch/riscv/include/asm/pgtable.h
> > > +++ b/arch/riscv/include/asm/pgtable.h
> > > @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
> > >  static inline void set_pte(pte_t *ptep, pte_t pteval)
> > >  {
> > >         *ptep = pteval;
> > > +
> > > +       /*
> > > +        * Only if the new pte is present and kernel, otherwise TLB
> > > +        * maintenance or update_mmu_cache() have the necessary barriers.
> > > +        */
> > > +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> > > +               RISCV_FENCE(rw,rw);
> >
> > Only a sfence.vma can guarantee that the PTW actually sees a new
> > mapping, a fence is not enough. That being said, new kernel mappings
> > (vmalloc ones) are correctly handled in the kernel by using
> > flush_cache_vmap(). Did you observe something that this patch fixes?
> Thx for the reply!
>
> The sfence.vma is too expensive, so the situation is tricky. See the
> arm64 commit: 7f0b1bf04511 ("arm64: Fix barriers used for page table
> modifications"), which is similar. That is, linux assumes invalid pte
> won't get into TLB. Think about memory hotplug:
>
> mm/sparse.c: sparse_add_section() {
> ...
>         memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
>         if (IS_ERR(memmap))
>                 return PTR_ERR(memmap);
>
>         /*
>          * Poison uninitialized struct pages in order to catch invalid flags
>          * combinations.
>          */
>         page_init_poison(memmap, sizeof(struct page) * nr_pages);
> ...
> }
> The section_activate would use set_pte to setup vmemmap, and
> page_init_poison would access these pages' struct.

So I think the generic code must be fixed by adding a
flush_cache_vmap() in vmemmap_populate_range() or similar: several
architectures implement flush_cache_vmap() because they need to do
"something" after a new mapping is established, so vmemmap should not
be any different.
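
Roughly along these lines (a sketch only: the helper name is invented,
and the real hook point would be vmemmap_populate_range() or one of its
callers in mm/sparse-vmemmap.c):

#include <linux/mm.h>
#include <linux/cacheflush.h>

/* Sketch: publish newly populated vmemmap PTEs the same way vmalloc
 * mappings are published via flush_cache_vmap(). */
static int __meminit vmemmap_populate_and_publish(unsigned long start,
						  unsigned long end, int node,
						  struct vmem_altmap *altmap)
{
	int ret = vmemmap_populate(start, end, node, altmap);

	if (!ret)
		flush_cache_vmap(start, end);	/* fence/sfence.vma, per arch */

	return ret;
}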

>
> That means:
> sd t0, (a0) // a0 = struct page's pte address, pteval is changed from
> invalid to valid
>  ...
> lw/sw t1, (a1) // a1 = va of struct page
>
> If the lw/sw instruction is executed speculatively before the set_pte,
> we need a fence to prevent this.

Yes I agree, but to me we need the fence property of sfence.vma to
make sure the PTW sees the new pte, unless I'm mistaken and something
in the privileged specification states that a fence is enough?

>
> >
> > Thanks,
> >
> > Alex
> >
> > >  }
> > >
> > >  void flush_icache_pte(pte_t pte);
> > > --
> > > 2.40.1
> > >
>
>
>
> --
> Best Regards
>  Guo Ren
Guo Ren Dec. 11, 2023, 11:36 a.m. UTC | #5
On Mon, Dec 11, 2023 at 5:04 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
>
> On Mon, Dec 11, 2023 at 9:41 AM Guo Ren <guoren@kernel.org> wrote:
> >
> > On Mon, Dec 11, 2023 at 1:52 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
> > >
> > > Hi Guo,
> > >
> > > On Fri, Dec 8, 2023 at 4:10 PM <guoren@kernel.org> wrote:
> > > >
> > > > From: Guo Ren <guoren@linux.alibaba.com>
> > > >
> > > > When changing from an invalid pte to a valid one for a kernel page,
> > > > there is no need for tlb_flush. It's okay for the TSO memory model, but
> > > > there is an OoO risk for the Weak one. eg:
> > > >
> > > > sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> > > > ...
> > > > ld t1, (a1) // a1 = va of above pte
> > > >
> > > > If the ld instruction is executed speculatively before the sd
> > > > instruction. Then it would bring an invalid entry into the TLB, and when
> > > > the ld instruction retired, a spurious page fault occurred. Because the
> > > > vmemmap has been ignored by vmalloc_fault, the spurious page fault would
> > > > cause kernel panic.
> > > >
> > > > This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
> > > > used for page table modifications"). For RISC-V, there is no requirement
> > > > in the spec to guarantee all tlb entries are valid and no requirement to
> > > > PTW filter out invalid entries. Of course, micro-arch could give a more
> > > > robust design, but here, use a software fence to guarantee.
> > > >
> > > > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > > > Signed-off-by: Guo Ren <guoren@kernel.org>
> > > > ---
> > > >  arch/riscv/include/asm/pgtable.h | 7 +++++++
> > > >  1 file changed, 7 insertions(+)
> > > >
> > > > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> > > > index 294044429e8e..2fae5a5438e0 100644
> > > > --- a/arch/riscv/include/asm/pgtable.h
> > > > +++ b/arch/riscv/include/asm/pgtable.h
> > > > @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
> > > >  static inline void set_pte(pte_t *ptep, pte_t pteval)
> > > >  {
> > > >         *ptep = pteval;
> > > > +
> > > > +       /*
> > > > +        * Only if the new pte is present and kernel, otherwise TLB
> > > > +        * maintenance or update_mmu_cache() have the necessary barriers.
> > > > +        */
> > > > +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> > > > +               RISCV_FENCE(rw,rw);
> > >
> > > Only a sfence.vma can guarantee that the PTW actually sees a new
> > > mapping, a fence is not enough. That being said, new kernel mappings
> > > (vmalloc ones) are correctly handled in the kernel by using
> > > flush_cache_vmap(). Did you observe something that this patch fixes?
> > Thx for the reply!
> >
> > The sfence.vma is too expensive, so the situation is tricky. See the
> > arm64 commit: 7f0b1bf04511 ("arm64: Fix barriers used for page table
> > modifications"), which is similar. That is, linux assumes invalid pte
> > won't get into TLB. Think about memory hotplug:
> >
> > mm/sparse.c: sparse_add_section() {
> > ...
> >         memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
> >         if (IS_ERR(memmap))
> >                 return PTR_ERR(memmap);
> >
> >         /*
> >          * Poison uninitialized struct pages in order to catch invalid flags
> >          * combinations.
> >          */
> >         page_init_poison(memmap, sizeof(struct page) * nr_pages);
> > ...
> > }
> > The section_activate would use set_pte to setup vmemmap, and
> > page_init_poison would access these pages' struct.
>
> So I think the generic code must be fixed by adding a
> flush_cache_vmap() in vmemmap_populate_range() or similar: several
> architectures implement flush_cache_vmap() because they need to do
> "something" after a new mapping is established, so vmemmap should not
> be any different.
Perhaps the generic code assumes the TLB won't contain invalid entries.
For invalid -> valid transitions, Linux won't do any tlb_flush; ref:

* Use set_p*_safe(), and elide TLB flushing, when confident that *no*
 * TLB flush will be required as a result of the "set". For example, use
 * in scenarios where it is known ahead of time that the routine is
 * setting non-present entries, or re-setting an existing entry to the
 * same value. Otherwise, use the typical "set" helpers and flush the
 * TLB.
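
(From memory, the set_p*_safe() helpers referenced above are just the
plain setters plus a sanity check, roughly as below; details may differ
from include/linux/pgtable.h.)

#define set_pte_safe(ptep, pte) \
({ \
	WARN_ON_ONCE(pte_present(*ptep) && !pte_same(*ptep, pte)); \
	set_pte(ptep, pte); \
})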

>
> >
> > That means:
> > sd t0, (a0) // a0 = struct page's pte address, pteval is changed from
> > invalid to valid
> >  ...
> > lw/sw t1, (a1) // a1 = va of struct page
> >
> > If the lw/sw instruction is executed speculatively before the set_pte,
> > we need a fence to prevent this.
>
> Yes I agree, but to me we need the fence property of sfence.vma to
> make sure the PTW sees the new pte, unless I'm mistaken and something
> in the privileged specification states that a fence is enough?
All PTWs are triggered by the IFU and load/store units. For the "set"
scenario, we just need to prevent any access to the va before the
set_pte. So:
 - Don't worry about the IFU, which fetches code sequentially.
 - Use a fence to prevent loads/stores from being issued before the
   set_pte.

sfence.vma is for invalidating the TLB, not for the invalid -> valid
transition.

>
> >
> > >
> > > Thanks,
> > >
> > > Alex
> > >
> > > >  }
> > > >
> > > >  void flush_icache_pte(pte_t pte);
> > > > --
> > > > 2.40.1
> > > >
> >
> >
> >
> > --
> > Best Regards
> >  Guo Ren
Samuel Holland Dec. 11, 2023, 3:27 p.m. UTC | #6
On 2023-12-11 5:36 AM, Guo Ren wrote:
> On Mon, Dec 11, 2023 at 5:04 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
>>
>> On Mon, Dec 11, 2023 at 9:41 AM Guo Ren <guoren@kernel.org> wrote:
>>>
>>> On Mon, Dec 11, 2023 at 1:52 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
>>>>
>>>> Hi Guo,
>>>>
>>>> On Fri, Dec 8, 2023 at 4:10 PM <guoren@kernel.org> wrote:
>>>>>
>>>>> From: Guo Ren <guoren@linux.alibaba.com>
>>>>>
>>>>> When changing from an invalid pte to a valid one for a kernel page,
>>>>> there is no need for tlb_flush. It's okay for the TSO memory model, but
>>>>> there is an OoO risk for the Weak one. eg:
>>>>>
>>>>> sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
>>>>> ...
>>>>> ld t1, (a1) // a1 = va of above pte
>>>>>
>>>>> If the ld instruction is executed speculatively before the sd
>>>>> instruction. Then it would bring an invalid entry into the TLB, and when
>>>>> the ld instruction retired, a spurious page fault occurred. Because the
>>>>> vmemmap has been ignored by vmalloc_fault, the spurious page fault would
>>>>> cause kernel panic.
>>>>>
>>>>> This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
>>>>> used for page table modifications"). For RISC-V, there is no requirement
>>>>> in the spec to guarantee all tlb entries are valid and no requirement to
>>>>> PTW filter out invalid entries. Of course, micro-arch could give a more
>>>>> robust design, but here, use a software fence to guarantee.
>>>>>
>>>>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
>>>>> Signed-off-by: Guo Ren <guoren@kernel.org>
>>>>> ---
>>>>>  arch/riscv/include/asm/pgtable.h | 7 +++++++
>>>>>  1 file changed, 7 insertions(+)
>>>>>
>>>>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>>>> index 294044429e8e..2fae5a5438e0 100644
>>>>> --- a/arch/riscv/include/asm/pgtable.h
>>>>> +++ b/arch/riscv/include/asm/pgtable.h
>>>>> @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
>>>>>  static inline void set_pte(pte_t *ptep, pte_t pteval)
>>>>>  {
>>>>>         *ptep = pteval;
>>>>> +
>>>>> +       /*
>>>>> +        * Only if the new pte is present and kernel, otherwise TLB
>>>>> +        * maintenance or update_mmu_cache() have the necessary barriers.
>>>>> +        */
>>>>> +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
>>>>> +               RISCV_FENCE(rw,rw);
>>>>
>>>> Only a sfence.vma can guarantee that the PTW actually sees a new
>>>> mapping, a fence is not enough. That being said, new kernel mappings
>>>> (vmalloc ones) are correctly handled in the kernel by using
>>>> flush_cache_vmap(). Did you observe something that this patch fixes?
>>> Thx for the reply!
>>>
>>> The sfence.vma is too expensive, so the situation is tricky. See the
>>> arm64 commit: 7f0b1bf04511 ("arm64: Fix barriers used for page table
>>> modifications"), which is similar. That is, linux assumes invalid pte
>>> won't get into TLB. Think about memory hotplug:
>>>
>>> mm/sparse.c: sparse_add_section() {
>>> ...
>>>         memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
>>>         if (IS_ERR(memmap))
>>>                 return PTR_ERR(memmap);
>>>
>>>         /*
>>>          * Poison uninitialized struct pages in order to catch invalid flags
>>>          * combinations.
>>>          */
>>>         page_init_poison(memmap, sizeof(struct page) * nr_pages);
>>> ...
>>> }
>>> The section_activate would use set_pte to setup vmemmap, and
>>> page_init_poison would access these pages' struct.
>>
>> So I think the generic code must be fixed by adding a
>> flush_cache_vmap() in vmemmap_populate_range() or similar: several
>> architectures implement flush_cache_vmap() because they need to do
>> "something" after a new mapping is established, so vmemmap should not
>> be any different.
> Perhaps generic code assumes TLB won't contain invalid entries. When
> invalid -> valid, Linux won't do any tlb_flush, ref:
> 
> * Use set_p*_safe(), and elide TLB flushing, when confident that *no*
>  * TLB flush will be required as a result of the "set". For example, use
>  * in scenarios where it is known ahead of time that the routine is
>  * setting non-present entries, or re-setting an existing entry to the
>  * same value. Otherwise, use the typical "set" helpers and flush the
>  * TLB.
> 
>>
>>>
>>> That means:
>>> sd t0, (a0) // a0 = struct page's pte address, pteval is changed from
>>> invalid to valid
>>>  ...
>>> lw/sw t1, (a1) // a1 = va of struct page
>>>
>>> If the lw/sw instruction is executed speculatively before the set_pte,
>>> we need a fence to prevent this.
>>
>> Yes I agree, but to me we need the fence property of sfence.vma to
>> make sure the PTW sees the new pte, unless I'm mistaken and something
>> in the privileged specification states that a fence is enough?
> All PTW are triggered by IFU & load/store. For the "set" scenarios, we
> just need to prevent the access va before the set_pte. So:
>  - Don't worry about IFU, which fetches the code sequentially.
>  - Use a fence prevent load/store before set_pte.
> 
> Sfence.vma is used for invalidate TLB, not for invalid -> valid.

I think the problem is that, architecturally, you can't prevent a PTW by
preventing access to the virtual address. The RISC-V privileged spec allows
caching the results of PTWs from speculative execution, and it allows caching
invalid PTEs. So effectively, as soon as satp is written, software must be able
to handle _any_ virtual address being in the TLB.

To avoid the sfence.vma in the invalid->valid case, you need to handle the
possible page fault, like in Alex's series here:

https://lore.kernel.org/linux-riscv/20231207150348.82096-1-alexghiti@rivosinc.com/
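
Conceptually the check looks something like the sketch below; this is
not the code from the series, just the shape of the idea:

/*
 * If a fault on a kernel address finds that the PTE is actually valid
 * by the time we look, the translation that faulted was stale and the
 * access can simply be retried.
 */
static bool fault_was_spurious(unsigned long addr)
{
	pte_t *ptep = virt_to_kpte(addr);

	return ptep && pte_present(ptep_get(ptep));
}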

Regards,
Samuel
Guo Ren Dec. 12, 2023, 3:07 a.m. UTC | #7
On Mon, Dec 11, 2023 at 11:27 PM Samuel Holland
<samuel.holland@sifive.com> wrote:
>
> On 2023-12-11 5:36 AM, Guo Ren wrote:
> > On Mon, Dec 11, 2023 at 5:04 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
> >>
> >> On Mon, Dec 11, 2023 at 9:41 AM Guo Ren <guoren@kernel.org> wrote:
> >>>
> >>> On Mon, Dec 11, 2023 at 1:52 PM Alexandre Ghiti <alexghiti@rivosinc.com> wrote:
> >>>>
> >>>> Hi Guo,
> >>>>
> >>>> On Fri, Dec 8, 2023 at 4:10 PM <guoren@kernel.org> wrote:
> >>>>>
> >>>>> From: Guo Ren <guoren@linux.alibaba.com>
> >>>>>
> >>>>> When changing from an invalid pte to a valid one for a kernel page,
> >>>>> there is no need for tlb_flush. It's okay for the TSO memory model, but
> >>>>> there is an OoO risk for the Weak one. eg:
> >>>>>
> >>>>> sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> >>>>> ...
> >>>>> ld t1, (a1) // a1 = va of above pte
> >>>>>
> >>>>> If the ld instruction is executed speculatively before the sd
> >>>>> instruction. Then it would bring an invalid entry into the TLB, and when
> >>>>> the ld instruction retired, a spurious page fault occurred. Because the
> >>>>> vmemmap has been ignored by vmalloc_fault, the spurious page fault would
> >>>>> cause kernel panic.
> >>>>>
> >>>>> This patch was inspired by the commit: 7f0b1bf04511 ("arm64: Fix barriers
> >>>>> used for page table modifications"). For RISC-V, there is no requirement
> >>>>> in the spec to guarantee all tlb entries are valid and no requirement to
> >>>>> PTW filter out invalid entries. Of course, micro-arch could give a more
> >>>>> robust design, but here, use a software fence to guarantee.
> >>>>>
> >>>>> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> >>>>> Signed-off-by: Guo Ren <guoren@kernel.org>
> >>>>> ---
> >>>>>  arch/riscv/include/asm/pgtable.h | 7 +++++++
> >>>>>  1 file changed, 7 insertions(+)
> >>>>>
> >>>>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> >>>>> index 294044429e8e..2fae5a5438e0 100644
> >>>>> --- a/arch/riscv/include/asm/pgtable.h
> >>>>> +++ b/arch/riscv/include/asm/pgtable.h
> >>>>> @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
> >>>>>  static inline void set_pte(pte_t *ptep, pte_t pteval)
> >>>>>  {
> >>>>>         *ptep = pteval;
> >>>>> +
> >>>>> +       /*
> >>>>> +        * Only if the new pte is present and kernel, otherwise TLB
> >>>>> +        * maintenance or update_mmu_cache() have the necessary barriers.
> >>>>> +        */
> >>>>> +       if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> >>>>> +               RISCV_FENCE(rw,rw);
> >>>>
> >>>> Only a sfence.vma can guarantee that the PTW actually sees a new
> >>>> mapping, a fence is not enough. That being said, new kernel mappings
> >>>> (vmalloc ones) are correctly handled in the kernel by using
> >>>> flush_cache_vmap(). Did you observe something that this patch fixes?
> >>> Thx for the reply!
> >>>
> >>> The sfence.vma is too expensive, so the situation is tricky. See the
> >>> arm64 commit: 7f0b1bf04511 ("arm64: Fix barriers used for page table
> >>> modifications"), which is similar. That is, linux assumes invalid pte
> >>> won't get into TLB. Think about memory hotplug:
> >>>
> >>> mm/sparse.c: sparse_add_section() {
> >>> ...
> >>>         memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap);
> >>>         if (IS_ERR(memmap))
> >>>                 return PTR_ERR(memmap);
> >>>
> >>>         /*
> >>>          * Poison uninitialized struct pages in order to catch invalid flags
> >>>          * combinations.
> >>>          */
> >>>         page_init_poison(memmap, sizeof(struct page) * nr_pages);
> >>> ...
> >>> }
> >>> The section_activate would use set_pte to setup vmemmap, and
> >>> page_init_poison would access these pages' struct.
> >>
> >> So I think the generic code must be fixed by adding a
> >> flush_cache_vmap() in vmemmap_populate_range() or similar: several
> >> architectures implement flush_cache_vmap() because they need to do
> >> "something" after a new mapping is established, so vmemmap should not
> >> be any different.
> > Perhaps generic code assumes TLB won't contain invalid entries. When
> > invalid -> valid, Linux won't do any tlb_flush, ref:
> >
> > * Use set_p*_safe(), and elide TLB flushing, when confident that *no*
> >  * TLB flush will be required as a result of the "set". For example, use
> >  * in scenarios where it is known ahead of time that the routine is
> >  * setting non-present entries, or re-setting an existing entry to the
> >  * same value. Otherwise, use the typical "set" helpers and flush the
> >  * TLB.
> >
> >>
> >>>
> >>> That means:
> >>> sd t0, (a0) // a0 = struct page's pte address, pteval is changed from
> >>> invalid to valid
> >>>  ...
> >>> lw/sw t1, (a1) // a1 = va of struct page
> >>>
> >>> If the lw/sw instruction is executed speculatively before the set_pte,
> >>> we need a fence to prevent this.
> >>
> >> Yes I agree, but to me we need the fence property of sfence.vma to
> >> make sure the PTW sees the new pte, unless I'm mistaken and something
> >> in the privileged specification states that a fence is enough?
> > All PTW are triggered by IFU & load/store. For the "set" scenarios, we
> > just need to prevent the access va before the set_pte. So:
> >  - Don't worry about IFU, which fetches the code sequentially.
> >  - Use a fence prevent load/store before set_pte.
> >
> > Sfence.vma is used for invalidate TLB, not for invalid -> valid.
>
> I think the problem is that, architecturally, you can't prevent a PTW by
> preventing access to the virtual address. The RISC-V privileged spec allows
> caching the results of PTWs from speculative execution, and it allows caching
> invalid PTEs. So effectively, as soon as satp is written, software must be able
> to handle _any_ virtual address being in the TLB.
>
> To avoid the sfence.vma in the invalid->valid case, you need to handle the
> possible page fault, like in Alex's series here:
>
> https://lore.kernel.org/linux-riscv/20231207150348.82096-1-alexghiti@rivosinc.com/
Just as that patch series says:

+ * The RISC-V kernel does not eagerly emit a sfence.vma after each
+ * new vmalloc mapping, which may result in exceptions:
+ * - if the uarch caches invalid entries, the new mapping would not be
+ *   observed by the page table walker and an invalidation is needed.
+ * - if the uarch does not cache invalid entries, a reordered access
+ *   could "miss" the new mapping and traps: in that case, we only need
+ *   to retry the access, no sfence.vma is required.

I'm talking about the "uarch does not cache invalid entries, a
reordered access could 'miss' the new mapping and traps" case.
Using a fence in set_pte is another solution, and better than retrying.
Of course, the premise is that the fence is not expensive for the
micro-arch.

 - arm64 used this approach: commit 7f0b1bf04511 ("arm64: Fix barriers
   used for page table modifications").
 - x86 is similar because it's TSO, so any load + store instruction pair
   between the set_pte and the next access provides a barrier, e.g.:
   set_pte va
   load (acquire)
   store (release)
   load/store va


Another topic is "retry the access": this is about spurious page faults
on kernel virtual addresses, right? Alex's series handles that in the
riscv Linux kernel, and it would cause a lot of side effects.

>
> Regards,
> Samuel
>
diff mbox series

Patch

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 294044429e8e..2fae5a5438e0 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -511,6 +511,13 @@  static inline int pte_same(pte_t pte_a, pte_t pte_b)
 static inline void set_pte(pte_t *ptep, pte_t pteval)
 {
 	*ptep = pteval;
+
+	/*
+	 * Only if the new pte is present and kernel, otherwise TLB
+	 * maintenance or update_mmu_cache() have the necessary barriers.
+	 */
+	if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
+		RISCV_FENCE(rw,rw);
 }
 
 void flush_icache_pte(pte_t pte);