Message ID | a3b3b05f429a50b5ea061f4e8385d4f8d6d77d58.1595689201.git.zong.li@sifive.com |
---|---|
State | New, archived |
Series | Fix some PMP implementations |
On Sat, Jul 25, 2020 at 8:05 AM Zong Li <zong.li@sifive.com> wrote:
>
> The real physical address should include the 12-bit page offset.
> Omitting it also breaks PMP checking: the minimum PMP granularity is
> 4 bytes, but we always get a physical address that is 4 KiB aligned,
> which means we always use the start address of the page for the PMP
> check of every address in the same page.

So riscv_cpu_tlb_fill() will clear these bits when calling
tlb_set_page(), so this won't have an impact on actual translation
(although it will change the input address for 2-stage translation, but
that seems fine).

Your point about PMP seems correct: since we allow smaller-than-page
granularity, this seems like the right approach.

Can you edit riscv_cpu_get_phys_page_debug() to mask these bits out at
the end? Otherwise we will break what callers of
cpu_get_phys_page_attrs_debug() expect.

Alistair

>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/cpu_helper.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 75d2ae3434..08b069f0c9 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -543,7 +543,8 @@ restart:
>          /* for superpage mappings, make a fake leaf PTE for the TLB's
>             benefit. */
>          target_ulong vpn = addr >> PGSHIFT;
> -        *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
> +        *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
> +                    (addr & ~TARGET_PAGE_MASK);
>
>          /* set permissions on the TLB entry */
>          if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
> --
> 2.27.0
On Tue, Jul 28, 2020 at 6:49 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Sat, Jul 25, 2020 at 8:05 AM Zong Li <zong.li@sifive.com> wrote:
> >
> > The real physical address should include the 12-bit page offset.
> > Omitting it also breaks PMP checking: the minimum PMP granularity is
> > 4 bytes, but we always get a physical address that is 4 KiB aligned,
> > which means we always use the start address of the page for the PMP
> > check of every address in the same page.
>
> So riscv_cpu_tlb_fill() will clear these bits when calling
> tlb_set_page(), so this won't have an impact on actual translation
> (although it will change the input address for 2-stage translation, but
> that seems fine).
>
> Your point about PMP seems correct: since we allow smaller-than-page
> granularity, this seems like the right approach.
>
> Can you edit riscv_cpu_get_phys_page_debug() to mask these bits out at
> the end? Otherwise we will break what callers of
> cpu_get_phys_page_attrs_debug() expect.
>

OK, I checked that already; the callers add these bits back, because
they expect to get the address of the page. Thanks for your review, I
will modify it in the next version.

> Alistair
>
> >
> > Signed-off-by: Zong Li <zong.li@sifive.com>
> > ---
> >  target/riscv/cpu_helper.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> > index 75d2ae3434..08b069f0c9 100644
> > --- a/target/riscv/cpu_helper.c
> > +++ b/target/riscv/cpu_helper.c
> > @@ -543,7 +543,8 @@ restart:
> >          /* for superpage mappings, make a fake leaf PTE for the TLB's
> >             benefit. */
> >          target_ulong vpn = addr >> PGSHIFT;
> > -        *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
> > +        *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
> > +                    (addr & ~TARGET_PAGE_MASK);
> >
> >          /* set permissions on the TLB entry */
> >          if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
> > --
> > 2.27.0
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 75d2ae3434..08b069f0c9 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -543,7 +543,8 @@ restart:
         /* for superpage mappings, make a fake leaf PTE for the TLB's
            benefit. */
         target_ulong vpn = addr >> PGSHIFT;
-        *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
+        *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
+                    (addr & ~TARGET_PAGE_MASK);
 
         /* set permissions on the TLB entry */
         if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
The real physical address should include the 12-bit page offset.
Omitting it also breaks PMP checking: the minimum PMP granularity is
4 bytes, but we always get a physical address that is 4 KiB aligned,
which means we always use the start address of the page for the PMP
check of every address in the same page.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)