| Message ID | 20240715095702.1222213-3-ethan84@andestech.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Support RISC-V IOPMP |
On Mon, Jul 15, 2024 at 7:59 PM Ethan Chen via <qemu-devel@nongnu.org> wrote:
>
> If the IOMMU granularity is smaller than the TARGET_PAGE size, there may be
> multiple entries within the same page. To obtain the correct result, pass
> the original address to the IOMMU.
>
> Similar to the RISC-V PMP solution, the TLB_INVALID_MASK will be set when
> there are multiple entries in the same page, ensuring that the IOMMU is
> checked on every access.
>
> Signed-off-by: Ethan Chen <ethan84@andestech.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  accel/tcg/cputlb.c | 20 ++++++++++++++++----
>  system/physmem.c   |  4 ++++
>  2 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index edb3715017..7df106fea3 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1062,8 +1062,23 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
>
>      prot = full->prot;
>      asidx = cpu_asidx_from_attrs(cpu, full->attrs);
> -    section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
> +    section = address_space_translate_for_iotlb(cpu, asidx, full->phys_addr,
>                                                  &xlat, &sz, full->attrs, &prot);
> +    /* Update page size */
> +    full->lg_page_size = ctz64(sz);
> +    if (full->lg_page_size > TARGET_PAGE_BITS) {
> +        full->lg_page_size = TARGET_PAGE_BITS;
> +    } else {
> +        sz = TARGET_PAGE_SIZE;
> +    }
> +
> +    is_ram = memory_region_is_ram(section->mr);
> +    is_romd = memory_region_is_romd(section->mr);
> +    /* If the translated mr is ram/rom, make xlat align the TARGET_PAGE */
> +    if (is_ram || is_romd) {
> +        xlat &= TARGET_PAGE_MASK;
> +    }
> +
>      assert(sz >= TARGET_PAGE_SIZE);
>
>      tlb_debug("vaddr=%016" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
> @@ -1076,9 +1091,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
>          read_flags |= TLB_INVALID_MASK;
>      }
>
> -    is_ram = memory_region_is_ram(section->mr);
> -    is_romd = memory_region_is_romd(section->mr);
> -
>      if (is_ram || is_romd) {
>          /* RAM and ROMD both have associated host memory. */
>          addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat;
> diff --git a/system/physmem.c b/system/physmem.c
> index 2154432cb6..346b015447 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -702,6 +702,10 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
>          iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx);
>          addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
>                  | (addr & iotlb.addr_mask));
> +        /* Update size */
> +        if (iotlb.addr_mask != -1 && *plen > iotlb.addr_mask + 1) {
> +            *plen = iotlb.addr_mask + 1;
> +        }
>          /* Update the caller's prot bits to remove permissions the IOMMU
>          * is giving us a failure response for. If we get down to no
>          * permissions left at all we can give up now.
> --
> 2.34.1