
[1/3] accel/tcg: Set can_do_io at start of lookup_tb_ptr helper

Message ID 20240215150133.2088-2-Jonathan.Cameron@huawei.com (mailing list archive)
State New, archived
Series tcg/i386: Page tables in MMIO memory fixes (CXL)

Commit Message

Jonathan Cameron Feb. 15, 2024, 3:01 p.m. UTC
From: Peter Maydell <peter.maydell@linaro.org>

Peter posted this in the thread trying to fix x86 TCG handling
of page tables in MMIO space (specifically emulated CXL interleaved memory)
https://lore.kernel.org/qemu-devel/CAFEAcA_a_AyQ=Epz3_+CheAT8Crsk9mOu894wbNW_FywamkZiw@mail.gmail.com/#t

Peter, are you happy to give your SoB on this one?

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 accel/tcg/cpu-exec.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Peter Maydell Feb. 15, 2024, 3:11 p.m. UTC | #1
On Thu, 15 Feb 2024 at 15:02, Jonathan Cameron via
<qemu-devel@nongnu.org> wrote:
>
> From: Peter Maydell <peter.maydell@linaro.org>
>
> Peter posted this in the thread trying to fix x86 TCG handling
> of page tables in MMIO space (specifically emulated CXL interleaved memory)
> https://lore.kernel.org/qemu-devel/CAFEAcA_a_AyQ=Epz3_+CheAT8Crsk9mOu894wbNW_FywamkZiw@mail.gmail.com/#t
>
> Peter, are you happy to give your SoB on this one?
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
>  accel/tcg/cpu-exec.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> index 977576ca14..52239a441f 100644
> --- a/accel/tcg/cpu-exec.c
> +++ b/accel/tcg/cpu-exec.c
> @@ -396,6 +396,14 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
>      uint64_t cs_base;
>      uint32_t flags, cflags;
>
> +    /*
> +     * By definition we've just finished a TB, so I/O is OK.
> +     * Avoid the possibility of calling cpu_io_recompile() if
> +     * a page table walk triggered by tb_lookup() calling
> +     * probe_access_internal() happens to touch an MMIO device.
> +     * The next TB, if we chain to it, will clear the flag again.
> +     */
> +    cpu->neg.can_do_io = true;
>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>
>      cflags = curr_cflags(cpu);
> --

Happy to provide a
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

but I'd appreciate RTH's review to confirm this is the right
way to deal with the problem.

thanks
-- PMM
Jonathan Cameron Feb. 15, 2024, 4:04 p.m. UTC | #2
On Thu, 15 Feb 2024 15:11:17 +0000
Peter Maydell <peter.maydell@linaro.org> wrote:

> On Thu, 15 Feb 2024 at 15:02, Jonathan Cameron via
> <qemu-devel@nongnu.org> wrote:
> >
> > From: Peter Maydell <peter.maydell@linaro.org>
> >
> > Peter posted this in the thread trying to fix x86 TCG handling
> > of page tables in MMIO space (specifically emulated CXL interleaved memory)
> > https://lore.kernel.org/qemu-devel/CAFEAcA_a_AyQ=Epz3_+CheAT8Crsk9mOu894wbNW_FywamkZiw@mail.gmail.com/#t
> >
> > Peter, are you happy to give your SoB on this one?
> >

Thanks. For v2 I'll also add a summary of your description of why there is
a bug, based on your email, since the commit message above doesn't really
provide any useful information :(

If a page table is in IO memory and lookup_tb_ptr probes the TLB, it can
result in a page table walk for the instruction fetch.  If that walk hits
IO memory, io_prepare() falsely assumes it needs to do a TB recompile
(cpu_io_recompile()).
Avoid that by setting can_do_io at the start of lookup_tb_ptr.
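
For reference, a rough sketch of the failing call path (simplified; the
function names are the ones mentioned in this thread and in the patch
comment, not a verbatim trace of the QEMU sources):

    helper_lookup_tb_ptr()
      -> tb_lookup() / get_page_addr_code()
        -> probe_access_internal()     /* TLB probe for the instruction fetch */
          -> guest page table walk     /* PTE lives in MMIO (CXL interleaved) */
            -> io_prepare()            /* sees can_do_io == false             */
              -> cpu_io_recompile()    /* wrong: we are between TBs, I/O is OK */

Setting cpu->neg.can_do_io = true before the probe lets io_prepare()
treat the page table access as a legitimate I/O access instead.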


> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > ---
> >  accel/tcg/cpu-exec.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> > index 977576ca14..52239a441f 100644
> > --- a/accel/tcg/cpu-exec.c
> > +++ b/accel/tcg/cpu-exec.c
> > @@ -396,6 +396,14 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
> >      uint64_t cs_base;
> >      uint32_t flags, cflags;
> >
> > +    /*
> > +     * By definition we've just finished a TB, so I/O is OK.
> > +     * Avoid the possibility of calling cpu_io_recompile() if
> > +     * a page table walk triggered by tb_lookup() calling
> > +     * probe_access_internal() happens to touch an MMIO device.
> > +     * The next TB, if we chain to it, will clear the flag again.
> > +     */
> > +    cpu->neg.can_do_io = true;
> >      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
> >
> >      cflags = curr_cflags(cpu);
> > --  
> 
> Happy to provide a
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> 
> but I'd appreciate RTH's review to confirm this is the right
> way to deal with the problem.
> 
> thanks
> -- PMM
Richard Henderson Feb. 15, 2024, 7:11 p.m. UTC | #3
On 2/15/24 05:01, Jonathan Cameron wrote:
> From: Peter Maydell <peter.maydell@linaro.org>
> 
> Peter posted this in the thread trying to fix x86 TCG handling
> of page tables in MMIO space (specifically emulated CXL interleaved memory)
> https://lore.kernel.org/qemu-devel/CAFEAcA_a_AyQ=Epz3_+CheAT8Crsk9mOu894wbNW_FywamkZiw@mail.gmail.com/#t
> 
> Peter, are you happy to give your SoB on this one?
> 
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
>   accel/tcg/cpu-exec.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> index 977576ca14..52239a441f 100644
> --- a/accel/tcg/cpu-exec.c
> +++ b/accel/tcg/cpu-exec.c
> @@ -396,6 +396,14 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
>       uint64_t cs_base;
>       uint32_t flags, cflags;
>   
> +    /*
> +     * By definition we've just finished a TB, so I/O is OK.
> +     * Avoid the possibility of calling cpu_io_recompile() if
> +     * a page table walk triggered by tb_lookup() calling
> +     * probe_access_internal() happens to touch an MMIO device.
> +     * The next TB, if we chain to it, will clear the flag again.
> +     */
> +    cpu->neg.can_do_io = true;

Yes, this mirrors what is done in cpu_loop_exit(), and I can see how
get_page_addr_code() and the page table walk within it would require
can_do_io to be set properly.
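
For context, a paraphrase of the cpu_loop_exit() behaviour being mirrored
(simplified and reconstructed from memory, not a verbatim copy of
accel/tcg/cpu-exec-common.c):

    void cpu_loop_exit(CPUState *cpu)
    {
        /* We are no longer in the middle of a TB, so I/O is allowed again. */
        cpu->neg.can_do_io = true;
        siglongjmp(cpu->jmp_env, 1);
    }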

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

Patch

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 977576ca14..52239a441f 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -396,6 +396,14 @@  const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
     uint64_t cs_base;
     uint32_t flags, cflags;
 
+    /*
+     * By definition we've just finished a TB, so I/O is OK.
+     * Avoid the possibility of calling cpu_io_recompile() if
+     * a page table walk triggered by tb_lookup() calling
+     * probe_access_internal() happens to touch an MMIO device.
+     * The next TB, if we chain to it, will clear the flag again.
+     */
+    cpu->neg.can_do_io = true;
     cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
 
     cflags = curr_cflags(cpu);