[v2,2/7] accel/tcg: suppress IRQ check for special TBs

Message ID 20211125154144.2904741-3-alex.bennee@linaro.org (mailing list archive)
State New, archived
Series: more tcg, plugin, test and build fixes

Commit Message

Alex Bennée Nov. 25, 2021, 3:41 p.m. UTC
When we set cpu->cflags_next_tb it is because we want to carefully
control the execution of the next TB. Currently there is a race in
which the second stage of watchpoint handling is skipped if an IRQ
is processed before we finish executing the instruction that
triggers the watchpoint. Use the new CF_NOIRQ facility to avoid the
race.

We also suppress IRQs when handling precise self-modifying code to
avoid unnecessary bouncing.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/245

---
v2
  - split the CF_NOIRQ implementation
  - only apply CF_NOIRQ for watchpoints/SMC handling
  - minor reword of commit
---
 accel/tcg/cpu-exec.c      | 9 +++++++++
 accel/tcg/translate-all.c | 2 +-
 softmmu/physmem.c         | 2 +-
 3 files changed, 11 insertions(+), 2 deletions(-)
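
The pattern the commit message describes, as a minimal sketch against
QEMU's internal API (the helper names here are illustrative, not code
from the patch): stage one of watchpoint handling requests a special
single-insn TB, and CF_NOIRQ keeps the interrupt path from redirecting
the PC before that TB has run.

    #include "qemu/osdep.h"
    #include "hw/core/cpu.h"
    #include "exec/exec-all.h"  /* CF_NOIRQ, curr_cflags(), cpu_loop_exit_noexc() */

    /* Stage one: a memory access has tripped the watchpoint mid-TB. */
    static void request_precise_reexec(CPUState *cpu)
    {
        /* "1" asks for a TB containing exactly one instruction. */
        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
        cpu_loop_exit_noexc(cpu);  /* longjmp back to the exec loop */
    }

    /* The condition this patch adds to cpu_handle_interrupt(): with a
     * CF_NOIRQ request pending, defer the IRQ so the special TB (and
     * with it the second stage of watchpoint handling) runs before
     * the PC can be redirected to an interrupt handler. */
    static bool noirq_request_pending(CPUState *cpu)
    {
        /* -1 means no special cflags have been requested. */
        return cpu->cflags_next_tb != -1 && (cpu->cflags_next_tb & CF_NOIRQ);
    }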

Comments

Richard Henderson Nov. 26, 2021, 10:39 a.m. UTC | #1
On 11/25/21 4:41 PM, Alex Bennée wrote:
> @@ -1738,7 +1738,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
>       if (current_tb_modified) {
>           page_collection_unlock(pages);
>           /* Force execution of one insn next time.  */
> -        cpu->cflags_next_tb = 1 | curr_cflags(cpu);
> +        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
>           mmap_unlock();
>           cpu_loop_exit_noexc(cpu);
>       }

There's another instance in tb_invalidate_phys_page.

> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 314f8b439c..b43f92e900 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -946,7 +946,7 @@ void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
>                   cpu_loop_exit(cpu);
>               } else {
>                   /* Force execution of one insn next time.  */
> -                cpu->cflags_next_tb = 1 | CF_LAST_IO | curr_cflags(cpu);
> +                cpu->cflags_next_tb = 1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(cpu);
>                   mmap_unlock();
>                   cpu_loop_exit_noexc(cpu);
>               }

And a second instance in this function.


r~
Alex Bennée Nov. 29, 2021, 11:33 a.m. UTC | #2
Richard Henderson <richard.henderson@linaro.org> writes:

> On 11/25/21 4:41 PM, Alex Bennée wrote:
>> @@ -1738,7 +1738,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
>>       if (current_tb_modified) {
>>           page_collection_unlock(pages);
>>           /* Force execution of one insn next time.  */
>> -        cpu->cflags_next_tb = 1 | curr_cflags(cpu);
>> +        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
>>           mmap_unlock();
>>           cpu_loop_exit_noexc(cpu);
>>       }
>
> There's another instance in tb_invalidate_phys_page.
>
>> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
>> index 314f8b439c..b43f92e900 100644
>> --- a/softmmu/physmem.c
>> +++ b/softmmu/physmem.c
>> @@ -946,7 +946,7 @@ void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
>>                   cpu_loop_exit(cpu);
>>               } else {
>>                   /* Force execution of one insn next time.  */
>> -                cpu->cflags_next_tb = 1 | CF_LAST_IO | curr_cflags(cpu);
>> +                cpu->cflags_next_tb = 1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(cpu);
>>                   mmap_unlock();
>>                   cpu_loop_exit_noexc(cpu);
>>               }
>
> And a second instance in this function.

I had skipped this one as icount was in effect, but I guess it can't hurt.
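
For reference, the two instances flagged above carry the same "force
execution of one insn" line, so the requested change would presumably
be the same one-flag addition. A hypothetical sketch of the follow-up
hunks, not part of this v2 patch:

--- hypothetical follow-up: accel/tcg/translate-all.c, tb_invalidate_phys_page()
         /* Force execution of one insn next time.  */
-        cpu->cflags_next_tb = 1 | curr_cflags(cpu);
+        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);

--- hypothetical follow-up: softmmu/physmem.c, replay path in cpu_check_watchpoint()
                 /* Force execution of one insn next time.  */
-                cpu->cflags_next_tb = 1 | CF_LAST_IO | curr_cflags(cpu);
+                cpu->cflags_next_tb = 1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(cpu);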

Patch

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 2d14d02f6c..409ec8c38c 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -721,6 +721,15 @@ static inline bool need_replay_interrupt(int interrupt_request)
 static inline bool cpu_handle_interrupt(CPUState *cpu,
                                         TranslationBlock **last_tb)
 {
+    /*
+     * If we have requested custom cflags with CF_NOIRQ we should
+     * skip checking here. Any pending interrupts will get picked up
+     * by the next TB we execute under normal cflags.
+     */
+    if (cpu->cflags_next_tb != -1 && cpu->cflags_next_tb & CF_NOIRQ) {
+        return false;
+    }
+
     /* Clear the interrupt flag now since we're processing
      * cpu->interrupt_request and cpu->exit_request.
      * Ensure zeroing happens before reading cpu->exit_request or
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index bd0bb81d08..1cd06572de 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1738,7 +1738,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     if (current_tb_modified) {
         page_collection_unlock(pages);
         /* Force execution of one insn next time.  */
-        cpu->cflags_next_tb = 1 | curr_cflags(cpu);
+        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
         mmap_unlock();
         cpu_loop_exit_noexc(cpu);
     }
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 314f8b439c..b43f92e900 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -946,7 +946,7 @@ void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
                 cpu_loop_exit(cpu);
             } else {
                 /* Force execution of one insn next time.  */
-                cpu->cflags_next_tb = 1 | CF_LAST_IO | curr_cflags(cpu);
+                cpu->cflags_next_tb = 1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(cpu);
                 mmap_unlock();
                 cpu_loop_exit_noexc(cpu);
             }
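
A note on why the cpu-exec.c check works: cpu_handle_interrupt() runs
at the top of the execution loop, before the one-shot cflags request
is consumed, so testing cpu->cflags_next_tb there still sees the
pending CF_NOIRQ. The consumption looks roughly like this (paraphrased
from cpu_exec() of this era; surrounding details elided):

    /* Paraphrased from cpu_exec(): cflags_next_tb is a one-shot request. */
    uint32_t cflags = cpu->cflags_next_tb;
    if (cflags == -1) {
        cflags = curr_cflags(cpu);   /* normal execution */
    } else {
        cpu->cflags_next_tb = -1;    /* consume the one-shot request */
    }
    /* tb_lookup() then finds or translates the single-insn TB. */
    tb = tb_lookup(cpu, pc, cs_base, flags, cflags);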