
[for-4.17] Revert "x86/HVM: also dump stacks from show_execution_state()"

Message ID 20221104144337.36844-1-roger.pau@citrix.com (mailing list archive)
State New, archived

Commit Message

Roger Pau Monné Nov. 4, 2022, 2:43 p.m. UTC
This reverts commit adb715db698bc8ec3b88c24eb88b21e9da5b6c07.

The dumping of stacks for HVM guests is problematic, since it requires
taking the p2m lock in order to walk the guest page tables and the
p2m.

The suggested solution to the issue is to introduce and use a lockless
p2m walker that relies on being executed with interrupts disabled in
order to prevent any p2m pages from being freed while doing the
walk.

Note that modifications of p2m entries are already done atomically in
order to prevent the hardware walker from seeing partially updated
values.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/include/asm/processor.h |  1 +
 xen/arch/x86/traps.c                 | 35 ++++++++--------------------
 2 files changed, 11 insertions(+), 25 deletions(-)
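
For illustration only, here is a minimal sketch of the kind of lockless p2m walk the commit message alludes to. The p2m-specific helpers (map_top_level(), pte_index(), pte_present(), next_level_table(), pte_to_mfn()) are hypothetical placeholders rather than real Xen APIs; only local_irq_save()/local_irq_restore() and read_atomic() are existing Xen primitives. The point is simply that the walk runs entirely with interrupts disabled, so p2m pages cannot be freed and reused underneath it, and every entry is fetched with a single atomic load so the walker never observes a partially written value.

/*
 * Hedged sketch only: the p2m helpers below are placeholders, not Xen APIs.
 * Interrupts stay disabled for the whole walk so the pages backing the p2m
 * cannot be freed and reused while we dereference them, and each entry is
 * read with one atomic load so a concurrent atomic update can never be seen
 * half-written (the same property that protects the hardware walker).
 * Superpage entries are ignored for brevity.
 */
static bool lockless_gfn_to_mfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t *mfn)
{
    unsigned long flags;
    unsigned int level;
    intpte_t pte = 0;
    const intpte_t *table;
    bool found = false;

    local_irq_save(flags);

    table = map_top_level(p2m);                           /* hypothetical */
    for ( level = 4; level > 0; level-- )
    {
        pte = read_atomic(&table[pte_index(gfn, level)]); /* pte_index(): hypothetical */
        if ( !pte_present(pte) )                          /* hypothetical */
            goto out;
        if ( level > 1 )
            table = next_level_table(pte);                /* hypothetical */
    }
    *mfn = pte_to_mfn(pte);                               /* hypothetical */
    found = true;

 out:
    local_irq_restore(flags);
    return found;
}

The write side would then update each entry with a single atomic store (e.g. write_atomic() or an equivalent), which, as the commit message notes, is already how p2m entries are modified.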

Comments

Andrew Cooper Nov. 4, 2022, 2:58 p.m. UTC | #1
On 04/11/2022 14:43, Roger Pau Monne wrote:
> This reverts commit adb715db698bc8ec3b88c24eb88b21e9da5b6c07.
>
> The dumping of stacks for HVM guests is problematic, since it requires
> taking the p2m lock in order to walk the guest page tables and the
> p2m.
>
> The suggested solution to the issue is to introduce and use a lockless
> p2m walker that relies on being executed with interrupts disabled in
> order to prevent any p2m pages from being freed while doing the
> walk.
>
> Note that modifications of p2m entries are already done atomically in
> order to prevent the hardware walker from seeing partially updated
> values.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, as agreed on the
x86 maintainers call.

~Andrew
Henry Wang Nov. 4, 2022, 3:02 p.m. UTC | #2
Hi Roger,

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Subject: [PATCH for-4.17] Revert "x86/HVM: also dump stacks from
> show_execution_state()"
> 
> This reverts commit adb715db698bc8ec3b88c24eb88b21e9da5b6c07.
> 
> The dumping of stacks for HVM guests is problematic, since it requires
> taking the p2m lock in order to walk the guest page tables and the
> p2m.
> 
> The suggested solution to the issue is to introduce and use a lockless
> p2m walker that relies on being executed with interrupts disabled in
> order to prevent any p2m pages from being freed while doing the
> walk.
> 
> Note that modifications of p2m entries are already done atomically in
> order to prevent the hardware walker from seeing partially updated
> values.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Release-acked-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry

Patch

diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 8e2816fae9..4cdafe2c4d 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -493,6 +493,7 @@  static always_inline void rep_nop(void)
 #define cpu_relax() rep_nop()
 
 void show_code(const struct cpu_user_regs *regs);
+void show_stack(const struct cpu_user_regs *regs);
 void show_stack_overflow(unsigned int cpu, const struct cpu_user_regs *regs);
 void show_registers(const struct cpu_user_regs *regs);
 void show_execution_state(const struct cpu_user_regs *regs);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 5c0aabe8a3..c1e5ef1cc4 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -278,6 +278,10 @@  static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
     unsigned long mask = STACK_SIZE;
     void *stack_page = NULL;
 
+    /* Avoid HVM as we don't know what the stack looks like. */
+    if ( is_hvm_vcpu(v) )
+        return;
+
     if ( is_pv_32bit_vcpu(v) )
     {
         compat_show_guest_stack(v, regs, debug_stack_lines);
@@ -586,11 +590,14 @@  static void show_trace(const struct cpu_user_regs *regs)
     printk("\n");
 }
 
-static void show_stack(const struct cpu_user_regs *regs)
+void show_stack(const struct cpu_user_regs *regs)
 {
     unsigned long *stack = ESP_BEFORE_EXCEPTION(regs), *stack_bottom, addr;
     int i;
 
+    if ( guest_mode(regs) )
+        return show_guest_stack(current, regs);
+
     printk("Xen stack trace from "__OP"sp=%p:\n  ", stack);
 
     stack_bottom = _p(get_stack_dump_bottom(regs->rsp));
@@ -655,30 +662,8 @@  void show_execution_state(const struct cpu_user_regs *regs)
     unsigned long flags = console_lock_recursive_irqsave();
 
     show_registers(regs);
-
-    if ( guest_mode(regs) )
-    {
-        struct vcpu *curr = current;
-
-        if ( is_hvm_vcpu(curr) )
-        {
-            /*
-             * Stop interleaving prevention: The necessary P2M lookups
-             * involve locking, which has to occur with IRQs enabled.
-             */
-            console_unlock_recursive_irqrestore(flags);
-
-            show_hvm_stack(curr, regs);
-            return;
-        }
-
-        show_guest_stack(curr, regs);
-    }
-    else
-    {
-        show_code(regs);
-        show_stack(regs);
-    }
+    show_code(regs);
+    show_stack(regs);
 
     console_unlock_recursive_irqrestore(flags);
 }
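
For reference, the net effect of the revert, condensed from the hunks above (paraphrased, not literal source): show_execution_state() now unconditionally prints code and stack, show_stack() redirects guest-mode frames to show_guest_stack(), and show_guest_stack() bails out early for HVM vCPUs, whose stacks Xen can no longer walk without the removed p2m lookups.

/* Condensed from the hunks above; comments added for illustration. */
void show_execution_state(const struct cpu_user_regs *regs)
{
    unsigned long flags = console_lock_recursive_irqsave();

    show_registers(regs);
    show_code(regs);
    show_stack(regs);   /* guest_mode(regs): delegates to show_guest_stack(),
                         * which now returns immediately for HVM vCPUs */

    console_unlock_recursive_irqrestore(flags);
}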