[v4,09/18] KVM: arm64: Allocate shared pKVM hyp stacktrace buffers

Message ID 20220715061027.1612149-10-kaleshsingh@google.com (mailing list archive)
State New, archived
Series KVM nVHE Hypervisor stack unwinder

Commit Message

Kalesh Singh July 15, 2022, 6:10 a.m. UTC
In protected nVHE mode the host cannot directly access
hypervisor memory, so we will dump the hypervisor stacktrace
to a shared buffer with the host.

The minimum size of the buffer required, assuming the min frame
size of [x29, x30] (2 * sizeof(long)), is half the combined size of
the hypervisor and overflow stacks plus an additional entry to
delimit the end of the stacktrace.

The stacktrace buffers are used later in the series to dump the
nVHE hypervisor stacktrace in protected mode.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/memory.h      | 7 +++++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++
 2 files changed, 11 insertions(+)

Comments

Marc Zyngier July 18, 2022, 7:13 a.m. UTC | #1
On Fri, 15 Jul 2022 07:10:18 +0100,
Kalesh Singh <kaleshsingh@google.com> wrote:
> 
> In protected nVHE mode the host cannot directly access
> hypervisor memory, so we will dump the hypervisor stacktrace
> to a shared buffer with the host.
> 
> The minimum size do the buffer required, assuming the min frame

s/do/for/ ?

> size of [x29, x30] (2 * sizeof(long)), is half the combined size of
> the hypervisor and overflow stacks plus an additional entry to
> delimit the end of the stacktrace.

Let me see if I understand this: the maximum stack size is the
combination of the HYP and overflow stacks, and the smallest possible
stack frame is 128bit (only FP+LR). The buffer thus needs to provide
one 64bit entry per stack frame that fits in the combined stack, plus
one entry as an end marker.

So the resulting size is half of the combined stack size, plus a
single 64bit word. Is this correct?
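
To make that concrete, a quick sketch assuming 4 KB pages (PAGE_SIZE == SZ_4K)
and 64-bit longs, with asm/memory.h included:

	/*
	 * combined stack space  = OVERFLOW_STACK_SIZE + PAGE_SIZE = 8192 bytes
	 * max [x29, x30] frames = 8192 / 16 = 512
	 * buffer                = 512 entries + 1 end marker = 513 longs = 4104 bytes
	 */
	_Static_assert(NVHE_STACKTRACE_SIZE == 513 * sizeof(long),
		       "half the combined stacks plus one end-marker entry");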

> 
> The stacktrace buffers are used later in the seried to dump the
> nVHE hypervisor stacktrace when using protected-mode.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---
>  arch/arm64/include/asm/memory.h      | 7 +++++++
>  arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 0af70d9abede..28a4893d4b84 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -113,6 +113,13 @@
>  
>  #define OVERFLOW_STACK_SIZE	SZ_4K
>  
> +/*
> + * With the minimum frame size of [x29, x30], exactly half the combined
> + * sizes of the hyp and overflow stacks is needed to save the unwinded
> + * stacktrace; plus an additional entry to delimit the end.
> + */
> +#define NVHE_STACKTRACE_SIZE	((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
> +
>  /*
>   * Alignment of kernel segments (e.g. .text, .data).
>   *
> diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> index a3d5b34e1249..69e65b457f1c 100644
> --- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> +++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> @@ -9,3 +9,7 @@
>  
>  DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
>  	__aligned(16);
> +
> +#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
> +DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
> +#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */

OK, so the allocation exists even if KVM is not running in protected
mode. I guess this is OK for now, but definitely reinforces my request
that this is only there when compiled for debug mode.

Thanks,

	M.
Fuad Tabba July 18, 2022, 10 a.m. UTC | #2
Hi Kalesh,

On Fri, Jul 15, 2022 at 7:11 AM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> In protected nVHE mode the host cannot directly access
> hypervisor memory, so we will dump the hypervisor stacktrace
> to a shared buffer with the host.
>
> The minimum size do the buffer required, assuming the min frame
> size of [x29, x30] (2 * sizeof(long)), is half the combined size of
> the hypervisor and overflow stacks plus an additional entry to
> delimit the end of the stacktrace.
>
> The stacktrace buffers are used later in the seried to dump the
> nVHE hypervisor stacktrace when using protected-mode.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---
>  arch/arm64/include/asm/memory.h      | 7 +++++++
>  arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++
>  2 files changed, 11 insertions(+)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 0af70d9abede..28a4893d4b84 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -113,6 +113,13 @@
>
>  #define OVERFLOW_STACK_SIZE    SZ_4K
>
> +/*
> + * With the minimum frame size of [x29, x30], exactly half the combined
> + * sizes of the hyp and overflow stacks is needed to save the unwinded
> + * stacktrace; plus an additional entry to delimit the end.
> + */
> +#define NVHE_STACKTRACE_SIZE   ((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))

I do find this computation a bit confusing, especially the addition of
the entry to delimit the end, particularly since in patch 12, where you
add pkvm_save_backtrace_entry(), you need to compensate for it again.

I'm not sure what the best way is; perhaps two definitions, or something
like that, with one for the size and one for the delimiter.
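
For illustration, a split along those lines might look something like this
(hypothetical macro names, not something this series defines):

	/* One entry reserved to delimit the end of the stacktrace. */
	#define NVHE_STACKTRACE_END_SIZE	sizeof(long)

	/*
	 * One entry per minimum-sized [x29, x30] frame that can fit in the
	 * combined hyp and overflow stacks, plus the end delimiter.
	 */
	#define NVHE_STACKTRACE_SIZE	((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + \
					 NVHE_STACKTRACE_END_SIZE)

pkvm_save_backtrace_entry() could then reference the delimiter by name
rather than compensating with a bare sizeof(long).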

Thanks,
/fuad

> +
>  /*
>   * Alignment of kernel segments (e.g. .text, .data).
>   *
> diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> index a3d5b34e1249..69e65b457f1c 100644
> --- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> +++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> @@ -9,3 +9,7 @@
>
>  DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
>         __aligned(16);
> +
> +#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
> +DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
> +#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
> --
> 2.37.0.170.g444d1eabd0-goog
>
Kalesh Singh July 18, 2022, 5:27 p.m. UTC | #3
On Mon, Jul 18, 2022 at 12:13 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 15 Jul 2022 07:10:18 +0100,
> Kalesh Singh <kaleshsingh@google.com> wrote:
> >
> > In protected nVHE mode the host cannot directly access
> > hypervisor memory, so we will dump the hypervisor stacktrace
> > to a shared buffer with the host.
> >
> > The minimum size do the buffer required, assuming the min frame
>
> s/do/for/ ?
Ack

>
> > size of [x29, x30] (2 * sizeof(long)), is half the combined size of
> > the hypervisor and overflow stacks plus an additional entry to
> > delimit the end of the stacktrace.
>
> Let me see if I understand this: the maximum stack size is the
> combination of the HYP and overflow stacks, and the smallest possible
> stack frame is 128bit (only FP+LR). The buffer thus needs to provide
> one 64bit entry per stack frame that fits in the combined stack, plus
> one entry as an end marker.
>
> So the resulting size is half of the combined stack size, plus a
> single 64bit word. Is this correct?

That understanding is correct. So for 64 KB pages, the buffer is slightly
more than half a page (the overflow stack is 4 KB).
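
Spelling out the numbers (illustrative only, with OVERFLOW_STACK_SIZE staying
at SZ_4K):

	/*
	 *  4 KB pages: (4096 +  4096) / 2 + 8 =  4104 bytes (just over one page)
	 * 64 KB pages: (4096 + 65536) / 2 + 8 = 34824 bytes (just over half a page)
	 */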

>
> >
> > The stacktrace buffers are used later in the seried to dump the
> > nVHE hypervisor stacktrace when using protected-mode.
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > ---
> >  arch/arm64/include/asm/memory.h      | 7 +++++++
> >  arch/arm64/kvm/hyp/nvhe/stacktrace.c | 4 ++++
> >  2 files changed, 11 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 0af70d9abede..28a4893d4b84 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -113,6 +113,13 @@
> >
> >  #define OVERFLOW_STACK_SIZE  SZ_4K
> >
> > +/*
> > + * With the minimum frame size of [x29, x30], exactly half the combined
> > + * sizes of the hyp and overflow stacks is needed to save the unwinded
> > + * stacktrace; plus an additional entry to delimit the end.
> > + */
> > +#define NVHE_STACKTRACE_SIZE ((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
> > +
> >  /*
> >   * Alignment of kernel segments (e.g. .text, .data).
> >   *
> > diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> > index a3d5b34e1249..69e65b457f1c 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
> > @@ -9,3 +9,7 @@
> >
> >  DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
> >       __aligned(16);
> > +
> > +#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
> > +DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
> > +#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
>
> OK, so the allocation exists even if KVM is not running in protected
> mode. I guess this is OK for now, but definitely reinforces my request
> that this is only there when compiled for debug mode.
>

Yes, but if you aren't running protected mode you can avoid it by
setting PROTECTED_NVHE_STACKTRACE=n.

Thanks,
Kalesh

> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.

Patch

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0af70d9abede..28a4893d4b84 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -113,6 +113,13 @@ 
 
 #define OVERFLOW_STACK_SIZE	SZ_4K
 
+/*
+ * With the minimum frame size of [x29, x30], exactly half the combined
+ * sizes of the hyp and overflow stacks is needed to save the unwinded
+ * stacktrace; plus an additional entry to delimit the end.
+ */
+#define NVHE_STACKTRACE_SIZE	((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
+
 /*
  * Alignment of kernel segments (e.g. .text, .data).
  *
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index a3d5b34e1249..69e65b457f1c 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,3 +9,7 @@ 
 
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
+
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */