
[3/3] x86/sev-es: Improve comments in and around __sev_es_ist_enter/exit()

Message ID 20210217120143.6106-4-joro@8bytes.org (mailing list archive)
State New, archived
Series x86/sev-es: Check for trusted regs->sp in __sev_es_ist_enter()

Commit Message

Joerg Roedel Feb. 17, 2021, 12:01 p.m. UTC
From: Joerg Roedel <jroedel@suse.de>

Better explain why this code is necessary and what it is doing.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

Comments

Borislav Petkov Feb. 17, 2021, 6 p.m. UTC | #1
On Wed, Feb 17, 2021 at 01:01:43PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Better explain why this code is necessary and what it is doing.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/kernel/sev-es.c | 23 ++++++++++++++++-------
>  1 file changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
> index 0df38b185d53..79241bc45f25 100644
> --- a/arch/x86/kernel/sev-es.c
> +++ b/arch/x86/kernel/sev-es.c
> @@ -127,14 +127,20 @@ static __always_inline bool on_vc_stack(unsigned long sp)
>  }
>  
>  /*
> - * This function handles the case when an NMI is raised in the #VC exception
> - * handler entry code. In this case, the IST entry for #VC must be adjusted, so
> - * that any subsequent #VC exception will not overwrite the stack contents of the
> - * interrupted #VC handler.
> + * This function handles the case when an NMI is raised in the #VC
> + * exception handler entry code, before the #VC handler has switched off
> + * its IST stack. In this case, the IST entry for #VC must be adjusted,
> + * so that any nested #VC exception will not overwrite the stack
> + * contents of the interrupted #VC handler.
>   *
>   * The IST entry is adjusted unconditionally so that it can be also be
> - * unconditionally adjusted back in sev_es_ist_exit(). Otherwise a nested
> - * sev_es_ist_exit() call may adjust back the IST entry too early.
> + * unconditionally adjusted back in __sev_es_ist_exit(). Otherwise a
> + * nested sev_es_ist_exit() call may adjust back the IST entry too
> + * early.
> + *
> + * The __sev_es_ist_enter() and __sev_es_ist_exit() functions always run
> + * on the NMI IST stack, as they are only called from NMI handling code
> + * right now.
>   */
>  void noinstr __sev_es_ist_enter(struct pt_regs *regs)
>  {
> @@ -143,7 +149,10 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs)
>  	/* Read old IST entry */
>  	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
>  
> -	/* Make room on the IST stack */
> +	/*
> +	 * Make room on the IST stack - Reserve 8 bytes to store the old
> +	 * IST entry.
> +	 */
>  	if (on_vc_stack(regs->sp) &&
>  	    !user_mode(regs) &&
>  	    !from_syscall_gap(regs))
> -- 

Yah, and then we probably should simplify this __sev_es_ist_enter()
function even more as it is not easy to grok.
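
For reference, the body being discussed here is only partly visible in the
hunk above; pieced together from that hunk and the snippets quoted in this
mail, it reads roughly like this (a sketch, not a verbatim copy of sev-es.c):

void noinstr __sev_es_ist_enter(struct pt_regs *regs)
{
	unsigned long old_ist, new_ist;

	/* Read old IST entry */
	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);

	/* Make room on the IST stack */
	if (on_vc_stack(regs->sp) &&
	    !user_mode(regs) &&
	    !from_syscall_gap(regs))
		new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
	else
		new_ist = old_ist - sizeof(old_ist);

	/* Store old IST entry */
	*(unsigned long *)new_ist = old_ist;

	/* Set new IST entry */
	this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist);
}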

For example, the ALIGN_DOWN(regs->sp, 8) is not really needed, right?

Also, both branches do "- sizeof(old_ist);" so you can just as well do
it unconditionally.

And the sizeof(old_ist) is just a confusing way to write 8, right? We're
64-bit only so there's no need for that, I'd say.
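
Put together, the "Make room" part of the function quoted above might then
look something like this (just a sketch of the idea; it assumes regs->sp is
already 8-byte aligned at this point, which is exactly what dropping the
ALIGN_DOWN relies on):

	if (on_vc_stack(regs->sp) &&
	    !user_mode(regs) &&
	    !from_syscall_gap(regs))
		new_ist = regs->sp;
	else
		new_ist = old_ist;

	/* Reserve 8 bytes to store the old IST entry */
	new_ist -= 8;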

And then you probably should change the comments from

	/* Store old IST entry */

and

	/* Set new IST entry */

to something like:

 /*
  * If on the #VC IST stack, new_ist gets set to point one stack slot
  * further down from the #VC interrupt frame which has been pushed on
  * it during the first #VC exception entry.
  *
  * If not, simply the next slot on the #VC IST stack is set to point...

and here I'm not even sure why we're doing it?

The else branch, when we're not on the #VC stack, why are we doing

	new_ist = old_ist - sizeof(old_ist);

?

I mean, if the NMI handler causes a #VC exception, it will simply run on
the #VC IST stack so why do we have to do that - 8 thing at all?

Thx.

Patch

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 0df38b185d53..79241bc45f25 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -127,14 +127,20 @@  static __always_inline bool on_vc_stack(unsigned long sp)
 }
 
 /*
- * This function handles the case when an NMI is raised in the #VC exception
- * handler entry code. In this case, the IST entry for #VC must be adjusted, so
- * that any subsequent #VC exception will not overwrite the stack contents of the
- * interrupted #VC handler.
+ * This function handles the case when an NMI is raised in the #VC
+ * exception handler entry code, before the #VC handler has switched off
+ * its IST stack. In this case, the IST entry for #VC must be adjusted,
+ * so that any nested #VC exception will not overwrite the stack
+ * contents of the interrupted #VC handler.
  *
  * The IST entry is adjusted unconditionally so that it can be also be
- * unconditionally adjusted back in sev_es_ist_exit(). Otherwise a nested
- * sev_es_ist_exit() call may adjust back the IST entry too early.
+ * unconditionally adjusted back in __sev_es_ist_exit(). Otherwise a
+ * nested sev_es_ist_exit() call may adjust back the IST entry too
+ * early.
+ *
+ * The __sev_es_ist_enter() and __sev_es_ist_exit() functions always run
+ * on the NMI IST stack, as they are only called from NMI handling code
+ * right now.
  */
 void noinstr __sev_es_ist_enter(struct pt_regs *regs)
 {
@@ -143,7 +149,10 @@  void noinstr __sev_es_ist_enter(struct pt_regs *regs)
 	/* Read old IST entry */
 	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
 
-	/* Make room on the IST stack */
+	/*
+	 * Make room on the IST stack - Reserve 8 bytes to store the old
+	 * IST entry.
+	 */
 	if (on_vc_stack(regs->sp) &&
 	    !user_mode(regs) &&
 	    !from_syscall_gap(regs))
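
For context (not part of this patch): the unconditional adjust-back that the
updated comment refers to lives in __sev_es_ist_exit(). Its body is not
quoted in this series, but it roughly amounts to reading the saved old entry
back from the reserved slot and writing it to the TSS again, e.g.:

void noinstr __sev_es_ist_exit(void)
{
	unsigned long ist;

	/* Read IST entry */
	ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);

	/* If the entry is still at the top of the #VC stack, there is nothing to undo */
	if (WARN_ON(ist == __this_cpu_ist_top_va(VC)))
		return;

	/* Read back old IST entry and write it to the TSS */
	this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist);
}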