
[v7,02/45] x86/sev: detect/setup SEV/SME features earlier in boot

Message ID 20211110220731.2396491-3-brijesh.singh@amd.com (mailing list archive)
State New
Series Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh Nov. 10, 2021, 10:06 p.m. UTC
From: Michael Roth <michael.roth@amd.com>

sme_enable() handles feature detection for both SEV and SME. Future
patches will also use it for SEV-SNP feature detection/setup, which
will need to be done immediately after the first #VC handler is set up.
Move it now in preparation.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/setup.h | 2 +-
 arch/x86/kernel/head64.c     | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

Comments

Borislav Petkov Nov. 15, 2021, 7:12 p.m. UTC | #1
On Wed, Nov 10, 2021 at 04:06:48PM -0600, Brijesh Singh wrote:
> From: Michael Roth <michael.roth@amd.com>
> 
> sme_enable() handles feature detection for both SEV and SME. Future
> patches will also use it for SEV-SNP feature detection/setup, which
> will need to be done immediately after the first #VC handler is set up.
> Move it now in preparation.

I don't mind the move - what I miss is the reason why you're moving it
up.

Thx.
Michael Roth Nov. 15, 2021, 8:17 p.m. UTC | #2
On Mon, Nov 15, 2021 at 08:12:09PM +0100, Borislav Petkov wrote:
> On Wed, Nov 10, 2021 at 04:06:48PM -0600, Brijesh Singh wrote:
> > From: Michael Roth <michael.roth@amd.com>
> > 
> > sme_enable() handles feature detection for both SEV and SME. Future
> > patches will also use it for SEV-SNP feature detection/setup, which
> > will need to be done immediately after the first #VC handler is set up.
> > Move it now in preparation.
> 
> I don't mind the move - what I miss is the reason why you're moving it
> up.

The early #VC handlers are needed mainly so that early CPUID instructions
can be handled for SEV-ES. In the case of SNP, the CPUID table needs to be
set up at the same time so those #VC handlers can handle CPUID lookups for
SEV-SNP guests. Previously, in v6, that CPUID table setup was done by a
separate routine, snp_cpuid_init(), which was awkward: between the point
where snp_cpuid_init() was called and the point where sme_enable() was
called, we were in an in-between state where some SEV-ES/SEV-SNP features
were in use, but they weren't actually sanity-checked against the SEV
status MSR and CPUID bits until later, in sme_enable(). I tried adding some
of those checks from sme_enable() into snp_cpuid_init(), but you'd suggested
instead moving the CPUID table setup into sme_enable() to avoid the duplication:

https://lore.kernel.org/linux-mm/20211027151325.v3w3nghq5z2o5dto@amd.com/

but in order for that to happen soon enough to make use of the CPUID
table for all CPUID instructions, it needs to be moved to just after the first
#VC handler is set up (where snp_cpuid_init() used to be in v6).

As for why the CPUID table needs to be used for all CPUID instructions:
it's mainly so that the CPUID values used throughout boot can be attested
to later, if the guest owner validates the CPUID values in the CPUID page.
I added some documentation on why this is the case in:

 [PATCH 33/45] KVM: SEV: Add documentation for SEV-SNP CPUID Enforcement

> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
Borislav Petkov Nov. 17, 2021, 1:11 p.m. UTC | #3
On Mon, Nov 15, 2021 at 02:17:15PM -0600, Michael Roth wrote:
> but in order for that to happen soon enough to make use of the CPUID
> table for all CPUID instructions, it needs to be moved to just after the first
> #VC handler is set up (where snp_cpuid_init() used to be in v6).

So, it needs to happen after the initial IDT is loaded on the BSP in
startup_64_setup_env().

So why don't you call sme_enable() right after the
startup_64_setup_env() call and add a comment above it to explain why
this call needs to happen there?

Instead of sticking that call in startup_64_setup_env() where it doesn't
belong conceptually - enabling SME doesn't really have anything to do
with setting up the early environment...

Hmm.
Venu Busireddy Dec. 6, 2021, 11:47 p.m. UTC | #4
On 2021-11-10 16:06:48 -0600, Brijesh Singh wrote:
> From: Michael Roth <michael.roth@amd.com>
> 
> sme_enable() handles feature detection for both SEV and SME. Future
> patches will also use it for SEV-SNP feature detection/setup, which
> will need to be done immediately after the first #VC handler is set up.
> Move it now in preparation.

The previous patch added the function sev_enable(), most of whose code is
duplicated in sme_enable(). Can sme_enable() be changed (and maybe
sev_enable() a bit) so that sme_enable() calls sev_enable() and all the
duplicated code is avoided?

Venu

> 
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/setup.h | 2 +-
>  arch/x86/kernel/head64.c     | 8 ++++----
>  2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
> index a12458a7a8d4..cee1e816fdcd 100644
> --- a/arch/x86/include/asm/setup.h
> +++ b/arch/x86/include/asm/setup.h
> @@ -50,7 +50,7 @@ extern void reserve_standard_io_resources(void);
>  extern void i386_reserve_resources(void);
>  extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
>  extern unsigned long __startup_secondary_64(void);
> -extern void startup_64_setup_env(unsigned long physbase);
> +extern void startup_64_setup_env(unsigned long physbase, struct boot_params *bp);
>  extern void early_setup_idt(void);
>  extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
>  
> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
> index fc5371a7e9d1..4eb83ae7ceb8 100644
> --- a/arch/x86/kernel/head64.c
> +++ b/arch/x86/kernel/head64.c
> @@ -163,9 +163,6 @@ unsigned long __head __startup_64(unsigned long physaddr,
>  	if (load_delta & ~PMD_PAGE_MASK)
>  		for (;;);
>  
> -	/* Activate Secure Memory Encryption (SME) if supported and enabled */
> -	sme_enable(bp);
> -
>  	/* Include the SME encryption mask in the fixup value */
>  	load_delta += sme_get_me_mask();
>  
> @@ -594,7 +591,7 @@ void early_setup_idt(void)
>  /*
>   * Setup boot CPU state needed before kernel switches to virtual addresses.
>   */
> -void __head startup_64_setup_env(unsigned long physbase)
> +void __head startup_64_setup_env(unsigned long physbase, struct boot_params *bp)
>  {
>  	/* Load GDT */
>  	startup_gdt_descr.address = (unsigned long)fixup_pointer(startup_gdt, physbase);
> @@ -606,4 +603,7 @@ void __head startup_64_setup_env(unsigned long physbase)
>  		     "movl %%eax, %%es\n" : : "a"(__KERNEL_DS) : "memory");
>  
>  	startup_64_load_idt(physbase);
> +
> +	/* Activate SEV/SME memory encryption if supported/enabled. */
> +	sme_enable(bp);
>  }
> -- 
> 2.25.1
>

Patch

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index a12458a7a8d4..cee1e816fdcd 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -50,7 +50,7 @@ extern void reserve_standard_io_resources(void);
 extern void i386_reserve_resources(void);
 extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
 extern unsigned long __startup_secondary_64(void);
-extern void startup_64_setup_env(unsigned long physbase);
+extern void startup_64_setup_env(unsigned long physbase, struct boot_params *bp);
 extern void early_setup_idt(void);
 extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
 
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index fc5371a7e9d1..4eb83ae7ceb8 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -163,9 +163,6 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	if (load_delta & ~PMD_PAGE_MASK)
 		for (;;);
 
-	/* Activate Secure Memory Encryption (SME) if supported and enabled */
-	sme_enable(bp);
-
 	/* Include the SME encryption mask in the fixup value */
 	load_delta += sme_get_me_mask();
 
@@ -594,7 +591,7 @@ void early_setup_idt(void)
 /*
  * Setup boot CPU state needed before kernel switches to virtual addresses.
  */
-void __head startup_64_setup_env(unsigned long physbase)
+void __head startup_64_setup_env(unsigned long physbase, struct boot_params *bp)
 {
 	/* Load GDT */
 	startup_gdt_descr.address = (unsigned long)fixup_pointer(startup_gdt, physbase);
@@ -606,4 +603,7 @@ void __head startup_64_setup_env(unsigned long physbase)
 		     "movl %%eax, %%es\n" : : "a"(__KERNEL_DS) : "memory");
 
 	startup_64_load_idt(physbase);
+
+	/* Activate SEV/SME memory encryption if supported/enabled. */
+	sme_enable(bp);
 }