
[v7,01/45] x86/compressed/64: detect/setup SEV/SME features earlier in boot

Message ID 20211110220731.2396491-2-brijesh.singh@amd.com (mailing list archive)
State New
Series Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh Nov. 10, 2021, 10:06 p.m. UTC
From: Michael Roth <michael.roth@amd.com>

With upcoming SEV-SNP support, SEV-related features need to be
initialized earlier in boot, at the same point the initial #VC handler
is set up, so that the SEV-SNP CPUID table can be utilized during the
initial feature checks. Also, SEV-SNP feature detection will rely on
EFI helper functions to scan the EFI config table for the Confidential
Computing blob, and so would need to be implemented at least partially
in C.
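
For orientation, that config-table scan amounts to walking the EFI system
table's config-table array looking for a vendor GUID. A minimal sketch (not
part of this patch; the helper name is made up and the real helpers arrive
in later patches of this series), assuming the 64-bit table layouts from
include/linux/efi.h:

	#include <linux/efi.h>

	/* Illustrative only: find a vendor table (e.g. the CC blob) by GUID. */
	static void *find_vendor_table_sketch(efi_config_table_64_t *tbl,
					      unsigned int nr_tables,
					      efi_guid_t guid)
	{
		unsigned int i;

		for (i = 0; i < nr_tables; i++) {
			if (!efi_guidcmp(tbl[i].guid, guid))
				return (void *)(unsigned long)tbl[i].table;
		}

		return NULL;
	}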

Currently set_sev_encryption_mask() is used to initialize the
sev_status and sme_me_mask globals that advertise which SEV/SME features
are available in a guest. Rename it to sev_enable() to better reflect
what it does (in the boot/compressed kernel, SME is only enabled for
SEV guests anyway), and move it to just after the stage1 #VC handler is
set up so that it can also be used to initialize SEV-SNP in future
patches.

While at it, re-implement it as C code so that all SEV feature
detection can be better consolidated with upcoming SEV-SNP feature
detection, which will also be in C.

The 32-bit entry path remains unchanged, as it never relied on the
set_sev_encryption_mask() initialization to begin with, possibly because
the normal rva() helper for accessing globals is only usable by code in
.head.text. Either way, 32-bit entry for SEV-SNP would likely only be
supported for non-EFI boot paths, so it wouldn't rely on the existing
EFI helper functions and could be handled by a separate, simpler 32-bit
initializer in the future if needed.
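
(For reference, rva() in head_64.S is essentially the following, paraphrased:
it turns a symbol into an offset from startup_32, which 32-bit entry code
adds to a register holding the run-time load address instead of relying on
RIP-relative addressing.)

	#define rva(X) ((X) - startup_32)

	/* Typical 32-bit usage: %ebp holds the run-time address of startup_32 */
	leal	rva(gdt)(%ebp), %eax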

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/head_64.S     |  8 ++++-
 arch/x86/boot/compressed/mem_encrypt.S | 36 ---------------------
 arch/x86/boot/compressed/misc.h        |  4 +--
 arch/x86/boot/compressed/sev.c         | 45 ++++++++++++++++++++++++++
 4 files changed, 54 insertions(+), 39 deletions(-)

Comments

Borislav Petkov Nov. 12, 2021, 4:52 p.m. UTC | #1
On Wed, Nov 10, 2021 at 04:06:47PM -0600, Brijesh Singh wrote:
> +void sev_enable(struct boot_params *bp)
> +{
> +	unsigned int eax, ebx, ecx, edx;
> +
> +	/* Check for the SME/SEV support leaf */
> +	eax = 0x80000000;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	if (eax < 0x8000001f)
> +		return;
> +
> +	/*
> +	 * Check for the SME/SEV feature:
> +	 *   CPUID Fn8000_001F[EAX]
> +	 *   - Bit 0 - Secure Memory Encryption support
> +	 *   - Bit 1 - Secure Encrypted Virtualization support
> +	 *   CPUID Fn8000_001F[EBX]
> +	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> +	 */
> +	eax = 0x8000001f;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	/* Check whether SEV is supported */
> +	if (!(eax & BIT(1)))
> +		return;
> +
> +	/* Check the SEV MSR whether SEV or SME is enabled */
> +	sev_status   = rd_sev_status_msr();
> +
> +	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
> +		error("SEV support indicated by CPUID, but not SEV status MSR.");

What is the practical purpose of this test?

> +	sme_me_mask = 1UL << (ebx & 0x3f);

	sme_me_mask = BIT_ULL(ebx & 0x3f);

Thx.
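
(For reference, paraphrasing include/linux/bits.h: BIT() expands to an
unsigned long while BIT_ULL() expands to an unsigned long long, so the
latter spells out the 64-bit width that sme_me_mask wants.)

	#define BIT(nr)		(UL(1) << (nr))
	#define BIT_ULL(nr)	(ULL(1) << (nr))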
Michael Roth Nov. 12, 2021, 8:30 p.m. UTC | #2
On Fri, Nov 12, 2021 at 05:52:51PM +0100, Borislav Petkov wrote:
> On Wed, Nov 10, 2021 at 04:06:47PM -0600, Brijesh Singh wrote:
> > +void sev_enable(struct boot_params *bp)
> > +{
> > +	unsigned int eax, ebx, ecx, edx;
> > +
> > +	/* Check for the SME/SEV support leaf */
> > +	eax = 0x80000000;
> > +	ecx = 0;
> > +	native_cpuid(&eax, &ebx, &ecx, &edx);
> > +	if (eax < 0x8000001f)
> > +		return;
> > +
> > +	/*
> > +	 * Check for the SME/SEV feature:
> > +	 *   CPUID Fn8000_001F[EAX]
> > +	 *   - Bit 0 - Secure Memory Encryption support
> > +	 *   - Bit 1 - Secure Encrypted Virtualization support
> > +	 *   CPUID Fn8000_001F[EBX]
> > +	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> > +	 */
> > +	eax = 0x8000001f;
> > +	ecx = 0;
> > +	native_cpuid(&eax, &ebx, &ecx, &edx);
> > +	/* Check whether SEV is supported */
> > +	if (!(eax & BIT(1)))
> > +		return;
> > +
> > +	/* Check the SEV MSR whether SEV or SME is enabled */
> > +	sev_status   = rd_sev_status_msr();
> > +
> > +	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
> > +		error("SEV support indicated by CPUID, but not SEV status MSR.");
> 
> What is the practical purpose of this test?

In the current QEMU/KVM implementation the SEV* CPUID bits are only
exposed for SEV guests, so this was more of a sanity check on that. But
looking at things more closely: that's more of a VMM-specific behavior
and isn't necessarily an invalid guest configuration as far as the spec
is concerned, so I think this check should be dropped.

> 
> > +	sme_me_mask = 1UL << (ebx & 0x3f);
> 
> 	sme_me_mask = BIT_ULL(ebx & 0x3f);

Will do.

Thanks,

Mike

> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
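
(A sketch of how the tail of sev_enable() might look with that sanity check
dropped, per the discussion above; illustrative only, not the actual
follow-up revision:)

	/* Set the SME mask only if the SEV status MSR says SEV is enabled. */
	sev_status = rd_sev_status_msr();
	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
		return;

	sme_me_mask = BIT_ULL(ebx & 0x3f);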
Venu Busireddy Nov. 23, 2021, 9:55 p.m. UTC | #3
On 2021-11-12 14:30:01 -0600, Michael Roth wrote:
> On Fri, Nov 12, 2021 at 05:52:51PM +0100, Borislav Petkov wrote:
> > On Wed, Nov 10, 2021 at 04:06:47PM -0600, Brijesh Singh wrote:
> > > +void sev_enable(struct boot_params *bp)
> > > +{
> > > +	unsigned int eax, ebx, ecx, edx;
> > > +
> > > +	/* Check for the SME/SEV support leaf */
> > > +	eax = 0x80000000;
> > > +	ecx = 0;
> > > +	native_cpuid(&eax, &ebx, &ecx, &edx);
> > > +	if (eax < 0x8000001f)
> > > +		return;
> > > +
> > > +	/*
> > > +	 * Check for the SME/SEV feature:
> > > +	 *   CPUID Fn8000_001F[EAX]
> > > +	 *   - Bit 0 - Secure Memory Encryption support
> > > +	 *   - Bit 1 - Secure Encrypted Virtualization support
> > > +	 *   CPUID Fn8000_001F[EBX]
> > > +	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> > > +	 */
> > > +	eax = 0x8000001f;
> > > +	ecx = 0;
> > > +	native_cpuid(&eax, &ebx, &ecx, &edx);
> > > +	/* Check whether SEV is supported */
> > > +	if (!(eax & BIT(1)))
> > > +		return;
> > > +
> > > +	/* Check the SEV MSR whether SEV or SME is enabled */
> > > +	sev_status   = rd_sev_status_msr();
> > > +
> > > +	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
> > > +		error("SEV support indicated by CPUID, but not SEV status MSR.");
> > 
> > What is the practical purpose of this test?
> 
> In the current QEMU/KVM implementation the SEV* CPUID bits are only
> exposed for SEV guests, so this was more of a sanity check on that. But
> looking at things more closely: that's more of a VMM-specific behavior
> and isn't necessarily an invalid guest configuration as far as the spec
> is concerned, so I think this check should be dropped.
> 
> > 
> > > +	sme_me_mask = 1UL << (ebx & 0x3f);
> > 
> > 	sme_me_mask = BIT_ULL(ebx & 0x3f);
> 
> Will do.

Also, could you please remove the references to set_sev_encryption_mask()
at lines 195 and 572? And perhaps reword those comments too?

Thanks,

Venu

> Thanks,
> 
> Mike
> 
> > 
> > Thx.
> > 
> > -- 
> > Regards/Gruss,
> >     Boris.
> > 
> > https://people.kernel.org/tglx/notes-about-netiquette

Patch

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 572c535cf45b..84a922c27e6b 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -447,6 +447,13 @@  SYM_CODE_START(startup_64)
 	call	load_stage1_idt
 	popq	%rsi
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	pushq	%rsi
+	movq	%rsi, %rdi		/* real mode address */
+	call	sev_enable
+	popq	%rsi
+#endif
+
 	/*
 	 * paging_prepare() sets up the trampoline and checks if we need to
 	 * enable 5-level paging.
@@ -569,7 +576,6 @@  SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
  * page-table.
  */
 	pushq	%rsi
-	call	set_sev_encryption_mask
 	call	load_stage2_idt
 
 	/* Pass boot_params to initialize_identity_maps() */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index c1e81a848b2a..311d40f35a4b 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -187,42 +187,6 @@  SYM_CODE_END(startup32_vc_handler)
 	.code64
 
 #include "../../kernel/sev_verify_cbit.S"
-SYM_FUNC_START(set_sev_encryption_mask)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	push	%rbp
-	push	%rdx
-
-	movq	%rsp, %rbp		/* Save current stack pointer */
-
-	call	get_sev_encryption_bit	/* Get the encryption bit position */
-	testl	%eax, %eax
-	jz	.Lno_sev_mask
-
-	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
-
-	/*
-	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
-	 * get_sev_encryption_bit() because this function is 32-bit code and
-	 * shared between 64-bit and 32-bit boot path.
-	 */
-	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
-	rdmsr
-
-	/* Store MSR value in sev_status */
-	shlq	$32, %rdx
-	orq	%rdx, %rax
-	movq	%rax, sev_status(%rip)
-
-.Lno_sev_mask:
-	movq	%rbp, %rsp		/* Restore original stack pointer */
-
-	pop	%rdx
-	pop	%rbp
-#endif
-
-	xor	%rax, %rax
-	ret
-SYM_FUNC_END(set_sev_encryption_mask)
 
 	.data
 
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 16ed360b6692..23e0e395084a 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -120,12 +120,12 @@  static inline void console_init(void)
 { }
 #endif
 
-void set_sev_encryption_mask(void);
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+void sev_enable(struct boot_params *bp);
 void sev_es_shutdown_ghcb(void);
 extern bool sev_es_check_ghcb_fault(unsigned long address);
 #else
+static inline void sev_enable(struct boot_params *bp) { }
 static inline void sev_es_shutdown_ghcb(void) { }
 static inline bool sev_es_check_ghcb_fault(unsigned long address)
 {
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 670e998fe930..c91ad835b78e 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -204,3 +204,48 @@  void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	else if (result != ES_RETRY)
 		sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 }
+
+static inline u64 rd_sev_status_msr(void)
+{
+	unsigned long low, high;
+
+	asm volatile("rdmsr" : "=a" (low), "=d" (high) :
+			"c" (MSR_AMD64_SEV));
+
+	return ((high << 32) | low);
+}
+
+void sev_enable(struct boot_params *bp)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	/* Check for the SME/SEV support leaf */
+	eax = 0x80000000;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	if (eax < 0x8000001f)
+		return;
+
+	/*
+	 * Check for the SME/SEV feature:
+	 *   CPUID Fn8000_001F[EAX]
+	 *   - Bit 0 - Secure Memory Encryption support
+	 *   - Bit 1 - Secure Encrypted Virtualization support
+	 *   CPUID Fn8000_001F[EBX]
+	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
+	 */
+	eax = 0x8000001f;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	/* Check whether SEV is supported */
+	if (!(eax & BIT(1)))
+		return;
+
+	/* Check the SEV MSR whether SEV or SME is enabled */
+	sev_status   = rd_sev_status_msr();
+
+	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
+		error("SEV support indicated by CPUID, but not SEV status MSR.");
+
+	sme_me_mask = 1UL << (ebx & 0x3f);
+}