
[v8,01/40] x86/compressed/64: detect/setup SEV/SME features earlier in boot

Message ID 20211210154332.11526-2-brijesh.singh@amd.com (mailing list archive)
State New, archived
Series Add AMD Secure Nested Paging (SEV-SNP) Guest Support

Commit Message

Brijesh Singh Dec. 10, 2021, 3:42 p.m. UTC
From: Michael Roth <michael.roth@amd.com>

With upcoming SEV-SNP support, SEV-related features need to be
initialized earlier in boot, at the same point the initial #VC handler
is set up, so that the SEV-SNP CPUID table can be utilized during the
initial feature checks. Also, SEV-SNP feature detection will rely on
EFI helper functions to scan the EFI config table for the Confidential
Computing blob, and so would need to be implemented at least partially
in C.

Currently set_sev_encryption_mask() is used to initialize the
sev_status and sme_me_mask globals that advertise what SEV/SME features
are available in a guest. Rename it to sev_enable() to better reflect
that (SME is only enabled in the case of SEV guests in the
boot/compressed kernel), and move it to just after the stage1 #VC
handler is set up so that it can be used to initialize SEV-SNP as well
in future patches.

While at it, re-implement it as C code so that all SEV feature
detection can be better consolidated with upcoming SEV-SNP feature
detection, which will also be in C.

The 32-bit entry path remains unchanged, as it never relied on the
set_sev_encryption_mask() initialization to begin with, possibly due to
the normal rva() helper for accessing globals only being usable by code
in .head.text. Either way, 32-bit entry for SEV-SNP would likely only
be supported for non-EFI boot paths, so it wouldn't rely on the
existing EFI helper functions and could be handled by a
separate/simpler 32-bit initializer in the future if needed.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/head_64.S     | 32 ++++++++++--------
 arch/x86/boot/compressed/mem_encrypt.S | 36 ---------------------
 arch/x86/boot/compressed/misc.h        |  4 +--
 arch/x86/boot/compressed/sev.c         | 45 ++++++++++++++++++++++++++
 4 files changed, 66 insertions(+), 51 deletions(-)

Comments

Dave Hansen Dec. 10, 2021, 6:47 p.m. UTC | #1
On 12/10/21 7:42 AM, Brijesh Singh wrote:
> +	/* Set the SME mask if this is an SEV guest. */
> +	sev_status   = rd_sev_status_msr();

Nit: there's some weird extra whitespace there.  Might be some old
attempts at vertical alignment.
Borislav Petkov Dec. 10, 2021, 7:12 p.m. UTC | #2
On Fri, Dec 10, 2021 at 09:42:53AM -0600, Brijesh Singh wrote:
> @@ -447,6 +446,23 @@ SYM_CODE_START(startup_64)
>  	call	load_stage1_idt
>  	popq	%rsi
>  
> +#ifdef CONFIG_AMD_MEM_ENCRYPT

I guess that ifdeffery is not needed.

> +	/*
> +	 * Now that the stage1 interrupt handlers are set up, #VC exceptions from
> +	 * CPUID instructions can be properly handled for SEV-ES guests.
> +	 *
> +	 * For SEV-SNP, the CPUID table also needs to be set up in advance of any
> +	 * CPUID instructions being issued, so go ahead and do that now via
> +	 * sev_enable(), which will also handle the rest of the SEV-related
> +	 * detection/setup to ensure that has been done in advance of any dependent
> +	 * code.
> +	 */
> +	pushq	%rsi
> +	movq	%rsi, %rdi		/* real mode address */
> +	call	sev_enable
> +	popq	%rsi
> +#endif
> +
>  	/*
>  	 * paging_prepare() sets up the trampoline and checks if we need to
>  	 * enable 5-level paging.

...

> +void sev_enable(struct boot_params *bp)
> +{
> +	unsigned int eax, ebx, ecx, edx;
> +
> +	/* Check for the SME/SEV support leaf */
> +	eax = 0x80000000;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	if (eax < 0x8000001f)
> +		return;
> +
> +	/*
> +	 * Check for the SME/SEV feature:
> +	 *   CPUID Fn8000_001F[EAX]
> +	 *   - Bit 0 - Secure Memory Encryption support
> +	 *   - Bit 1 - Secure Encrypted Virtualization support
> +	 *   CPUID Fn8000_001F[EBX]
> +	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> +	 */
> +	eax = 0x8000001f;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	/* Check whether SEV is supported */
> +	if (!(eax & BIT(1)))
> +		return;
> +
> +	/* Set the SME mask if this is an SEV guest. */
> +	sev_status   = rd_sev_status_msr();
> +

^ Superfluous newline.

> +	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
> +		return;
> +
> +	sme_me_mask = BIT_ULL(ebx & 0x3f);
> +}
> -- 

Thx.
Dave Hansen Dec. 10, 2021, 7:23 p.m. UTC | #3
On 12/10/21 11:12 AM, Borislav Petkov wrote:
> On Fri, Dec 10, 2021 at 09:42:53AM -0600, Brijesh Singh wrote:
>> @@ -447,6 +446,23 @@ SYM_CODE_START(startup_64)
>>  	call	load_stage1_idt
>>  	popq	%rsi
>>  
>> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> 
> I guess that ifdeffery is not needed.

I think sev_enable() is only defined in arch/x86/boot/compressed/sev.c,
which is compiled via:

        vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev.o

So I think we either need the #ifdef or a stub for sev_enable()
somewhere else.

>> +	/*
>> +	 * Now that the stage1 interrupt handlers are set up, #VC exceptions from
>> +	 * CPUID instructions can be properly handled for SEV-ES guests.
>> +	 *
>> +	 * For SEV-SNP, the CPUID table also needs to be set up in advance of any
>> +	 * CPUID instructions being issued, so go ahead and do that now via
>> +	 * sev_enable(), which will also handle the rest of the SEV-related
>> +	 * detection/setup to ensure that has been done in advance of any dependent
>> +	 * code.
>> +	 */
>> +	pushq	%rsi
>> +	movq	%rsi, %rdi		/* real mode address */
>> +	call	sev_enable
>> +	popq	%rsi
>> +#endif
Borislav Petkov Dec. 10, 2021, 7:33 p.m. UTC | #4
On Fri, Dec 10, 2021 at 11:23:24AM -0800, Dave Hansen wrote:
> So I think we either need the #ifdef or a stub for sev_enable()
> somewhere else.

Yeah, there's a stub but in the C header so that won't work for asm
files. Forget what I said.

Thx.
Venu Busireddy Dec. 13, 2021, 7:09 p.m. UTC | #5
On 2021-12-10 09:42:53 -0600, Brijesh Singh wrote:
> From: Michael Roth <michael.roth@amd.com>
> 
> With upcoming SEV-SNP support, SEV-related features need to be
> initialized earlier in boot, at the same point the initial #VC handler
> is set up, so that the SEV-SNP CPUID table can be utilized during the
> initial feature checks. Also, SEV-SNP feature detection will rely on
> EFI helper functions to scan the EFI config table for the Confidential
> Computing blob, and so would need to be implemented at least partially
> in C.
> 
> Currently set_sev_encryption_mask() is used to initialize the
> sev_status and sme_me_mask globals that advertise what SEV/SME features
> are available in a guest. Rename it to sev_enable() to better reflect
> that (SME is only enabled in the case of SEV guests in the
> boot/compressed kernel), and move it to just after the stage1 #VC
> handler is set up so that it can be used to initialize SEV-SNP as well
> in future patches.
> 
> While at it, re-implement it as C code so that all SEV feature
> detection can be better consolidated with upcoming SEV-SNP feature
> detection, which will also be in C.
> 
> The 32-bit entry path remains unchanged, as it never relied on the
> set_sev_encryption_mask() initialization to begin with, possibly due to
> the normal rva() helper for accessing globals only being usable by code
> in .head.text. Either way, 32-bit entry for SEV-SNP would likely only
> be supported for non-EFI boot paths, so it wouldn't rely on the
> existing EFI helper functions and could be handled by a
> separate/simpler 32-bit initializer in the future if needed.
> 
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/boot/compressed/head_64.S     | 32 ++++++++++--------
>  arch/x86/boot/compressed/mem_encrypt.S | 36 ---------------------
>  arch/x86/boot/compressed/misc.h        |  4 +--
>  arch/x86/boot/compressed/sev.c         | 45 ++++++++++++++++++++++++++
>  4 files changed, 66 insertions(+), 51 deletions(-)
> 
> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index 572c535cf45b..20b174adca51 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -191,9 +191,8 @@ SYM_FUNC_START(startup_32)
>  	/*
>  	 * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
>  	 * will do a check. The sev_status memory will be fully initialized
> -	 * with the contents of MSR_AMD_SEV_STATUS later in
> -	 * set_sev_encryption_mask(). For now it is sufficient to know that SEV
> -	 * is active.
> +	 * with the contents of MSR_AMD_SEV_STATUS later via sev_enable(). For
> +	 * now it is sufficient to know that SEV is active.
>  	 */
>  	movl	$1, rva(sev_status)(%ebp)
>  1:
> @@ -447,6 +446,23 @@ SYM_CODE_START(startup_64)
>  	call	load_stage1_idt
>  	popq	%rsi
>  
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +	/*
> +	 * Now that the stage1 interrupt handlers are set up, #VC exceptions from
> +	 * CPUID instructions can be properly handled for SEV-ES guests.
> +	 *
> +	 * For SEV-SNP, the CPUID table also needs to be set up in advance of any
> +	 * CPUID instructions being issued, so go ahead and do that now via
> +	 * sev_enable(), which will also handle the rest of the SEV-related
> +	 * detection/setup to ensure that has been done in advance of any dependent
> +	 * code.
> +	 */
> +	pushq	%rsi
> +	movq	%rsi, %rdi		/* real mode address */
> +	call	sev_enable
> +	popq	%rsi
> +#endif
> +
>  	/*
>  	 * paging_prepare() sets up the trampoline and checks if we need to
>  	 * enable 5-level paging.
> @@ -559,17 +575,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
>  	shrq	$3, %rcx
>  	rep	stosq
>  
> -/*
> - * If running as an SEV guest, the encryption mask is required in the
> - * page-table setup code below. When the guest also has SEV-ES enabled
> - * set_sev_encryption_mask() will cause #VC exceptions, but the stage2
> - * handler can't map its GHCB because the page-table is not set up yet.
> - * So set up the encryption mask here while still on the stage1 #VC
> - * handler. Then load stage2 IDT and switch to the kernel's own
> - * page-table.
> - */
>  	pushq	%rsi
> -	call	set_sev_encryption_mask
>  	call	load_stage2_idt
>  
>  	/* Pass boot_params to initialize_identity_maps() */
> diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> index c1e81a848b2a..311d40f35a4b 100644
> --- a/arch/x86/boot/compressed/mem_encrypt.S
> +++ b/arch/x86/boot/compressed/mem_encrypt.S
> @@ -187,42 +187,6 @@ SYM_CODE_END(startup32_vc_handler)
>  	.code64
>  
>  #include "../../kernel/sev_verify_cbit.S"
> -SYM_FUNC_START(set_sev_encryption_mask)
> -#ifdef CONFIG_AMD_MEM_ENCRYPT
> -	push	%rbp
> -	push	%rdx
> -
> -	movq	%rsp, %rbp		/* Save current stack pointer */
> -
> -	call	get_sev_encryption_bit	/* Get the encryption bit position */
> -	testl	%eax, %eax
> -	jz	.Lno_sev_mask
> -
> -	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
> -
> -	/*
> -	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
> -	 * get_sev_encryption_bit() because this function is 32-bit code and
> -	 * shared between 64-bit and 32-bit boot path.
> -	 */
> -	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
> -	rdmsr
> -
> -	/* Store MSR value in sev_status */
> -	shlq	$32, %rdx
> -	orq	%rdx, %rax
> -	movq	%rax, sev_status(%rip)
> -
> -.Lno_sev_mask:
> -	movq	%rbp, %rsp		/* Restore original stack pointer */
> -
> -	pop	%rdx
> -	pop	%rbp
> -#endif
> -
> -	xor	%rax, %rax
> -	ret
> -SYM_FUNC_END(set_sev_encryption_mask)
>  
>  	.data
>  
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 16ed360b6692..23e0e395084a 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -120,12 +120,12 @@ static inline void console_init(void)
>  { }
>  #endif
>  
> -void set_sev_encryption_mask(void);
> -
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
> +void sev_enable(struct boot_params *bp);
>  void sev_es_shutdown_ghcb(void);
>  extern bool sev_es_check_ghcb_fault(unsigned long address);
>  #else
> +static inline void sev_enable(struct boot_params *bp) { }
>  static inline void sev_es_shutdown_ghcb(void) { }
>  static inline bool sev_es_check_ghcb_fault(unsigned long address)
>  {
> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> index 28bcf04c022e..8eebdf589a90 100644
> --- a/arch/x86/boot/compressed/sev.c
> +++ b/arch/x86/boot/compressed/sev.c
> @@ -204,3 +204,48 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
>  	else if (result != ES_RETRY)
>  		sev_es_terminate(GHCB_SEV_ES_GEN_REQ);
>  }
> +
> +static inline u64 rd_sev_status_msr(void)
> +{
> +	unsigned long low, high;
> +
> +	asm volatile("rdmsr" : "=a" (low), "=d" (high) :
> +			"c" (MSR_AMD64_SEV));
> +
> +	return ((high << 32) | low);
> +}
> +
> +void sev_enable(struct boot_params *bp)
> +{
> +	unsigned int eax, ebx, ecx, edx;
> +
> +	/* Check for the SME/SEV support leaf */
> +	eax = 0x80000000;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	if (eax < 0x8000001f)
> +		return;
> +
> +	/*
> +	 * Check for the SME/SEV feature:
> +	 *   CPUID Fn8000_001F[EAX]
> +	 *   - Bit 0 - Secure Memory Encryption support
> +	 *   - Bit 1 - Secure Encrypted Virtualization support
> +	 *   CPUID Fn8000_001F[EBX]
> +	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> +	 */
> +	eax = 0x8000001f;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	/* Check whether SEV is supported */
> +	if (!(eax & BIT(1)))
> +		return;
> +
> +	/* Set the SME mask if this is an SEV guest. */
> +	sev_status   = rd_sev_status_msr();
> +
> +	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
> +		return;
> +
> +	sme_me_mask = BIT_ULL(ebx & 0x3f);

I made this suggestion while reviewing v7 too, but it appears that it
fell through the cracks. Most of the code in sev_enable() is duplicated
from sme_enable(). Wouldn't it be better to put all that common code
in a different function, and call that function from sme_enable()
and sev_enable()?

Venu
Borislav Petkov Dec. 13, 2021, 7:17 p.m. UTC | #6
On Mon, Dec 13, 2021 at 01:09:19PM -0600, Venu Busireddy wrote:
> I made this suggestion while reviewing v7 too, but it appears that it
> fell through the cracks. Most of the code in sev_enable() is duplicated
> from sme_enable(). Wouldn't it be better to put all that common code
> in a different function, and call that function from sme_enable()
> and sev_enable()?

How about you look where both functions are defined? Which kernel stages?

And please trim your mails when you reply.
Venu Busireddy Dec. 14, 2021, 5:46 p.m. UTC | #7
On 2021-12-13 20:17:31 +0100, Borislav Petkov wrote:
> On Mon, Dec 13, 2021 at 01:09:19PM -0600, Venu Busireddy wrote:
> > I made this suggestion while reviewing v7 too, but it appears that it
> > fell through the cracks. Most of the code in sev_enable() is duplicated
> > from sme_enable(). Wouldn't it be better to put all that common code
> > in a different function, and call that function from sme_enable()
> > and sev_enable()?
> 
> How about you look where both functions are defined? Which kernel stages?

What I am suggesting should not have anything to do with the boot stage
of the kernel.

For example, both these functions call native_cpuid(), which is declared
as an inline function. I am merely suggesting to do something similar
to avoid the code duplication.

Venu
Borislav Petkov Dec. 14, 2021, 7:10 p.m. UTC | #8
On Tue, Dec 14, 2021 at 11:46:14AM -0600, Venu Busireddy wrote:
> What I am suggesting should not have anything to do with the boot stage
> of the kernel.

I know exactly what you're suggesting.

> For example, both these functions call native_cpuid(), which is declared
> as an inline function. I am merely suggesting to do something similar
> to avoid the code duplication.

Try it yourself. If you can come up with something halfway readable and
it builds, I'm willing to take a look.
Venu Busireddy Dec. 15, 2021, 12:14 a.m. UTC | #9
On 2021-12-14 20:10:16 +0100, Borislav Petkov wrote:
> On Tue, Dec 14, 2021 at 11:46:14AM -0600, Venu Busireddy wrote:
> > What I am suggesting should not have anything to do with the boot stage
> > of the kernel.
> 
> I know exactly what you're suggesting.
> 
> > For example, both these functions call native_cpuid(), which is declared
> > as an inline function. I am merely suggesting to do something similar
> > to avoid the code duplication.
> 
> Try it yourself. If you can come up with something halfway readable and
> it builds, I'm willing to take a look.

Patch (to be applied on top of sev-snp-v8 branch of
https://github.com/AMDESE/linux.git) is attached at the end.

Here are a few things I did.

1. Moved all the common code that existed at the beginning of
   sme_enable() and sev_enable() to an inline function named
   get_pagetable_bit_pos().
2. sme_enable() was using AMD_SME_BIT and AMD_SEV_BIT, whereas
   sev_enable() was dealing with raw bits. Moved those definitions to
   sev.h, and changed sev_enable() to use those definitions.
3. Make consistent use of BIT_ULL.

Venu


diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index c2bf99522e5e..b44d6b37796e 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -291,6 +291,7 @@ static void enforce_vmpl0(void)
 void sev_enable(struct boot_params *bp)
 {
 	unsigned int eax, ebx, ecx, edx;
+	unsigned long pt_bit_pos;	/* Pagetable bit position */
 	bool snp;
 
 	/*
@@ -299,26 +300,8 @@ void sev_enable(struct boot_params *bp)
 	 */
 	snp = snp_init(bp);
 
-	/* Check for the SME/SEV support leaf */
-	eax = 0x80000000;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	if (eax < 0x8000001f)
-		return;
-
-	/*
-	 * Check for the SME/SEV feature:
-	 *   CPUID Fn8000_001F[EAX]
-	 *   - Bit 0 - Secure Memory Encryption support
-	 *   - Bit 1 - Secure Encrypted Virtualization support
-	 *   CPUID Fn8000_001F[EBX]
-	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
-	 */
-	eax = 0x8000001f;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	/* Check whether SEV is supported */
-	if (!(eax & BIT(1))) {
+	/* Get the pagetable bit position if SEV is supported */
+	if ((get_pagetable_bit_pos(&pt_bit_pos, AMD_SEV_BIT)) < 0) {
 		if (snp)
 			error("SEV-SNP support indicated by CC blob, but not CPUID.");
 		return;
@@ -350,7 +333,7 @@ void sev_enable(struct boot_params *bp)
 	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
 		error("SEV-SNP supported indicated by CC blob, but not SEV status MSR.");
 
-	sme_me_mask = BIT_ULL(ebx & 0x3f);
+	sme_me_mask = BIT_ULL(pt_bit_pos);
 }
 
 /* Search for Confidential Computing blob in the EFI config table. */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2c5f12ae7d04..41b096f28d02 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -224,6 +224,43 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
 	    : "memory");
 }
 
+/*
+ * Returns the pagetable bit position in pt_bit_pos,
+ * iff the specified features are supported.
+ */
+static inline int get_pagetable_bit_pos(unsigned long *pt_bit_pos,
+					unsigned long features)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	/* Check for the SME/SEV support leaf */
+	eax = 0x80000000;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	if (eax < 0x8000001f)
+		return -1;
+
+	eax = 0x8000001f;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+
+	/* Check whether the specified features are supported.
+	 * SME/SEV features:
+	 *   CPUID Fn8000_001F[EAX]
+	 *   - Bit 0 - Secure Memory Encryption support
+	 *   - Bit 1 - Secure Encrypted Virtualization support
+	 */
+	if (!(eax & features))
+		return -1;
+
+	/*
+	 *   CPUID Fn8000_001F[EBX]
+	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
+	 */
+	*pt_bit_pos = (unsigned long)(ebx & 0x3f);
+	return 0;
+}
+
 #define native_cpuid_reg(reg)					\
 static inline unsigned int native_cpuid_##reg(unsigned int op)	\
 {								\
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 7a5934af9d47..1a2344362ec6 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -17,6 +17,9 @@
 #define GHCB_PROTOCOL_MAX	2ULL
 #define GHCB_DEFAULT_USAGE	0ULL
 
+#define AMD_SME_BIT		BIT(0)
+#define AMD_SEV_BIT		BIT(1)
+
 #define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
 
 enum es_result {
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 2f723e106ed3..1ef50e969efd 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -508,38 +508,18 @@ void __init sme_enable(struct boot_params *bp)
 	unsigned long feature_mask;
 	bool active_by_default;
 	unsigned long me_mask;
+	unsigned long pt_bit_pos;	/* Pagetable bit position */
 	char buffer[16];
 	bool snp;
 	u64 msr;
 
 	snp = snp_init(bp);
 
-	/* Check for the SME/SEV support leaf */
-	eax = 0x80000000;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	if (eax < 0x8000001f)
+	/* Get the pagetable bit position if SEV or SME are supported */
+	if ((get_pagetable_bit_pos(&pt_bit_pos, AMD_SEV_BIT | AMD_SME_BIT)) < 0)
 		return;
 
-#define AMD_SME_BIT	BIT(0)
-#define AMD_SEV_BIT	BIT(1)
-
-	/*
-	 * Check for the SME/SEV feature:
-	 *   CPUID Fn8000_001F[EAX]
-	 *   - Bit 0 - Secure Memory Encryption support
-	 *   - Bit 1 - Secure Encrypted Virtualization support
-	 *   CPUID Fn8000_001F[EBX]
-	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
-	 */
-	eax = 0x8000001f;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	/* Check whether SEV or SME is supported */
-	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
-		return;
-
-	me_mask = 1UL << (ebx & 0x3f);
+	me_mask = BIT_ULL(pt_bit_pos);
 
 	/* Check the SEV MSR whether SEV or SME is enabled */
 	sev_status   = __rdmsr(MSR_AMD64_SEV);
Borislav Petkov Dec. 15, 2021, 11:57 a.m. UTC | #10
On Tue, Dec 14, 2021 at 06:14:34PM -0600, Venu Busireddy wrote:
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index 2c5f12ae7d04..41b096f28d02 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -224,6 +224,43 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
>  	    : "memory");
>  }
>  
> +/*
> + * Returns the pagetable bit position in pt_bit_pos,
> + * iff the specified features are supported.
> + */
> +static inline int get_pagetable_bit_pos(unsigned long *pt_bit_pos,
> +					unsigned long features)

You can simply return pt_bit_pos:

static inline unsigned int get_pagetable_bit_pos(unsigned long features)

and return a negative value on error.

Also, the only duplication this is saving is visual - that function will
get inlined at the call sites.

Also, I'd love to separate the compressed kernel headers from the
kernel proper ones but I'm afraid that ship has sailed. But if I could,
that would have to be in a special header that gets included by both
stages...

So I don't mind this but I'd let Brijesh and Tom have a look at it too.

Thx.
Tom Lendacky Dec. 15, 2021, 2:43 p.m. UTC | #11
On 12/14/21 6:14 PM, Venu Busireddy wrote:
> On 2021-12-14 20:10:16 +0100, Borislav Petkov wrote:
>> On Tue, Dec 14, 2021 at 11:46:14AM -0600, Venu Busireddy wrote:
>>> What I am suggesting should not have anything to do with the boot stage
>>> of the kernel.
>>
>> I know exactly what you're suggesting.
>>
>>> For example, both these functions call native_cpuid(), which is declared
>>> as an inline function. I am merely suggesting to do something similar
>>> to avoid the code duplication.
>>
>> Try it yourself. If you can come up with something halfway readable and
>> it builds, I'm willing to take a look.
> 
> Patch (to be applied on top of sev-snp-v8 branch of
> https://github.com/AMDESE/linux.git) is attached at the end.
> 
> Here are a few things I did.
> 
> 1. Moved all the common code that existed at the beginning of
>     sme_enable() and sev_enable() to an inline function named
>     get_pagetable_bit_pos().
> 2. sme_enable() was using AMD_SME_BIT and AMD_SEV_BIT, whereas
>     sev_enable() was dealing with raw bits. Moved those definitions to
>     sev.h, and changed sev_enable() to use those definitions.
> 3. Make consistent use of BIT_ULL.
> 
> Venu
> 
> 
> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> index c2bf99522e5e..b44d6b37796e 100644
> --- a/arch/x86/boot/compressed/sev.c
> +++ b/arch/x86/boot/compressed/sev.c
> @@ -291,6 +291,7 @@ static void enforce_vmpl0(void)
>   void sev_enable(struct boot_params *bp)
>   {
>   	unsigned int eax, ebx, ecx, edx;
> +	unsigned long pt_bit_pos;	/* Pagetable bit position */
>   	bool snp;
>   
>   	/*
> @@ -299,26 +300,8 @@ void sev_enable(struct boot_params *bp)
>   	 */
>   	snp = snp_init(bp);
>   
> -	/* Check for the SME/SEV support leaf */
> -	eax = 0x80000000;
> -	ecx = 0;
> -	native_cpuid(&eax, &ebx, &ecx, &edx);
> -	if (eax < 0x8000001f)
> -		return;
> -
> -	/*
> -	 * Check for the SME/SEV feature:
> -	 *   CPUID Fn8000_001F[EAX]
> -	 *   - Bit 0 - Secure Memory Encryption support
> -	 *   - Bit 1 - Secure Encrypted Virtualization support
> -	 *   CPUID Fn8000_001F[EBX]
> -	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> -	 */
> -	eax = 0x8000001f;
> -	ecx = 0;
> -	native_cpuid(&eax, &ebx, &ecx, &edx);
> -	/* Check whether SEV is supported */
> -	if (!(eax & BIT(1))) {
> +	/* Get the pagetable bit position if SEV is supported */
> +	if ((get_pagetable_bit_pos(&pt_bit_pos, AMD_SEV_BIT)) < 0) {
>   		if (snp)
>   			error("SEV-SNP support indicated by CC blob, but not CPUID.");
>   		return;
> @@ -350,7 +333,7 @@ void sev_enable(struct boot_params *bp)
>   	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
>   		error("SEV-SNP supported indicated by CC blob, but not SEV status MSR.");
>   
> -	sme_me_mask = BIT_ULL(ebx & 0x3f);
> +	sme_me_mask = BIT_ULL(pt_bit_pos);
>   }
>   
>   /* Search for Confidential Computing blob in the EFI config table. */
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index 2c5f12ae7d04..41b096f28d02 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -224,6 +224,43 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
>   	    : "memory");
>   }
>   
> +/*
> + * Returns the pagetable bit position in pt_bit_pos,
> + * iff the specified features are supported.
> + */
> +static inline int get_pagetable_bit_pos(unsigned long *pt_bit_pos,
> +					unsigned long features)

I'm not a fan of this name. You are specifically returning the encryption 
bit position but using a very generic name of get_pagetable_bit_pos() in a 
very common header file. Maybe something more like get_me_bit() and move 
the function to an existing SEV header file.

Also, this can probably just return an unsigned int that will be either 0 
or the bit position, right?  Then the check above can be for a zero value, 
e.g.:

	me_bit = get_me_bit();
	if (!me_bit) {

	...

	sme_me_mask = BIT_ULL(me_bit);

That should work below, too, but you'll need to verify that.

> +{
> +	unsigned int eax, ebx, ecx, edx;
> +
> +	/* Check for the SME/SEV support leaf */
> +	eax = 0x80000000;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +	if (eax < 0x8000001f)
> +		return -1;

This can then be:

		return 0;

> +
> +	eax = 0x8000001f;
> +	ecx = 0;
> +	native_cpuid(&eax, &ebx, &ecx, &edx);
> +
> +	/* Check whether the specified features are supported.
> +	 * SME/SEV features:
> +	 *   CPUID Fn8000_001F[EAX]
> +	 *   - Bit 0 - Secure Memory Encryption support
> +	 *   - Bit 1 - Secure Encrypted Virtualization support
> +	 */
> +	if (!(eax & features))
> +		return -1;

and this can be:

		return 0;

> +
> +	/*
> +	 *   CPUID Fn8000_001F[EBX]
> +	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> +	 */
> +	*pt_bit_pos = (unsigned long)(ebx & 0x3f);

and this can be:

	return ebx & 0x3f;

> +	return 0;
> +}
> +
>   #define native_cpuid_reg(reg)					\
>   static inline unsigned int native_cpuid_##reg(unsigned int op)	\
>   {								\
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 7a5934af9d47..1a2344362ec6 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -17,6 +17,9 @@
>   #define GHCB_PROTOCOL_MAX	2ULL
>   #define GHCB_DEFAULT_USAGE	0ULL
>   
> +#define AMD_SME_BIT		BIT(0)
> +#define AMD_SEV_BIT		BIT(1)
> +

Maybe this is where that new static inline function should go...

>   #define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
>   
>   enum es_result {
> diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
> index 2f723e106ed3..1ef50e969efd 100644
> --- a/arch/x86/mm/mem_encrypt_identity.c
> +++ b/arch/x86/mm/mem_encrypt_identity.c
> @@ -508,38 +508,18 @@ void __init sme_enable(struct boot_params *bp)
>   	unsigned long feature_mask;
>   	bool active_by_default;
>   	unsigned long me_mask;
> +	unsigned long pt_bit_pos;	/* Pagetable bit position */

unsigned int and me_bit or me_bit_pos.

Thanks,
Tom

>   	char buffer[16];
>   	bool snp;
>   	u64 msr;
>   
>   	snp = snp_init(bp);
>   
> -	/* Check for the SME/SEV support leaf */
> -	eax = 0x80000000;
> -	ecx = 0;
> -	native_cpuid(&eax, &ebx, &ecx, &edx);
> -	if (eax < 0x8000001f)
> +	/* Get the pagetable bit position if SEV or SME are supported */
> +	if ((get_pagetable_bit_pos(&pt_bit_pos, AMD_SEV_BIT | AMD_SME_BIT)) < 0)
>   		return;
>   
> -#define AMD_SME_BIT	BIT(0)
> -#define AMD_SEV_BIT	BIT(1)
> -
> -	/*
> -	 * Check for the SME/SEV feature:
> -	 *   CPUID Fn8000_001F[EAX]
> -	 *   - Bit 0 - Secure Memory Encryption support
> -	 *   - Bit 1 - Secure Encrypted Virtualization support
> -	 *   CPUID Fn8000_001F[EBX]
> -	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> -	 */
> -	eax = 0x8000001f;
> -	ecx = 0;
> -	native_cpuid(&eax, &ebx, &ecx, &edx);
> -	/* Check whether SEV or SME is supported */
> -	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
> -		return;
> -
> -	me_mask = 1UL << (ebx & 0x3f);
> +	me_mask = BIT_ULL(pt_bit_pos);
>   
>   	/* Check the SEV MSR whether SEV or SME is enabled */
>   	sev_status   = __rdmsr(MSR_AMD64_SEV);
>
Michael Roth Dec. 15, 2021, 5:49 p.m. UTC | #12
On Wed, Dec 15, 2021 at 08:43:23AM -0600, Tom Lendacky wrote:
> On 12/14/21 6:14 PM, Venu Busireddy wrote:
> > On 2021-12-14 20:10:16 +0100, Borislav Petkov wrote:
> > > On Tue, Dec 14, 2021 at 11:46:14AM -0600, Venu Busireddy wrote:
> > > > What I am suggesting should not have anything to do with the boot stage
> > > > of the kernel.
> > > 
> > > I know exactly what you're suggesting.
> > > 
> > > > For example, both these functions call native_cpuid(), which is declared
> > > > as an inline function. I am merely suggesting to do something similar
> > > > to avoid the code duplication.
> > > 
> > > Try it yourself. If you can come up with something halfway readable and
> > > it builds, I'm willing to take a look.
> > 
> > Patch (to be applied on top of sev-snp-v8 branch of
> > https://github.com/AMDESE/linux.git) is attached at the end.
> > 
> > Here are a few things I did.
> > 
> > 1. Moved all the common code that existed at the beginning of
> >     sme_enable() and sev_enable() to an inline function named
> >     get_pagetable_bit_pos().
> > 2. sme_enable() was using AMD_SME_BIT and AMD_SEV_BIT, whereas
> >     sev_enable() was dealing with raw bits. Moved those definitions to
> >     sev.h, and changed sev_enable() to use those definitions.
> > 3. Make consistent use of BIT_ULL.
> > 
> > Venu
> > 
> > 
> > diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> > index c2bf99522e5e..b44d6b37796e 100644
> > --- a/arch/x86/boot/compressed/sev.c
> > +++ b/arch/x86/boot/compressed/sev.c
> > @@ -291,6 +291,7 @@ static void enforce_vmpl0(void)
> >   void sev_enable(struct boot_params *bp)
> >   {
> >   	unsigned int eax, ebx, ecx, edx;
> > +	unsigned long pt_bit_pos;	/* Pagetable bit position */
> >   	bool snp;
> >   	/*
> > @@ -299,26 +300,8 @@ void sev_enable(struct boot_params *bp)
> >   	 */
> >   	snp = snp_init(bp);
> > -	/* Check for the SME/SEV support leaf */
> > -	eax = 0x80000000;
> > -	ecx = 0;
> > -	native_cpuid(&eax, &ebx, &ecx, &edx);
> > -	if (eax < 0x8000001f)
> > -		return;
> > -
> > -	/*
> > -	 * Check for the SME/SEV feature:
> > -	 *   CPUID Fn8000_001F[EAX]
> > -	 *   - Bit 0 - Secure Memory Encryption support
> > -	 *   - Bit 1 - Secure Encrypted Virtualization support
> > -	 *   CPUID Fn8000_001F[EBX]
> > -	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
> > -	 */
> > -	eax = 0x8000001f;
> > -	ecx = 0;
> > -	native_cpuid(&eax, &ebx, &ecx, &edx);
> > -	/* Check whether SEV is supported */
> > -	if (!(eax & BIT(1))) {
> > +	/* Get the pagetable bit position if SEV is supported */
> > +	if ((get_pagetable_bit_pos(&pt_bit_pos, AMD_SEV_BIT)) < 0) {
> >   		if (snp)
> >   			error("SEV-SNP support indicated by CC blob, but not CPUID.");
> >   		return;
> > @@ -350,7 +333,7 @@ void sev_enable(struct boot_params *bp)
> >   	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
> >   		error("SEV-SNP supported indicated by CC blob, but not SEV status MSR.");
> > -	sme_me_mask = BIT_ULL(ebx & 0x3f);
> > +	sme_me_mask = BIT_ULL(pt_bit_pos);
> >   }
> >   /* Search for Confidential Computing blob in the EFI config table. */
> > diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> > index 2c5f12ae7d04..41b096f28d02 100644
> > --- a/arch/x86/include/asm/processor.h
> > +++ b/arch/x86/include/asm/processor.h
> > @@ -224,6 +224,43 @@ static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
> >   	    : "memory");
> >   }
> > +/*
> > + * Returns the pagetable bit position in pt_bit_pos,
> > + * iff the specified features are supported.
> > + */
> > +static inline int get_pagetable_bit_pos(unsigned long *pt_bit_pos,
> > +					unsigned long features)
> 
> I'm not a fan of this name. You are specifically returning the encryption
> bit position but using a very generic name of get_pagetable_bit_pos() in a
> very common header file. Maybe something more like get_me_bit() and move the
> function to an existing SEV header file.
> 
> Also, this can probably just return an unsigned int that will be either 0 or
> the bit position, right?  Then the check above can be for a zero value,
> e.g.:
> 
> 	me_bit = get_me_bit();
> 	if (!me_bit) {
> 
> 	...
> 
> 	sme_me_mask = BIT_ULL(me_bit);
> 
> That should work below, too, but you'll need to verify that.

I think in the greater context of consolidating all the SME/SEV setup
and re-using code, this helper stands a high chance of eventually becoming
something more along the lines of sme_sev_parse_cpuid(), since otherwise
we'd end up re-introducing multiple helpers to parse the same 0x8000001F
fields if we ever need to process any of the other fields advertised in
there. Given that, it makes sense to reserve the return value as an
indication that either SEV or SME are enabled, and then have a
pass-by-pointer parameters list to collect the individual feature
bits/encryption mask for cases where SEV/SME are enabled, which are only
treated as valid if sme_sev_parse_cpuid() returns 0.

So Venu's original approach of passing the encryption mask by pointer
seems a little closer toward that end, but I also agree Tom's approach
is cleaner for the current code base, so I'm fine either way, just
figured I'd mention this.

I think needing to pass in the SME/SEV CPUID bits to tell the helper when
to parse encryption bit and when not to is a little bit awkward though.
If there's some agreement that this will ultimately serve the purpose of
handling all (or most) of SME/SEV-related CPUID parsing, then the caller
shouldn't really need to be aware of any individual bit positions.
Maybe a bool could handle that instead, e.g.:

  int get_me_bit(bool sev_only, ...)

  or

  int sme_sev_parse_cpuid(bool sev_only, ...)

where for boot/compressed sev_only=true, for kernel proper sev_only=false.
Michael Roth Dec. 15, 2021, 5:51 p.m. UTC | #13
On Tue, Dec 14, 2021 at 06:14:34PM -0600, Venu Busireddy wrote:
> On 2021-12-14 20:10:16 +0100, Borislav Petkov wrote:
> > On Tue, Dec 14, 2021 at 11:46:14AM -0600, Venu Busireddy wrote:
> > > What I am suggesting should not have anything to do with the boot stage
> > > of the kernel.
> > 
> > I know exactly what you're suggesting.
> > 
> > > For example, both these functions call native_cpuid(), which is declared
> > > as an inline function. I am merely suggesting to do something similar
> > > to avoid the code duplication.
> > 
> > Try it yourself. If you can come up with something halfway readable and
> > it builds, I'm willing to take a look.
> 
> Patch (to be applied on top of sev-snp-v8 branch of
> https://github.com/AMDESE/linux.git) is attached at the end.
> 
> Here are a few things I did.
> 
> 1. Moved all the common code that existed at the beginning of
>    sme_enable() and sev_enable() to an inline function named
>    get_pagetable_bit_pos().
> 2. sme_enable() was using AMD_SME_BIT and AMD_SEV_BIT, whereas
>    sev_enable() was dealing with raw bits. Moved those definitions to
>    sev.h, and changed sev_enable() to use those definitions.
> 3. Make consistent use of BIT_ULL.

Hi Venu,

I know there's still comments floating around, but once there's consensus feel
free to respond with a separate precursor patch against tip which moves
sme_enable() cpuid code into your helper function, along with your S-o-B, and I
can include it directly in the next version. Otherwise, I can incorporate your
suggestions into the next spin, just let me know if it's okay to add:

  Co-authored-by: Venu Busireddy <venu.busireddy@oracle.com>
  Signed-off-by:  Venu Busireddy <venu.busireddy@oracle.com>

to the relevant commits.

Thank you (and Boris/Tom) for the suggestions!

-Mike

> 
> Venu
>
Venu Busireddy Dec. 15, 2021, 6:17 p.m. UTC | #14
On 2021-12-15 11:49:34 -0600, Michael Roth wrote:
> 
> I think in the greater context of consolidating all the SME/SEV setup
> and re-using code, this helper stands a high chance of eventually becoming
> something more along the lines of sme_sev_parse_cpuid(), since otherwise
> we'd end up re-introducing multiple helpers to parse the same 0x8000001F
> fields if we ever need to process any of the other fields advertised in
> there. Given that, it makes sense to reserve the return value as an
> indication that either SEV or SME are enabled, and then have a
> pass-by-pointer parameters list to collect the individual feature
> bits/encryption mask for cases where SEV/SME are enabled, which are only
> treated as valid if sme_sev_parse_cpuid() returns 0.
> 
> So Venu's original approach of passing the encryption mask by pointer
> seems a little closer toward that end, but I also agree Tom's approach
> is cleaner for the current code base, so I'm fine either way, just
> figured I'd mention this.
> 
> I think needing to pass in the SME/SEV CPUID bits to tell the helper when
> to parse encryption bit and when not to is a little bit awkward though.
> If there's some agreement that this will ultimately serve the purpose of
> handling all (or most) of SME/SEV-related CPUID parsing, then the caller
> shouldn't really need to be aware of any individual bit positions.
> Maybe a bool could handle that instead, e.g.:
> 
>   int get_me_bit(bool sev_only, ...)
> 
>   or
> 
>   int sme_sev_parse_cpuid(bool sev_only, ...)
> 
> where for boot/compressed sev_only=true, for kernel proper sev_only=false.

I can implement it this way too. But I am wondering if having a
boolean argument would prevent us from handling any future additions to the
bit positions.

Boris & Tom, which implementation would you prefer?

Venu
Borislav Petkov Dec. 15, 2021, 6:33 p.m. UTC | #15
On Wed, Dec 15, 2021 at 12:17:44PM -0600, Venu Busireddy wrote:
> Boris & Tom, which implementation would you prefer?

I'd like to see how that sme_sev_parse_cpuid() would look like. And that
function should be called sev_parse_cpuid(), btw.

Because if that function turns out to be a subset of your suggestion,
functionality-wise, then we should save us the churn and simply do one
generic helper.

Btw 2, that helper should be in arch/x86/kernel/sev-shared.c so that it
gets shared by both kernel stages instead having an inline function in
some random header.

Btw 3, I'm not crazy about the feature testing with the @features param
either. Maybe that function should return the eYx register directly,
like the cpuid_eYx() variants in the kernel do, where Y in { a, b, c, d
}.

The caller can then do its own testing:

	eax = sev_parse_cpuid(RET_EAX, ...)
	if (eax > 0) {
		if (eax & BIT(1))
			...

Something along those lines, for example.
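Spelled out slightly more (hypothetical names; CPUID is stubbed with hard-coded values so the sketch is self-contained -- a real guest would execute the CPUID instruction via native_cpuid()):

```c
#include <stdint.h>

#define AMD_SME_BIT (1u << 0)
#define AMD_SEV_BIT (1u << 1)

/* Stand-in for native_cpuid(); values are hard-coded for illustration. */
static void cpuid_stub(unsigned int leaf, unsigned int *eax,
		       unsigned int *ebx, unsigned int *ecx,
		       unsigned int *edx)
{
	*eax = *ebx = *ecx = *edx = 0;
	if (leaf == 0x80000000) {
		*eax = 0x8000001f;		/* highest extended leaf */
	} else if (leaf == 0x8000001f) {
		*eax = AMD_SEV_BIT | AMD_SME_BIT; /* SME and SEV supported */
		*ebx = 51;			/* C-bit position, bits 5:0 */
	}
}

enum sev_cpuid_reg { RET_EAX, RET_EBX, RET_ECX, RET_EDX };

/*
 * Return the requested register of Fn8000_001F, or 0 if the leaf is
 * not implemented -- the caller does its own bit testing.
 */
static unsigned int sev_parse_cpuid(enum sev_cpuid_reg reg)
{
	unsigned int r[4];

	cpuid_stub(0x80000000, &r[0], &r[1], &r[2], &r[3]);
	if (r[0] < 0x8000001f)
		return 0;

	cpuid_stub(0x8000001f, &r[0], &r[1], &r[2], &r[3]);
	return r[reg];
}
```

The caller then tests bits itself, e.g. `if (sev_parse_cpuid(RET_EAX) & AMD_SEV_BIT)`, keeping the helper free of feature-specific knowledge.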

But I'd have to see a concrete diff from Michael to get a better idea
how that CPUID parsing from the CPUID page is going to look like.

Thx.
Venu Busireddy Dec. 15, 2021, 6:58 p.m. UTC | #16
On 2021-12-15 08:43:23 -0600, Tom Lendacky wrote:
> 
> I'm not a fan of this name. You are specifically returning the encryption
> bit position but using a very generic name of get_pagetable_bit_pos() in a
> very common header file. Maybe something more like get_me_bit() and move the
> function to an existing SEV header file.
> 
> Also, this can probably just return an unsigned int that will be either 0 or
> the bit position, right?  Then the check above can be for a zero value,
> e.g.:
> 
> 	me_bit = get_me_bit();
> 	if (!me_bit) {
> 
> 	...
> 
> 	sme_me_mask = BIT_ULL(me_bit);
> 
> That should work below, too, but you'll need to verify that.
> 

Implemented the changes as you suggested. Patch attached below. Will
submit another if we reach a different consensus.

Venu

---
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 7a5934af9d47..f0d5a00e490d 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -17,6 +17,45 @@
 #define GHCB_PROTOCOL_MAX	2ULL
 #define GHCB_DEFAULT_USAGE	0ULL
 
+#define AMD_SME_BIT		BIT(0)
+#define AMD_SEV_BIT		BIT(1)
+
+/*
+ * Returns the memory encryption bit position,
+ * if the specified features are supported.
+ * Returns 0, otherwise.
+ */
+static inline unsigned int get_me_bit_pos(unsigned long features)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	/* Check for the SME/SEV support leaf */
+	eax = 0x80000000;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	if (eax < 0x8000001f)
+		return 0;
+
+	eax = 0x8000001f;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+
+	/* Check whether the specified features are supported.
+	 * SME/SEV features:
+	 *   CPUID Fn8000_001F[EAX]
+	 *   - Bit 0 - Secure Memory Encryption support
+	 *   - Bit 1 - Secure Encrypted Virtualization support
+	 */
+	if (!(eax & features))
+		return 0;
+
+	/*
+	 *   CPUID Fn8000_001F[EBX]
+	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
+	 */
+	return ebx & 0x3f;
+}
+
 #define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
 
 enum es_result {
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index c2bf99522e5e..838c383f102b 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -291,6 +291,7 @@ static void enforce_vmpl0(void)
 void sev_enable(struct boot_params *bp)
 {
 	unsigned int eax, ebx, ecx, edx;
+	unsigned int me_bit_pos;
 	bool snp;
 
 	/*
@@ -299,26 +300,9 @@ void sev_enable(struct boot_params *bp)
 	 */
 	snp = snp_init(bp);
 
-	/* Check for the SME/SEV support leaf */
-	eax = 0x80000000;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	if (eax < 0x8000001f)
-		return;
-
-	/*
-	 * Check for the SME/SEV feature:
-	 *   CPUID Fn8000_001F[EAX]
-	 *   - Bit 0 - Secure Memory Encryption support
-	 *   - Bit 1 - Secure Encrypted Virtualization support
-	 *   CPUID Fn8000_001F[EBX]
-	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
-	 */
-	eax = 0x8000001f;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	/* Check whether SEV is supported */
-	if (!(eax & BIT(1))) {
+	/* Get the memory encryption bit position if SEV is supported */
+	me_bit_pos = get_me_bit_pos(AMD_SEV_BIT);
+	if (!me_bit_pos) {
 		if (snp)
 			error("SEV-SNP support indicated by CC blob, but not CPUID.");
 		return;
@@ -350,7 +334,7 @@ void sev_enable(struct boot_params *bp)
 	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
 		error("SEV-SNP supported indicated by CC blob, but not SEV status MSR.");
 
-	sme_me_mask = BIT_ULL(ebx & 0x3f);
+	sme_me_mask = BIT_ULL(me_bit_pos);
 }
 
 /* Search for Confidential Computing blob in the EFI config table. */
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 2f723e106ed3..57bc77382288 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -508,38 +508,19 @@ void __init sme_enable(struct boot_params *bp)
 	unsigned long feature_mask;
 	bool active_by_default;
 	unsigned long me_mask;
+	unsigned int me_bit_pos;
 	char buffer[16];
 	bool snp;
 	u64 msr;
 
 	snp = snp_init(bp);
 
-	/* Check for the SME/SEV support leaf */
-	eax = 0x80000000;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	if (eax < 0x8000001f)
+	/* Get the memory encryption bit position if SEV or SME are supported */
+	me_bit_pos = get_me_bit_pos(AMD_SEV_BIT | AMD_SME_BIT);
+	if (!me_bit_pos)
 		return;
 
-#define AMD_SME_BIT	BIT(0)
-#define AMD_SEV_BIT	BIT(1)
-
-	/*
-	 * Check for the SME/SEV feature:
-	 *   CPUID Fn8000_001F[EAX]
-	 *   - Bit 0 - Secure Memory Encryption support
-	 *   - Bit 1 - Secure Encrypted Virtualization support
-	 *   CPUID Fn8000_001F[EBX]
-	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
-	 */
-	eax = 0x8000001f;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	/* Check whether SEV or SME is supported */
-	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
-		return;
-
-	me_mask = 1UL << (ebx & 0x3f);
+	me_mask = BIT_ULL(me_bit_pos);
 
 	/* Check the SEV MSR whether SEV or SME is enabled */
 	sev_status   = __rdmsr(MSR_AMD64_SEV);
Venu Busireddy Dec. 15, 2021, 7:54 p.m. UTC | #17
On 2021-12-15 11:49:34 -0600, Michael Roth wrote:
> 
> I think needing to pass in the SME/SEV CPUID bits to tell the helper when
> to parse encryption bit and when not to is a little bit awkward though.
> If there's some agreement that this will ultimately serve the purpose of
> handling all (or most) of SME/SEV-related CPUID parsing, then the caller
> shouldn't really need to be aware of any individual bit positions.
> Maybe a bool could handle that instead, e.g.:
> 
>   int get_me_bit(bool sev_only, ...)
> 
>   or
> 
>   int sme_sev_parse_cpuid(bool sev_only, ...)
> 
> where for boot/compressed sev_only=true, for kernel proper sev_only=false.

Implemented using this suggestion, and the patch is at the end.

I feel that passing "true" or "false" to get_me_bit_pos() from
sev_enable() and sme_enable() has made the call sites less clear. It is
not obvious what the "true" and "false" values mean.

However, both implementations (Tom's suggestions and Tom's + Mike's
suggestions) are available now. We can pick one of these, or I will redo
this if we want a different implementation.

Venu

---
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 7a5934af9d47..eb202096a1fc 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -17,6 +17,48 @@
 #define GHCB_PROTOCOL_MAX	2ULL
 #define GHCB_DEFAULT_USAGE	0ULL
 
+#define AMD_SME_BIT		BIT(0)
+#define AMD_SEV_BIT		BIT(1)
+
+/*
+ * Returns the memory encryption bit position,
+ * if the specified features are supported.
+ * Returns 0, otherwise.
+ */
+static inline unsigned int get_me_bit_pos(bool sev_only)
+{
+	unsigned int eax, ebx, ecx, edx;
+	unsigned int features;
+
+	features = AMD_SEV_BIT | (sev_only ? 0 : AMD_SME_BIT);
+
+	/* Check for the SME/SEV support leaf */
+	eax = 0x80000000;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	if (eax < 0x8000001f)
+		return 0;
+
+	eax = 0x8000001f;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+
+	/* Check whether the specified features are supported.
+	 * SME/SEV features:
+	 *   CPUID Fn8000_001F[EAX]
+	 *   - Bit 0 - Secure Memory Encryption support
+	 *   - Bit 1 - Secure Encrypted Virtualization support
+	 */
+	if (!(eax & features))
+		return 0;
+
+	/*
+	 *   CPUID Fn8000_001F[EBX]
+	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
+	 */
+	return ebx & 0x3f;
+}
+
 #define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
 
 enum es_result {
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index c2bf99522e5e..9a8181893af7 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -291,6 +291,7 @@ static void enforce_vmpl0(void)
 void sev_enable(struct boot_params *bp)
 {
 	unsigned int eax, ebx, ecx, edx;
+	unsigned int me_bit_pos;
 	bool snp;
 
 	/*
@@ -299,26 +300,9 @@ void sev_enable(struct boot_params *bp)
 	 */
 	snp = snp_init(bp);
 
-	/* Check for the SME/SEV support leaf */
-	eax = 0x80000000;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	if (eax < 0x8000001f)
-		return;
-
-	/*
-	 * Check for the SME/SEV feature:
-	 *   CPUID Fn8000_001F[EAX]
-	 *   - Bit 0 - Secure Memory Encryption support
-	 *   - Bit 1 - Secure Encrypted Virtualization support
-	 *   CPUID Fn8000_001F[EBX]
-	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
-	 */
-	eax = 0x8000001f;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	/* Check whether SEV is supported */
-	if (!(eax & BIT(1))) {
+	/* Get the memory encryption bit position if SEV is supported */
+	me_bit_pos = get_me_bit_pos(true);
+	if (!me_bit_pos) {
 		if (snp)
 			error("SEV-SNP support indicated by CC blob, but not CPUID.");
 		return;
@@ -350,7 +334,7 @@ void sev_enable(struct boot_params *bp)
 	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
 		error("SEV-SNP supported indicated by CC blob, but not SEV status MSR.");
 
-	sme_me_mask = BIT_ULL(ebx & 0x3f);
+	sme_me_mask = BIT_ULL(me_bit_pos);
 }
 
 /* Search for Confidential Computing blob in the EFI config table. */
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 2f723e106ed3..a4979f61ecc7 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -508,38 +508,19 @@ void __init sme_enable(struct boot_params *bp)
 	unsigned long feature_mask;
 	bool active_by_default;
 	unsigned long me_mask;
+	unsigned int me_bit_pos;
 	char buffer[16];
 	bool snp;
 	u64 msr;
 
 	snp = snp_init(bp);
 
-	/* Check for the SME/SEV support leaf */
-	eax = 0x80000000;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	if (eax < 0x8000001f)
+	/* Get the memory encryption bit position if SEV or SME are supported */
+	me_bit_pos = get_me_bit_pos(false);
+	if (!me_bit_pos)
 		return;
 
-#define AMD_SME_BIT	BIT(0)
-#define AMD_SEV_BIT	BIT(1)
-
-	/*
-	 * Check for the SME/SEV feature:
-	 *   CPUID Fn8000_001F[EAX]
-	 *   - Bit 0 - Secure Memory Encryption support
-	 *   - Bit 1 - Secure Encrypted Virtualization support
-	 *   CPUID Fn8000_001F[EBX]
-	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
-	 */
-	eax = 0x8000001f;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	/* Check whether SEV or SME is supported */
-	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
-		return;
-
-	me_mask = 1UL << (ebx & 0x3f);
+	me_mask = BIT_ULL(me_bit_pos);
 
 	/* Check the SEV MSR whether SEV or SME is enabled */
 	sev_status   = __rdmsr(MSR_AMD64_SEV);
Michael Roth Dec. 15, 2021, 8:17 p.m. UTC | #18
On Wed, Dec 15, 2021 at 07:33:47PM +0100, Borislav Petkov wrote:
> On Wed, Dec 15, 2021 at 12:17:44PM -0600, Venu Busireddy wrote:
> > Boris & Tom, which implementation would you prefer?
> 
> I'd like to see how that sme_sev_parse_cpuid() would look like. And that
> function should be called sev_parse_cpuid(), btw.
> 
> Because if that function turns out to be a subset of your suggestion,
> functionality-wise, then we should save us the churn and simply do one
> generic helper.

I was actually thinking this proposed sev_parse_cpuid() helper would be
a superset of what Venu currently has implemented. E.g. Venu's most recent
patch does:

sev_enable():
  unsigned int me_bit_pos;

  me_bit_pos = get_me_bit(AMD_SEV_BIT)
  if (!me_bit_pos)
    return;

  ...

Let's say in the future there's need to also grab say, the VTE bit. We
could introduce a new helper, get_vte_bit() that re-does all the
0x80000000-0x8000001F range checks, some sanity checks that SEV is set if
VTE bit is set, and then now have a nice single-purpose helper that
duplicates similar checks in get_me_bit(), or we could avoid the
duplication by expanding get_me_bit() so it could be used something like:

  me_bit_pos = get_me_bit(AMD_SEV_BIT, &vte_enabled)

at which point it makes more sense to just have it be a more generic
helper, called via:

  ret = sev_parse_cpuid(AMD_SEV_BIT, &me_bit_pos, &vte_enabled)

i.e. Venu's original patch basically, but with the helper function
renamed.

and if fields are added in the future:

  sev_parse_cpuid(AMD_SEV_BIT, &me_bit_pos, &vte_enabled, &new_feature_enabled, etc..)

or if that eventually becomes unwieldy it could later be changed to return
a feature mask.
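To sketch what that mask-returning shape could look like (purely illustrative: the SEV_FEAT_* names, the output struct, and the stubbed leaf values are all made up for this example):

```c
#include <stdbool.h>

/* Hypothetical feature-mask flags returned by the parser. */
#define SEV_FEAT_SME	(1u << 0)
#define SEV_FEAT_SEV	(1u << 1)
#define SEV_FEAT_VTE	(1u << 2)	/* invented extra feature, for illustration */

struct sev_cpuid_out {
	unsigned int features;		/* mask of SEV_FEAT_* bits */
	unsigned int me_bit_pos;	/* C-bit position, valid if features != 0 */
};

/* Stubbed CPUID leaf values so the sketch compiles and runs anywhere. */
static unsigned int stub_max_leaf(void)   { return 0x8000001f; }
static unsigned int stub_leaf1f_eax(void) { return 0x3; }	/* SME|SEV */
static unsigned int stub_leaf1f_ebx(void) { return 51; }

/*
 * Returns 0 on success with *out filled in, -1 if the leaf is absent
 * or none of the requested features are advertised.
 */
static int sev_parse_cpuid(bool sev_only, struct sev_cpuid_out *out)
{
	unsigned int eax, wanted;

	if (stub_max_leaf() < 0x8000001f)
		return -1;

	eax = stub_leaf1f_eax();
	wanted = SEV_FEAT_SEV | (sev_only ? 0 : SEV_FEAT_SME);
	if (!(eax & wanted))
		return -1;

	out->features   = eax & (SEV_FEAT_SME | SEV_FEAT_SEV | SEV_FEAT_VTE);
	out->me_bit_pos = stub_leaf1f_ebx() & 0x3f;	/* Fn8000_001F[EBX] bits 5:0 */
	return 0;
}
```

New fields would then extend struct sev_cpuid_out rather than the parameter list, avoiding the vararg-creep problem entirely.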

> 
> Btw 2, that helper should be in arch/x86/kernel/sev-shared.c so that it
> gets shared by both kernel stages instead having an inline function in
> some random header.
> 
> Btw 3, I'm not crazy about the feature testing with the @features param
> either. Maybe that function should return the eYx register directly,
> like the cpuid_eYx() variants in the kernel do, where Y in { a, b, c, d
> }.
> 
> The caller can than do its own testing:
> 
> 	eax = sev_parse_cpuid(RET_EAX, ...)
> 	if (eax > 0) {
> 		if (eax & BIT(1))
> 			...
> 
> Something along those lines, for example.

I think having sev_parse_cpuid() using a more "human-readable" format
for reporting features/fields will make it easier to abstract away the
nitty-gritty details and reduce that chances for more duplication
between boot/compressed and kernel proper in the future. That
"human-readable" format could be in the form of a boolean/int
parameter list that gets expanded over time as needed (like the above
examples), or a higher-level construct like a struct/bitmask/etc. But
either way it would be nice to only have to think about specific CPUID
bits when looking at sev_parse_cpuid(), and have callers instead rely
purely on the sev_parse_cpuid() function prototype/documentation to
know what's going on.

> 
> But I'd have to see a concrete diff from Michael to get a better idea
> how that CPUID parsing from the CPUID page is going to look like.

It should look the same with/without CPUID page, since the CPUID page
will have already been set up early in sev_enable()/sme_enable() based
on the presence of the CC blob via snp_init(), introduced in:

 [PATCH v8 31/40] x86/compressed: add SEV-SNP feature detection/setup

Thanks,

Mike

> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
Borislav Petkov Dec. 15, 2021, 8:38 p.m. UTC | #19
On Wed, Dec 15, 2021 at 02:17:34PM -0600, Michael Roth wrote:
> and if fields are added in the future:
> 
>   sev_parse_cpuid(AMD_SEV_BIT, &me_bit_pos, &vte_enabled, &new_feature_enabled, etc..)

And that will end up being a vararg function because of who knows what
other feature bits will have to get passed in? You have even added the
ellipsis in there.

Nope. Definitely not.

> > or if that eventually becomes unwieldy

The above example is already unwieldy.

> it could later be changed to return a feature mask.

Yes, that. Clean and simple.

But it is hard to discuss anything without patches so we can continue
the topic with concrete patches. But this unification is not
super-pressing so it can go ontop of the SNP pile.

Thx.
Michael Roth Dec. 15, 2021, 8:43 p.m. UTC | #20
On Wed, Dec 15, 2021 at 12:17:44PM -0600, Venu Busireddy wrote:
> On 2021-12-15 11:49:34 -0600, Michael Roth wrote:
> > 
> > I think in the greater context of consolidating all the SME/SEV setup
> > and re-using code, this helper stands a high chance of eventually becoming
> > something more along the lines of sme_sev_parse_cpuid(), since otherwise
> > we'd end up re-introducing multiple helpers to parse the same 0x8000001F
> > fields if we ever need to process any of the other fields advertised in
> > there. Given that, it makes sense to reserve the return value as an
> > indication that either SEV or SME are enabled, and then have a
> > pass-by-pointer parameters list to collect the individual feature
> > bits/encryption mask for cases where SEV/SME are enabled, which are only
> > treated as valid if sme_sev_parse_cpuid() returns 0.
> > 
> > So Venu's original approach of passing the encryption mask by pointer
> > seems a little closer toward that end, but I also agree Tom's approach
> > is cleaner for the current code base, so I'm fine either way, just
> > figured I'd mention this.
> > 
> > I think needing to pass in the SME/SEV CPUID bits to tell the helper when
> > to parse encryption bit and when not to is a little bit awkward though.
> > If there's some agreement that this will ultimately serve the purpose of
> > handling all (or most) of SME/SEV-related CPUID parsing, then the caller
> > shouldn't really need to be aware of any individual bit positions.
> > Maybe a bool could handle that instead, e.g.:
> > 
> >   int get_me_bit(bool sev_only, ...)
> > 
> >   or
> > 
> >   int sme_sev_parse_cpuid(bool sev_only, ...)
> > 
> > where for boot/compressed sev_only=true, for kernel proper sev_only=false.
> 
> I can implement it this way too. But I am wondering if having a
> boolean argument limits us from handling any future additions to the
> bit positions.

That's the thing, we'll pretty much always want to parse cpuid in
boot/compressed if SEV is enabled, and in kernel proper if either SEV or
SME are enabled, because they both require, at a minimum, the c-bit
position. Extensions to either SEV/SME likely won't change this, but by
using CPUID feature masks to handle this it gives the impression that
this helper relies on individual features being present in the mask in
order for the corresponding fields to be parsed, when in reality it
boils down more to SEV features needing to be enabled earlier because
they don't trust the host during early boot.

I agree the boolean flag makes things a bit less readable without
checking the function prototype though. I was going to suggest 2
separate functions that use a common helper and hide away the
boolean, e.g:

  sev_parse_cpuid() //sev-only

and

  sme_parse_cpuid() //sev or sme

but the latter maybe is a bit misleading and I couldn't think of a
better name. It's really more like sev_sme_parse_cpuid(), but I'm
not sure that will fly. Maybe sme_parse_cpuid() is fine.

You could also just have it take an enum as the first arg though:

enum sev_parse_cpuid {
    SEV_PARSE_CPUID_SEV_ONLY = 0
    SEV_PARSE_CPUID_SME_ONLY //unused
    SEV_PARSE_CPUID_BOTH
}

Personally I still prefer the boolean but just some alternatives
you could consider otherwise.
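As a rough sketch of the two-wrapper variant (hypothetical function names; the CPUID-derived values are stubbed so the sketch compiles outside the kernel):

```c
#include <stdbool.h>

/* Stubbed CPUID results; a real version would issue CPUID Fn8000_001F. */
static unsigned int stub_features(void) { return 0x3; }	/* SME|SEV */
static unsigned int stub_me_bit(void)   { return 51; }	/* EBX bits 5:0 */

/* Common helper; returns the C-bit position, or 0 if unsupported. */
static unsigned int __get_me_bit(bool sev_only)
{
	/* bit 1 = SEV; bit 0 = SME, only relevant for kernel proper */
	unsigned int wanted = (1u << 1) | (sev_only ? 0 : (1u << 0));

	if (!(stub_features() & wanted))
		return 0;
	return stub_me_bit();
}

/* boot/compressed: SEV guests only */
static inline unsigned int sev_get_me_bit(void)
{
	return __get_me_bit(true);
}

/* kernel proper: SEV or SME */
static inline unsigned int sme_get_me_bit(void)
{
	return __get_me_bit(false);
}
```

Each call site then reads unambiguously, at the cost of two thin wrappers instead of one boolean parameter.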

> 
> Boris & Tom, which implementation would you prefer?
> 
> Venu
> 
>
Michael Roth Dec. 15, 2021, 9:22 p.m. UTC | #21
On Wed, Dec 15, 2021 at 09:38:55PM +0100, Borislav Petkov wrote:
> On Wed, Dec 15, 2021 at 02:17:34PM -0600, Michael Roth wrote:
> > and if fields are added in the future:
> > 
> >   sev_parse_cpuid(AMD_SEV_BIT, &me_bit_pos, &vte_enabled, &new_feature_enabled, etc..)
> 
> And that will end up being a vararg function because of who knows what
> other feature bits will have to get passed in? You have even added the
> ellipsis in there.

Well, not varargs, just sort of anticipating how the function prototype
might change over time as it's modified to parse for new features.

> 
> Nope. Definitely not.
> 
> > > or if that eventually becomes unwieldy
> 
> The above example is already unwieldy.
> 
> > it could later be changed to return a feature mask.
> 
> Yes, that. Clean and simple.
> 
> But it is hard to discuss anything without patches so we can continue
> the topic with concrete patches. But this unification is not
> super-pressing so it can go ontop of the SNP pile.

Yah, it's all theoretical at this point. Didn't mean to derail things
though. I mainly brought it up to suggest that Venu's original approach of
returning the encryption bit via a pointer argument might make it easier to
expand it for other purposes in the future, and that naming it for that
future purpose might encourage future developers to focus their efforts
there instead of potentially re-introducing duplicate code.

But either way it's simple enough to rework things when we actually
cross that bridge. So totally fine with saving all of this as a future
follow-up, or picking up either of Venu's patches for now if you'd still
prefer.

Thanks,

Mike

> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
Venu Busireddy Jan. 3, 2022, 7:10 p.m. UTC | #22
On 2021-12-15 15:22:57 -0600, Michael Roth wrote:
> On Wed, Dec 15, 2021 at 09:38:55PM +0100, Borislav Petkov wrote:
> > 
> > But it is hard to discuss anything without patches so we can continue
> > the topic with concrete patches. But this unification is not
> > super-pressing so it can go ontop of the SNP pile.
> 
> Yah, it's all theoretical at this point. Didn't mean to derail things
> though. I mainly brought it up to suggest that Venu's original approach of
> returning the encryption bit via a pointer argument might make it easier to
> expand it for other purposes in the future, and that naming it for that
> future purpose might encourage future developers to focus their efforts
> there instead of potentially re-introducing duplicate code.
> 
> But either way it's simple enough to rework things when we actually
> cross that bridge. So totally fine with saving all of this as a future
> follow-up, or picking up either of Venu's patches for now if you'd still
> prefer.

So, what is the consensus? Do you want me to submit a patch after the
SNP changes go upstream? Or, do you want to roll in one of the patches
that I posted earlier?

Venu

>
Brijesh Singh Jan. 5, 2022, 7:34 p.m. UTC | #23
On 1/3/22 1:10 PM, Venu Busireddy wrote:
> On 2021-12-15 15:22:57 -0600, Michael Roth wrote:
>> On Wed, Dec 15, 2021 at 09:38:55PM +0100, Borislav Petkov wrote:
>>>
>>> But it is hard to discuss anything without patches so we can continue
>>> the topic with concrete patches. But this unification is not
>>> super-pressing so it can go ontop of the SNP pile.
>>
>> Yah, it's all theoretical at this point. Didn't mean to derail things
>> though. I mainly brought it up to suggest that Venu's original approach of
>> returning the encryption bit via a pointer argument might make it easier to
>> expand it for other purposes in the future, and that naming it for that
>> future purpose might encourage future developers to focus their efforts
>> there instead of potentially re-introducing duplicate code.
>>
>> But either way it's simple enough to rework things when we actually
>> cross that bridge. So totally fine with saving all of this as a future
>> follow-up, or picking up either of Venu's patches for now if you'd still
>> prefer.
> 
> So, what is the consensus? Do you want me to submit a patch after the
> SNP changes go upstream? Or, do you want to roll in one of the patches
> that I posted earlier?
> 

Will incorporate your changes in v9. And will see what others say about it.

-Brijesh
Brijesh Singh Jan. 10, 2022, 8:46 p.m. UTC | #24
Hi Venu,

On 1/5/22 1:34 PM, Brijesh Singh wrote:
> 
> 
> On 1/3/22 1:10 PM, Venu Busireddy wrote:
>> On 2021-12-15 15:22:57 -0600, Michael Roth wrote:
>>> On Wed, Dec 15, 2021 at 09:38:55PM +0100, Borislav Petkov wrote:
>>>>
>>>> But it is hard to discuss anything without patches so we can continue
>>>> the topic with concrete patches. But this unification is not
>>>> super-pressing so it can go ontop of the SNP pile.
>>>
>>> Yah, it's all theoretical at this point. Didn't mean to derail things
>>> though. I mainly brought it up to suggest that Venu's original 
>>> approach of
>>> returning the encryption bit via a pointer argument might make it 
>>> easier to
>>> expand it for other purposes in the future, and that naming it for that
>>> future purpose might encourage future developers to focus their efforts
>>> there instead of potentially re-introducing duplicate code.
>>>
>>> But either way it's simple enough to rework things when we actually
>>> cross that bridge. So totally fine with saving all of this as a future
>>> follow-up, or picking up either of Venu's patches for now if you'd still
>>> prefer.
>>
>> So, what is the consensus? Do you want me to submit a patch after the
>> SNP changes go upstream? Or, do you want to roll in one of the patches
>> that I posted earlier?
>>
> 
> Will incorporate your changes in v9. And will see what others say about it.
> 

Now that I am incorporating the feedback in my wip branch, at this time 
I am dropping your cleanup, mainly because some of the recommendations 
may require more rework down the line; you can submit them as a cleanup 
after the patches are in. I hope this is okay with you.

thanks
Venu Busireddy Jan. 10, 2022, 9:17 p.m. UTC | #25
On 2022-01-10 14:46:27 -0600, Brijesh Singh wrote:
> Hi Venu,
> 
> On 1/5/22 1:34 PM, Brijesh Singh wrote:
> > 
> > 
> > On 1/3/22 1:10 PM, Venu Busireddy wrote:
> > > On 2021-12-15 15:22:57 -0600, Michael Roth wrote:
> > > > On Wed, Dec 15, 2021 at 09:38:55PM +0100, Borislav Petkov wrote:
> > > > > 
> > > > > But it is hard to discuss anything without patches so we can continue
> > > > > the topic with concrete patches. But this unification is not
> > > > > super-pressing so it can go ontop of the SNP pile.
> > > > 
> > > > Yah, it's all theoretical at this point. Didn't mean to derail things
> > > > though. I mainly brought it up to suggest that Venu's original
> > > > approach of
> > > > returning the encryption bit via a pointer argument might make
> > > > it easier to
> > > > expand it for other purposes in the future, and that naming it for that
> > > > future purpose might encourage future developers to focus their efforts
> > > > there instead of potentially re-introducing duplicate code.
> > > > 
> > > > But either way it's simple enough to rework things when we actually
> > > > cross that bridge. So totally fine with saving all of this as a future
> > > > follow-up, or picking up either of Venu's patches for now if you'd still
> > > > prefer.
> > > 
> > > So, what is the consensus? Do you want me to submit a patch after the
> > > SNP changes go upstream? Or, do you want to roll in one of the patches
> > > that I posted earlier?
> > > 
> > 
> > Will incorporate your changes in v9. And will see what others say about it.
> > 
> 
> Now that I am incorporating the feedback in my wip branch, at this time I am
> dropping your cleanup, mainly because some of the recommendations may require
> more rework down the line; you can submit them as a cleanup after the patches
> are in. I hope this is okay with you.

Can't we do that rework (if any) as and when it is needed? I am worried
that we will never get this in!

Venu
Borislav Petkov Jan. 10, 2022, 9:38 p.m. UTC | #26
On Mon, Jan 10, 2022 at 03:17:56PM -0600, Venu Busireddy wrote:
> Can't we do that rework (if any) as and when it is needed? I am worried
> that we will never get this in!

In case you've missed it from a previous mail on that same thread:

"But this unification is not super-pressing so it can go ontop of the
SNP pile."

So such cleanups go ontop, when the dust settles and when we realize
that there really are parts which can be unified. Right now, everything
is moving so first things first.

Thx.
Patch

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 572c535cf45b..20b174adca51 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -191,9 +191,8 @@  SYM_FUNC_START(startup_32)
 	/*
 	 * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
 	 * will do a check. The sev_status memory will be fully initialized
-	 * with the contents of MSR_AMD_SEV_STATUS later in
-	 * set_sev_encryption_mask(). For now it is sufficient to know that SEV
-	 * is active.
+	 * with the contents of MSR_AMD_SEV_STATUS later via sev_enable(). For
+	 * now it is sufficient to know that SEV is active.
 	 */
 	movl	$1, rva(sev_status)(%ebp)
 1:
@@ -447,6 +446,23 @@  SYM_CODE_START(startup_64)
 	call	load_stage1_idt
 	popq	%rsi
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/*
+	 * Now that the stage1 interrupt handlers are set up, #VC exceptions from
+	 * CPUID instructions can be properly handled for SEV-ES guests.
+	 *
+	 * For SEV-SNP, the CPUID table also needs to be set up in advance of any
+	 * CPUID instructions being issued, so go ahead and do that now via
+	 * sev_enable(), which will also handle the rest of the SEV-related
+	 * detection/setup to ensure that has been done in advance of any dependent
+	 * code.
+	 */
+	pushq	%rsi
+	movq	%rsi, %rdi		/* real mode address */
+	call	sev_enable
+	popq	%rsi
+#endif
+
 	/*
 	 * paging_prepare() sets up the trampoline and checks if we need to
 	 * enable 5-level paging.
@@ -559,17 +575,7 @@  SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 	shrq	$3, %rcx
 	rep	stosq
 
-/*
- * If running as an SEV guest, the encryption mask is required in the
- * page-table setup code below. When the guest also has SEV-ES enabled
- * set_sev_encryption_mask() will cause #VC exceptions, but the stage2
- * handler can't map its GHCB because the page-table is not set up yet.
- * So set up the encryption mask here while still on the stage1 #VC
- * handler. Then load stage2 IDT and switch to the kernel's own
- * page-table.
- */
 	pushq	%rsi
-	call	set_sev_encryption_mask
 	call	load_stage2_idt
 
 	/* Pass boot_params to initialize_identity_maps() */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index c1e81a848b2a..311d40f35a4b 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -187,42 +187,6 @@  SYM_CODE_END(startup32_vc_handler)
 	.code64
 
 #include "../../kernel/sev_verify_cbit.S"
-SYM_FUNC_START(set_sev_encryption_mask)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	push	%rbp
-	push	%rdx
-
-	movq	%rsp, %rbp		/* Save current stack pointer */
-
-	call	get_sev_encryption_bit	/* Get the encryption bit position */
-	testl	%eax, %eax
-	jz	.Lno_sev_mask
-
-	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
-
-	/*
-	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
-	 * get_sev_encryption_bit() because this function is 32-bit code and
-	 * shared between 64-bit and 32-bit boot path.
-	 */
-	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
-	rdmsr
-
-	/* Store MSR value in sev_status */
-	shlq	$32, %rdx
-	orq	%rdx, %rax
-	movq	%rax, sev_status(%rip)
-
-.Lno_sev_mask:
-	movq	%rbp, %rsp		/* Restore original stack pointer */
-
-	pop	%rdx
-	pop	%rbp
-#endif
-
-	xor	%rax, %rax
-	ret
-SYM_FUNC_END(set_sev_encryption_mask)
 
 	.data
 
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 16ed360b6692..23e0e395084a 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -120,12 +120,12 @@  static inline void console_init(void)
 { }
 #endif
 
-void set_sev_encryption_mask(void);
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+void sev_enable(struct boot_params *bp);
 void sev_es_shutdown_ghcb(void);
 extern bool sev_es_check_ghcb_fault(unsigned long address);
 #else
+static inline void sev_enable(struct boot_params *bp) { }
 static inline void sev_es_shutdown_ghcb(void) { }
 static inline bool sev_es_check_ghcb_fault(unsigned long address)
 {
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 28bcf04c022e..8eebdf589a90 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -204,3 +204,48 @@  void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	else if (result != ES_RETRY)
 		sev_es_terminate(GHCB_SEV_ES_GEN_REQ);
 }
+
+static inline u64 rd_sev_status_msr(void)
+{
+	unsigned long low, high;
+
+	asm volatile("rdmsr" : "=a" (low), "=d" (high) :
+			"c" (MSR_AMD64_SEV));
+
+	return ((high << 32) | low);
+}
+
+void sev_enable(struct boot_params *bp)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	/* Check for the SME/SEV support leaf */
+	eax = 0x80000000;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	if (eax < 0x8000001f)
+		return;
+
+	/*
+	 * Check for the SME/SEV feature:
+	 *   CPUID Fn8000_001F[EAX]
+	 *   - Bit 0 - Secure Memory Encryption support
+	 *   - Bit 1 - Secure Encrypted Virtualization support
+	 *   CPUID Fn8000_001F[EBX]
+	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
+	 */
+	eax = 0x8000001f;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	/* Check whether SEV is supported */
+	if (!(eax & BIT(1)))
+		return;
+
+	/* Set the SME mask if this is an SEV guest. */
+	sev_status   = rd_sev_status_msr();
+
+	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
+		return;
+
+	sme_me_mask = BIT_ULL(ebx & 0x3f);
+}