
x86/CPU/AMD, mm: Extend with mem_encrypt=sme option

Message ID 20171001194509.4187-1-brijesh.singh@amd.com (mailing list archive)
State New, archived

Commit Message

Brijesh Singh Oct. 1, 2017, 7:45 p.m. UTC
>
> So I want to be able to disable SEV and the whole code that comes with
> it in the *host*.

We can add a new variable, 'sme_only'. By default this variable is set to
false; when mem_encrypt=sme is passed it is set to true, and based on the
sme_only state early_detect_mem_encrypt() can clear the X86_FEATURE_SEV
flag.

Here are the changes on top of your patch. I did a quick test in both the host
and the guest OS and it seems to be working okay. In the host OS mem_encrypt=sme
disabled SEV, while in the guest it is still a don't-care. I will do more
testing later...
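
Condensed, the flow the changes at the bottom implement looks like this (an
illustration only, not the exact hunks):

    /* arch/x86/mm/mem_encrypt.c, sme_enable(): command line handling */
    } else if (!strncmp(buffer, cmd_sme, sizeof(buffer))) {
            sme_me_mask = me_mask;  /* mem_encrypt=sme: keep SME active... */
            sme_only = true;        /* ...but remember SEV was not requested */
            return;
    }

    /* arch/x86/kernel/cpu/amd.c, early_detect_mem_encrypt() */
    if (sme_only)
            goto clear_sev;         /* clear X86_FEATURE_SEV on the host */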

Comments

Borislav Petkov Oct. 1, 2017, 10:02 p.m. UTC | #1
On Sun, Oct 01, 2017 at 02:45:09PM -0500, Brijesh Singh wrote:
> >
> > So I want to be able to disable SEV and the whole code that comes with
> > it in the *host*.
> 
> We can add a new variable, 'sme_only'. By default this variable is set to
> false; when mem_encrypt=sme is passed it is set to true, and based on the
> sme_only state early_detect_mem_encrypt() can clear the X86_FEATURE_SEV
> flag.

Why would you need yet another variable? We have sev_enabled already?!?
Brijesh Singh Oct. 2, 2017, 11:32 a.m. UTC | #2
On 10/1/17 5:02 PM, Borislav Petkov wrote:
> On Sun, Oct 01, 2017 at 02:45:09PM -0500, Brijesh Singh wrote:
>>> So I want to be able to disable SEV and the whole code that comes with
>>> it in the *host*.
>> We can add a new variable, 'sme_only'. By default this variable is set to
>> false; when mem_encrypt=sme is passed it is set to true, and based on the
>> sme_only state early_detect_mem_encrypt() can clear the X86_FEATURE_SEV
>> flag.
> Why would you need yet another variable? We have sev_enabled already?!?


Because sev_enabled will always be 'false' when we are booting on bare
metal, whereas when we are running under a hypervisor this variable
will be true for an SEV guest; please see [1]. Both sev_active() and
sme_active() make use of this variable, hence we will not be able to set
the sev_enabled variable on bare metal. Basically none of the SEV cases
will be executed on bare metal -- the only thing we need to take care
of is clearing the X86_FEATURE_SEV flag so that the hypervisor will never
launch an SEV guest when the mem_encrypt=sme option is provided.


[1] https://marc.info/?l=linux-kernel&m=150672050612826&w=2

>
Borislav Petkov Oct. 2, 2017, 12:41 p.m. UTC | #3
On Mon, Oct 02, 2017 at 06:32:18AM -0500, Brijesh Singh wrote:
> Because sev_enabled will always be 'false' when we are booting on bare
> metal, whereas when we are running under a hypervisor this variable
> will be true for an SEV guest; please see [1].

Ok, then. This needs absolutely to be documented. Please add a comment
over sev_enabled's definition.
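
Something along these lines would do (the wording is only a suggestion):

    /*
     * sev_enabled is only ever set when we are running as an SEV guest
     * under a hypervisor; on bare metal it always stays false.
     */
    static bool sev_enabled __section(.data) = false;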

> Both sev_active() and sme_active() make use of this variable,
> hence we will not be able to set the sev_enabled variable on bare
> metal. Basically none of the SEV cases will be executed on bare
> metal -- the only thing we need to take care of is clearing the
> X86_FEATURE_SEV flag so that the hypervisor will never launch an SEV
> guest when the mem_encrypt=sme option is provided.

In that case, you want to disable SEV at the guest loading point,
i.e., sev_guest_init(), which AFAICT is the earliest time we start
prepping an SEV guest. You can add a __setup() early param to
arch/x86/kernel/cpu/amd.c which parses "mem_encrypt=sme" and sets a
sev_host_enabled bool or so. sev_guest_init() can then check that
variable before going any further. No need for any of those early
parsing changes.
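
Roughly like this (a quick sketch of the idea only, not a finished patch;
the names and whether __setup() or early_param() fits better are open):

    /* arch/x86/kernel/cpu/amd.c */
    bool sev_host_enabled __read_mostly = true;
    EXPORT_SYMBOL_GPL(sev_host_enabled);       /* so kvm-amd can check it */

    static int __init parse_mem_encrypt(char *arg)
    {
            /* mem_encrypt=sme: keep SME on the host, never offer SEV to guests */
            if (arg && !strcmp(arg, "sme"))
                    sev_host_enabled = false;
            return 1;
    }
    __setup("mem_encrypt=", parse_mem_encrypt);

    /* ...and in kvm-amd's sev_guest_init(), before doing anything else: */
            if (!sev_host_enabled)
                    return -EINVAL;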

I'll send a patch with the rest of my cleanups ontop of yours later.

Thx.
Brijesh Singh Oct. 2, 2017, 3:07 p.m. UTC | #4
On 10/02/2017 07:41 AM, Borislav Petkov wrote:
> On Mon, Oct 02, 2017 at 06:32:18AM -0500, Brijesh Singh wrote:
>> Because sev_enabled will always be 'false' when we are booting on bare
>> metal, whereas when we are running under a hypervisor this variable
>> will be true for an SEV guest; please see [1].
> 
> Ok, then. This needs absolutely to be documented. Please add a comment
> over sev_enabled's definition.
> 
>> Both sev_active() and sme_active() make use of this variable,
>> hence we will not be able to set the sev_enabled variable on bare
>> metal. Basically none of the SEV cases will be executed on bare
>> metal -- the only thing we need to take care of is clearing the
>> X86_FEATURE_SEV flag so that the hypervisor will never launch an SEV
>> guest when the mem_encrypt=sme option is provided.
> 
> In that case, you want to disable SEV at the guest loading point,
> i.e., sev_guest_init(), which AFAICT is the earliest time we start
> prepping an SEV guest. You can add a __setup() early param to
> arch/x86/kernel/cpu/amd.c which parses "mem_encrypt=sme" and sets a
> sev_host_enabled bool or so. sev_guest_init() can then check that
> variable before going any further. No need for any of those early
> parsing changes.


Yep, that will work just fine. There are a couple of ways we can prevent the
hypervisor from creating an SEV guest: 1) clear the X86_FEATURE_SEV bit
when mem_encrypt=sme is passed, or 2) parse mem_encrypt=xxx in kvm-amd.ko
and fail KVM_SEV_INIT when mem_encrypt=sme or mem_encrypt=off.

I was not sure which way to go. We can go with #2, and if the KVM folks have
any concerns then we can go back to #1. I will update the KVM_SEV_INIT patch
and send you v4.1.
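
FWIW, option #2 would boil down to something like this in the KVM_SEV_INIT
path (just a sketch; cmdline_find_option() is what I would reach for to do
the parsing in kvm-amd.ko):

    /* kvm-amd: refuse SEV guests when the host booted with
     * mem_encrypt=sme or mem_encrypt=off
     */
    char buf[16];

    if (cmdline_find_option(boot_command_line, "mem_encrypt",
                            buf, sizeof(buf)) > 0 &&
        (!strcmp(buf, "sme") || !strcmp(buf, "off")))
            return -EINVAL;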

-Brijesh
Paolo Bonzini Oct. 3, 2017, 10:50 a.m. UTC | #5
On 02/10/2017 17:07, Brijesh Singh wrote:
> 
> 
> Yep, that will work just fine. There are a couple of ways we can prevent
> the hypervisor from creating an SEV guest: 1) clear the X86_FEATURE_SEV
> bit when mem_encrypt=sme is passed, or 2) parse mem_encrypt=xxx in
> kvm-amd.ko and fail KVM_SEV_INIT when mem_encrypt=sme or mem_encrypt=off.

Stupid question ahead: if it's just about guests, why bother with
mem_encrypt=xxx at all?  kvm_amd should have a sev parameter anyway, you
can just do kvm_amd.sev=0 to disable it.
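
I.e. the usual knob, something like this (a sketch; no such parameter
exists in kvm-amd yet):

    /* kvm-amd */
    static bool sev = true;
    module_param(sev, bool, 0444);
    MODULE_PARM_DESC(sev, "SEV guest support (kvm_amd.sev=0 disables it)");

...and KVM_SEV_INIT simply fails when !sev.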

Paolo
Borislav Petkov Oct. 3, 2017, 11:20 a.m. UTC | #6
On Tue, Oct 03, 2017 at 12:50:15PM +0200, Paolo Bonzini wrote:
> Stupid question ahead: if it's just about guests, why bother with
> mem_encrypt=xxx at all?  kvm_amd should have a sev parameter anyway, you
> can just do kvm_amd.sev=0 to disable it.

Yes, it is only about guests, so this could be a viable solution too.
I initially wanted to be able to disable all that SEV code, but from a
quick glance over it, it is mostly behind an if (sev_active()) check, so
I think the module arg should be good enough too.

Thx.

Patch

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 175310f00202..73a6fb3b14a1 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -19,7 +19,7 @@ 
 
 #include <asm/bootparam.h>
 
-extern bool sev_enabled;
+extern bool sme_only;
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index d0669f3966a6..a09b02959874 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -33,7 +33,7 @@  static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);
  */
 static u32 nodes_per_socket = 1;
 
-bool sev_enabled __section(.data) = false;
+bool sme_only __section(.data) = false;
 
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
@@ -591,7 +591,7 @@  static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
 		if (IS_ENABLED(CONFIG_X86_32))
 			goto clear_all;
 
-		if (!sev_enabled)
+		if (sme_only)
 			goto clear_sev;
 
 		rdmsrl(MSR_K7_HWCR, msr);
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 9b83bc1be7c0..a135e4497021 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -45,6 +45,7 @@  u64 sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(__sev);
 EXPORT_SYMBOL_GPL(__sev);
+static bool sev_enabled __section(.data) = false;
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
@@ -773,7 +774,6 @@  void __init __nostackprotector sme_enable(struct boot_params *bp)
 	unsigned long feature_mask;
 	u64 me_mask, msr;
 	char buffer[16];
-	bool sme_only;
 	int ret;
 
 	/* Check for the SME/SEV support leaf */
@@ -808,6 +808,8 @@  void __init __nostackprotector sme_enable(struct boot_params *bp)
 	if (!(eax & feature_mask))
 		return;
 
+	me_mask = BIT_ULL(ebx & 0x3f);
+
 	/* For SME, check the SYSCFG MSR */
 	if (feature_mask == AMD_SME_BIT) {
 		msr = __rdmsr(MSR_K8_SYSCFG);
@@ -820,9 +822,13 @@  void __init __nostackprotector sme_enable(struct boot_params *bp)
 		msr = __rdmsr(MSR_AMD64_SEV);
 		if (!(msr & MSR_AMD64_SEV_ENABLED))
 			return;
-	}
 
-	me_mask = BIT_ULL(ebx & 0x3f);
+		if (feature_mask == AMD_SEV_BIT) {
+			sme_me_mask = me_mask;
+			sev_enabled = true;
+			return;
+		}
+	}
 
 	/*
 	 * Fixups have not been applied to phys_base yet and we're running
@@ -847,16 +853,11 @@  void __init __nostackprotector sme_enable(struct boot_params *bp)
 	} else if (!strncmp(buffer, cmd_on, sizeof(buffer))) {
 		sme_me_mask = me_mask;
 	} else if (!strncmp(buffer, cmd_sme, sizeof(buffer))) {
+		sme_me_mask = me_mask;
 		sme_only = true;
+		return;
 	}
 
 	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
 		sme_me_mask = me_mask;
-
-	if (sme_only)
-		return;
-
-	/* For SEV, check the SEV MSR */
-	if (feature_mask == AMD_SEV_BIT)
-		sev_enabled = true;
 }