
[RFC,v3,04/20] x86: Handle reduction in physical address size with SME

Message ID: 20161110003513.3280.12104.stgit@tlendack-t1.amdoffice.net
State: New, archived

Commit Message

Tom Lendacky Nov. 10, 2016, 12:35 a.m. UTC
When Secure Memory Encryption (SME) is enabled, the physical address
space is reduced. Adjust the x86_phys_bits value to reflect this
reduction.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/msr-index.h |    2 ++
 arch/x86/kernel/cpu/common.c     |   30 ++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+)



Comments

Joerg Roedel Nov. 15, 2016, 12:10 p.m. UTC | #1
On Wed, Nov 09, 2016 at 06:35:13PM -0600, Tom Lendacky wrote:
> +/*
> + * AMD Secure Memory Encryption (SME) can reduce the size of the physical
> + * address space if it is enabled, even if memory encryption is not active.
> + * Adjust x86_phys_bits if SME is enabled.
> + */
> +static void phys_bits_adjust(struct cpuinfo_x86 *c)
> +{

Better call this function amd_sme_phys_bits_adjust(). This name makes it
clear at the call-site why it is there and what it does.

> +	u32 eax, ebx, ecx, edx;
> +	u64 msr;
> +
> +	if (c->x86_vendor != X86_VENDOR_AMD)
> +		return;
> +
> +	if (c->extended_cpuid_level < 0x8000001f)
> +		return;
> +
> +	/* Check for SME feature */
> +	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
> +	if (!(eax & 0x01))
> +		return;

Maybe add a comment here why you can't use cpu_has (yet).

Borislav Petkov Nov. 15, 2016, 12:14 p.m. UTC | #2
On Tue, Nov 15, 2016 at 01:10:35PM +0100, Joerg Roedel wrote:
> Maybe add a comment here why you can't use cpu_has (yet).

So that could be alleviated by moving this function *after*
init_scattered_cpuid_features(). Then you can simply do *cpu_has().

Also, I'm not sure why we're checking CPUID for the SME feature when we
have sme_get_me_mask() et al which have been setup much earlier...
Tom Lendacky Nov. 15, 2016, 2:32 p.m. UTC | #3
On 11/15/2016 6:10 AM, Joerg Roedel wrote:
> On Wed, Nov 09, 2016 at 06:35:13PM -0600, Tom Lendacky wrote:
>> +/*
>> + * AMD Secure Memory Encryption (SME) can reduce the size of the physical
>> + * address space if it is enabled, even if memory encryption is not active.
>> + * Adjust x86_phys_bits if SME is enabled.
>> + */
>> +static void phys_bits_adjust(struct cpuinfo_x86 *c)
>> +{
> 
> Better call this function amd_sme_phys_bits_adjust(). This name makes it
> clear at the call-site why it is there and what it does.

Will do.

> 
>> +	u32 eax, ebx, ecx, edx;
>> +	u64 msr;
>> +
>> +	if (c->x86_vendor != X86_VENDOR_AMD)
>> +		return;
>> +
>> +	if (c->extended_cpuid_level < 0x8000001f)
>> +		return;
>> +
>> +	/* Check for SME feature */
>> +	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
>> +	if (!(eax & 0x01))
>> +		return;
> 
> Maybe add a comment here why you can't use cpu_has (yet).
> 

Ok, will do.

Thanks,
Tom

Tom Lendacky Nov. 15, 2016, 2:40 p.m. UTC | #4
On 11/15/2016 6:14 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 01:10:35PM +0100, Joerg Roedel wrote:
>> Maybe add a comment here why you can't use cpu_has (yet).
> 
> So that could be alleviated by moving this function *after*
> init_scattered_cpuid_features(). Then you can simply do *cpu_has().

Yes, I can move it after init_scattered_cpuid_features() and then use
the cpu_has() function.  I'll make sure to include a comment that the
function needs to be called after init_scattered_cpuid_features().

> 
> Also, I'm not sure why we're checking CPUID for the SME feature when we
> have sme_get_me_mask() et al which have been setup much earlier...
> 

The feature may be present and enabled even if it is not currently
active.  In other words, the SYS_CFG MSR bit could be set but we aren't
actually using encryption (sme_me_mask is 0).  As long as the SYS_CFG
MSR bit is set we need to take into account the physical reduction in
address space.
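
To put it in code terms, roughly (both pieces are taken from this series,
just shown side by side to illustrate which check keys off what):

	/* Physical address space reduction: keyed off the SYS_CFG MSR bit */
	rdmsrl(MSR_K8_SYSCFG, msr);
	if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT)
		c->x86_phys_bits -= (ebx >> 6) & 0x3f;

	/* Encrypt/decrypt operations: keyed off sme_me_mask */
	if (sme_me_mask)
		sme_early_mem_dec(__pa(base), size);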

Thanks,
Tom
Borislav Petkov Nov. 15, 2016, 3:33 p.m. UTC | #5
On Tue, Nov 15, 2016 at 08:40:05AM -0600, Tom Lendacky wrote:
> The feature may be present and enabled even if it is not currently
> active.  In other words, the SYS_CFG MSR bit could be set but we aren't
> actually using encryption (sme_me_mask is 0).  As long as the SYS_CFG
> MSR bit is set we need to take into account the physical reduction in
> address space.

But later in the series I see sme_early_mem_enc() which tests exactly
that mask.

And in patch 12 you have:

+       /*
+        * If memory encryption is active, the trampoline area will need to
+        * be in un-encrypted memory in order to bring up other processors
+        * successfully.
+        */
+       sme_early_mem_dec(__pa(base), size);
+       sme_set_mem_unenc(base, size);

What's up?

IOW, it all sounds to me like you want to have an sme_active() helper
and use it everywhere.
Tom Lendacky Nov. 15, 2016, 4:06 p.m. UTC | #6
On 11/15/2016 9:33 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 08:40:05AM -0600, Tom Lendacky wrote:
>> The feature may be present and enabled even if it is not currently
>> active.  In other words, the SYS_CFG MSR bit could be set but we aren't
>> actually using encryption (sme_me_mask is 0).  As long as the SYS_CFG
>> MSR bit is set we need to take into account the physical reduction in
>> address space.
> 
> But later in the series I see sme_early_mem_enc() which tests exactly
> that mask.

Yes, but that doesn't relate to the physical address space reduction.

Once the SYS_CFG MSR bit for SME is set, even if the encryption bit is
never used, there is a physical reduction of the address space. So when
checking whether to adjust the physical address bits I can't rely on the
sme_me_mask, I have to look at the MSR.

But when I'm looking to decide whether to encrypt or decrypt something,
I use the sme_me_mask to decide if that is needed.  If the sme_me_mask
is not set then the encrypt/decrypt op shouldn't be performed.

I might not be grasping the point you're trying to make...

Thanks,
Tom

> 
> And in patch 12 you have:
> 
> +       /*
> +        * If memory encryption is active, the trampoline area will need to
> +        * be in un-encrypted memory in order to bring up other processors
> +        * successfully.
> +        */
> +       sme_early_mem_dec(__pa(base), size);
> +       sme_set_mem_unenc(base, size);
> 
> What's up?
> 
> IOW, it all sounds to me like you want to have an sme_active() helper
> and use it everywhere.
> 
Borislav Petkov Nov. 15, 2016, 4:33 p.m. UTC | #7
On Tue, Nov 15, 2016 at 10:06:16AM -0600, Tom Lendacky wrote:
> Yes, but that doesn't relate to the physical address space reduction.
> 
> Once the SYS_CFG MSR bit for SME is set, even if the encryption bit is
> never used, there is a physical reduction of the address space. So when
> checking whether to adjust the physical address bits I can't rely on the
> sme_me_mask, I have to look at the MSR.
> 
> But when I'm looking to decide whether to encrypt or decrypt something,
> I use the sme_me_mask to decide if that is needed.  If the sme_me_mask
> is not set then the encrypt/decrypt op shouldn't be performed.
> 
> I might not be grasping the point you're trying to make...

Ok, let me try to summarize how I see it. There are a couple of states:

* CPUID bit in 0x8000001f - that's SME supported

* Reduction of address space - MSR bit. That could be called "SME
BIOS-enabled".

* SME active. That's both of the above and is sme_me_mask != 0.

Right?

So you said previously "The feature may be present and enabled even if
it is not currently active."

But then you say "active" below

> > And in patch 12 you have:
> > 
> > +       /*
> > +        * If memory encryption is active, the trampoline area will need to
> > +        * be in un-encrypted memory in order to bring up other processors
> > +        * successfully.
> > +        */
> > +       sme_early_mem_dec(__pa(base), size);
> > +       sme_set_mem_unenc(base, size);

and test sme_me_mask. Which makes sense now after having explained which
hw setting controls what.

So can we agree on the nomenclature for all the different SME states
first and use those throughout the code? And hold those states down in
Documentation/x86/amd-memory-encryption.txt so that it is perfectly
clear to people looking at the code.

Also, if we need to check those states more than once, we should add
inline helpers:

sme_supported()
sme_bios_enabled()
sme_active()
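
Roughly something like this, perhaps (just a sketch - naming and where the
CPUID/MSR reads get cached are up for discussion):

static inline bool sme_supported(void)
{
	u32 eax, ebx, ecx, edx;

	/* CPUID 0x8000001f, EAX bit 0: the CPU supports SME */
	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
	return !!(eax & 0x01);
}

static inline bool sme_bios_enabled(void)
{
	u64 msr;

	/* SYS_CFG MSR bit set: BIOS enabled SME, address space is reduced */
	rdmsrl(MSR_K8_SYSCFG, msr);
	return !!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT);
}

static inline bool sme_active(void)
{
	/* Supported, BIOS-enabled and encryption is actually in use */
	return !!sme_me_mask;
}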

How does that sound?
Tom Lendacky Nov. 15, 2016, 5:08 p.m. UTC | #8
On 11/15/2016 10:33 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 10:06:16AM -0600, Tom Lendacky wrote:
>> Yes, but that doesn't relate to the physical address space reduction.
>>
>> Once the SYS_CFG MSR bit for SME is set, even if the encryption bit is
>> never used, there is a physical reduction of the address space. So when
>> checking whether to adjust the physical address bits I can't rely on the
>> sme_me_mask, I have to look at the MSR.
>>
>> But when I'm looking to decide whether to encrypt or decrypt something,
>> I use the sme_me_mask to decide if that is needed.  If the sme_me_mask
>> is not set then the encrypt/decrypt op shouldn't be performed.
>>
>> I might not be grasping the point you're trying to make...
> 
> Ok, let me try to summarize how I see it. There are a couple of states:
> 
> * CPUID bit in 0x8000001f - that's SME supported
> 
> * Reduction of address space - MSR bit. That could be called "SME
> BIOS-enabled".
> 
> * SME active. That's both of the above and is sme_me_mask != 0.
> 
> Right?

Correct.

> 
> So you said previously "The feature may be present and enabled even if
> it is not currently active."
> 
> But then you say "active" below
> 
>>> And in patch 12 you have:
>>>
>>> +       /*
>>> +        * If memory encryption is active, the trampoline area will need to
>>> +        * be in un-encrypted memory in order to bring up other processors
>>> +        * successfully.
>>> +        */
>>> +       sme_early_mem_dec(__pa(base), size);
>>> +       sme_set_mem_unenc(base, size);
> 
> and test sme_me_mask. Which makes sense now after having explained which
> hw setting controls what.
> 
> So can we agree on the nomenclature for all the different SME states
> first and use those throughout the code? And hold those states down in
> Documentation/x86/amd-memory-encryption.txt so that it is perfectly
> clear to people looking at the code.

Yup, that sounds good.  I'll update the documentation to clarify the
various states/modes of SME.

> 
> Also, if we need to check those states more than once, we should add
> inline helpers:
> 
> sme_supported()
> sme_bios_enabled()
> sme_active()
> 
> How does that sound?

Sounds good.

Thanks,
Tom

> 

Tom Lendacky Nov. 15, 2016, 9:22 p.m. UTC | #9
On 11/15/2016 6:14 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 01:10:35PM +0100, Joerg Roedel wrote:
>> Maybe add a comment here why you can't use cpu_has (yet).
> 
> So that could be alleviated by moving this function *after*
> init_scattered_cpuid_features(). Then you can simply do *cpu_has().
> 

Hmmm... I still need the ebx value from the CPUID instruction to
calculate the proper reduction in physical bits, so I'll still need
to make the CPUID call.

Thanks,
Tom

> Also, I'm not sure why we're checking CPUID for the SME feature when we
> have sme_get_me_mask() et al which have been setup much earlier...
> 
Borislav Petkov Nov. 15, 2016, 9:33 p.m. UTC | #10
On Tue, Nov 15, 2016 at 03:22:45PM -0600, Tom Lendacky wrote:
> Hmmm... I still need the ebx value from the CPUID instruction to
> calculate the proper reduction in physical bits, so I'll still need
> to make the CPUID call.

        if (c->extended_cpuid_level >= 0x8000001f) {
                cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);

		...

just like the rest of get_cpu_cap() :)
Tom Lendacky Nov. 15, 2016, 10:01 p.m. UTC | #11
On 11/15/2016 3:33 PM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 03:22:45PM -0600, Tom Lendacky wrote:
>> Hmmm... I still need the ebx value from the CPUID instruction to
>> calculate the proper reduction in physical bits, so I'll still need
>> to make the CPUID call.
> 
>         if (c->extended_cpuid_level >= 0x8000001f) {
>                 cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
> 
> 		...
> 
> just like the rest of get_cpu_cap() :)

Right, which is what the code does now.  I was looking at switching
over to the cpu_has() function and eliminating the cpuid call, but I
still need the cpuid call for the ebx value.
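
So it would end up looking something like this (sketch only, assuming the
scattered feature bit ends up being named X86_FEATURE_SME):

	if (cpu_has(c, X86_FEATURE_SME)) {
		/* Still need EBX for the physical address bit reduction */
		cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);

		rdmsrl(MSR_K8_SYSCFG, msr);
		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT)
			c->x86_phys_bits -= (ebx >> 6) & 0x3f;
	}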

Thanks,
Tom

> 

Patch

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 56f4c66..4949259 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -336,6 +336,8 @@ 
 #define MSR_K8_TOP_MEM1			0xc001001a
 #define MSR_K8_TOP_MEM2			0xc001001d
 #define MSR_K8_SYSCFG			0xc0010010
+#define MSR_K8_SYSCFG_MEM_ENCRYPT_BIT	23
+#define MSR_K8_SYSCFG_MEM_ENCRYPT	BIT_ULL(MSR_K8_SYSCFG_MEM_ENCRYPT_BIT)
 #define MSR_K8_INT_PENDING_MSG		0xc0010055
 /* C1E active bits in int pending message */
 #define K8_INTP_C1E_ACTIVE_MASK		0x18000000
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 9bd910a..82c64a6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -604,6 +604,35 @@  out:
 #endif
 }
 
+/*
+ * AMD Secure Memory Encryption (SME) can reduce the size of the physical
+ * address space if it is enabled, even if memory encryption is not active.
+ * Adjust x86_phys_bits if SME is enabled.
+ */
+static void phys_bits_adjust(struct cpuinfo_x86 *c)
+{
+	u32 eax, ebx, ecx, edx;
+	u64 msr;
+
+	if (c->x86_vendor != X86_VENDOR_AMD)
+		return;
+
+	if (c->extended_cpuid_level < 0x8000001f)
+		return;
+
+	/* Check for SME feature */
+	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
+	if (!(eax & 0x01))
+		return;
+
+	/* Check if SME is enabled */
+	rdmsrl(MSR_K8_SYSCFG, msr);
+	if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
+		return;
+
+	c->x86_phys_bits -= (ebx >> 6) & 0x3f;
+}
+
 static void get_cpu_vendor(struct cpuinfo_x86 *c)
 {
 	char *v = c->x86_vendor_id;
@@ -736,6 +765,7 @@  void get_cpu_cap(struct cpuinfo_x86 *c)
 
 		c->x86_virt_bits = (eax >> 8) & 0xff;
 		c->x86_phys_bits = eax & 0xff;
+		phys_bits_adjust(c);
 		c->x86_capability[CPUID_8000_0008_EBX] = ebx;
 	}
 #ifdef CONFIG_X86_32