Message ID | 148846754069.2349.4698319264278045964.stgit@brijesh-build-machine (mailing list archive)
---|---
State | Not Applicable
Delegated to | Herbert Xu
On Thu, Mar 02, 2017 at 10:12:20AM -0500, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
>
> Provide support for Secure Encrypted Virtualization (SEV). This initial
> support defines a flag that is used by the kernel to determine if it is
> running with SEV active.
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>

Btw, you need to add your Signed-off-by here after Tom's to denote that
you're handing that patch forward.
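(For illustration, the resulting trailer block would then carry both sign-offs
in handling order - the submitter's address below is an assumption, it is not
taken from this page:

    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>

i.e. the original author first, then each person who forwarded the patch.)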
On Thu, Mar 02, 2017 at 10:12:20AM -0500, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@amd.com>
>
> Provide support for Secure Encrypted Virtualization (SEV). This initial
> support defines a flag that is used by the kernel to determine if it is
> running with SEV active.
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h | 14 +++++++++++++-
>  arch/x86/mm/mem_encrypt.c          |  3 +++
>  include/linux/mem_encrypt.h        |  6 ++++++
>  3 files changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 1fd5426..9799835 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -20,10 +20,16 @@
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>
>  extern unsigned long sme_me_mask;
> +extern unsigned int sev_enabled;

So there's a function named sev_enabled() and an int sev_enabled too. It
looks to me like you want to call the function "sev_enable()" - similar to
sme_enable() - convert it to C code - i.e., I don't see what would speak
against it - and rename that sev_enc_bit to sev_enabled and use it
everywhere when testing SEV status.

>  static inline bool sme_active(void)
>  {
> -	return (sme_me_mask) ? true : false;
> +	return (sme_me_mask && !sev_enabled) ? true : false;
> +}
> +
> +static inline bool sev_active(void)
> +{
> +	return (sme_me_mask && sev_enabled) ? true : false;

Then, those read strange: like SME and SEV are mutually exclusive. Why? I
might have an idea but I'd like for you to confirm it :-)

Then, you're calling sev_enabled in startup_32() but we can enter in
arch/x86/boot/compressed/head_64.S::startup_64() too, when we're loaded by
a 64-bit bootloader, which would then theoretically bypass sev_enabled().
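A purely illustrative, stand-alone C sketch of the direction the comment
above points at - a sev_enable() setup step (the name follows the review's
suggestion, by analogy with sme_enable(); the stubbed detection is an
assumption, not the posted code) latching SEV state into the single
sev_enabled flag that the accessors then test:

#include <stdbool.h>
#include <stdio.h>

static unsigned long sme_me_mask;   /* encryption mask, non-zero when SME/SEV is on */
static unsigned int  sev_enabled;   /* 1 when running as an SEV guest */

/* Stub standing in for the real early detection (CPUID/MSR checks in the
 * actual series); here the answer is simply passed in. */
static void sev_enable(bool guest_is_sev, unsigned long me_mask)
{
	sme_me_mask = me_mask;
	sev_enabled = guest_is_sev ? 1 : 0;
}

/* Accessors as posted: with one flag, SME and SEV report as mutually
 * exclusive - exactly the point the review asks to be confirmed. */
static inline bool sme_active(void)
{
	return (sme_me_mask && !sev_enabled) ? true : false;
}

static inline bool sev_active(void)
{
	return (sme_me_mask && sev_enabled) ? true : false;
}

int main(void)
{
	sev_enable(true, 1UL << 47);    /* pretend the boot code detected an SEV guest */
	printf("sme_active()=%d sev_active()=%d\n", sme_active(), sev_active());
	return 0;
}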
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 1fd5426..9799835 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,10 +20,16 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
+extern unsigned int sev_enabled;
 
 static inline bool sme_active(void)
 {
-	return (sme_me_mask) ? true : false;
+	return (sme_me_mask && !sev_enabled) ? true : false;
+}
+
+static inline bool sev_active(void)
+{
+	return (sme_me_mask && sev_enabled) ? true : false;
 }
 
 static inline u64 sme_dma_mask(void)
@@ -53,6 +59,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 #ifndef sme_me_mask
 
 #define sme_me_mask	0UL
+#define sev_enabled	0
 
 static inline bool sme_active(void)
 {
@@ -64,6 +71,11 @@ static inline u64 sme_dma_mask(void)
 	return 0ULL;
 }
 
+static inline bool sev_active(void)
+{
+	return false;
+}
+
 static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
 {
 	return 0;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c5062e1..090419b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -34,6 +34,9 @@ void __init __early_pgtable_flush(void);
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
+unsigned int sev_enabled __section(.data) = 0;
+EXPORT_SYMBOL_GPL(sev_enabled);
+
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
 
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 913cf80..4b47c73 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -23,6 +23,7 @@
 #ifndef sme_me_mask
 
 #define sme_me_mask	0UL
+#define sev_enabled	0
 
 static inline bool sme_active(void)
 {
@@ -34,6 +35,11 @@ static inline u64 sme_dma_mask(void)
 	return 0ULL;
 }
 
+static inline bool sev_active(void)
+{
+	return false;
+}
+
 static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
 {
 	return 0;
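A side note on the stub hunks above: when CONFIG_AMD_MEM_ENCRYPT is off,
sme_me_mask and sev_enabled become compile-time zeros, so sme_active() and
sev_active() tests in generic code fold away. A minimal stand-alone sketch of
that pattern, using a made-up CONFIG_DEMO_FEATURE knob in place of the
kernel's Kconfig machinery:

#include <stdbool.h>
#include <stdio.h>

#ifdef CONFIG_DEMO_FEATURE
/* In the kernel these are externs defined in arch code; for this one-file
 * demo they are plain globals preset to "feature on" values. */
static unsigned long demo_me_mask = 1UL << 47;
static unsigned int  demo_enabled = 1;
#else
/* Feature compiled out: fall back to constants so callers still build. */
#define demo_me_mask	0UL
#define demo_enabled	0
#endif

static inline bool demo_active(void)
{
	/* With the #define fallback this is (0UL && 0), which the compiler
	 * folds to false, letting it drop the guarded code entirely. */
	return (demo_me_mask && demo_enabled) ? true : false;
}

int main(void)
{
	if (demo_active())
		printf("feature path taken\n");     /* dead code in the !CONFIG build */
	else
		printf("feature off or compiled out\n");
	return 0;
}

Building it plain (cc demo.c) exercises the compiled-out branch; building with
-DCONFIG_DEMO_FEATURE exercises the other - the same shape the patch relies on
so that common code can test sev_active() without #ifdefs.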