Message ID: 20220128171804.569796-6-brijesh.singh@amd.com (mailing list archive)
State: New
Series: Add AMD Secure Nested Paging (SEV-SNP) Guest Support
On Tue, Feb 01, 2022 at 07:08:21PM +0100, Borislav Petkov wrote:
> On Fri, Jan 28, 2022 at 11:17:26AM -0600, Brijesh Singh wrote:
> > diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> > index fd9441f40457..49064a9f96e2 100644
> > --- a/arch/x86/boot/compressed/head_64.S
> > +++ b/arch/x86/boot/compressed/head_64.S
> > @@ -191,9 +191,8 @@ SYM_FUNC_START(startup_32)
> >  	/*
> >  	 * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
> >  	 * will do a check. The sev_status memory will be fully initialized
>
> That "sev_status memory" formulation is just weird. Pls fix it while
> you're touching that comment.

Will do.

> > +static inline u64 rd_sev_status_msr(void)
> > +{
> > +	unsigned long low, high;
> > +
> > +	asm volatile("rdmsr" : "=a" (low), "=d" (high) :
> > +			"c" (MSR_AMD64_SEV));
> > +
> > +	return ((high << 32) | low);
> > +}
>
> Don't you see sev_es_rd_ghcb_msr() in that same file above? Do a common
> rdmsr() helper and call it where needed, pls, instead of duplicating
> code.

Unfortunately rdmsr()/wrmsr()/__rdmsr()/__wrmsr() etc. definitions are all
already getting pulled in via:

  misc.h:
    #include <linux/elf.h>
      #include <linux/thread_info.h>
        #include <linux/cpufeature.h>
          #include <linux/processor.h>
            #include <linux/msr.h>

Those definitions aren't usable in boot/compressed because of __ex_table
and possibly some other dependency hellishness.

Would read_msr()/write_msr() be reasonable alternative names for these new
helpers, or something else that better distinguishes them from the kernel
proper definitions?

> misc.h looks like a good place.

It doesn't look like anything in boot/ pulls in boot/compressed/ headers.
It seems to be the other way around, with boot/compressed pulling in
headers and whole C files from boot/. So perhaps these new definitions
should be added to a small boot/msr.h header and pulled in from there?

> Extra bonus points will be given if you unify callers in
> arch/x86/boot/cpucheck.c too but you don't have to - I can do that
> ontop.

I have these new helpers defined with similar signatures to __rdmsr/__wrmsr:

  /* rdmsr/wrmsr helpers */
  static inline u64 read_msr(unsigned int msr)
  {
  	u64 low, high;

  	asm volatile("rdmsr" : "=a" (low), "=d" (high) : "c" (msr));

  	return ((high << 32) | low);
  }

  static inline void write_msr(unsigned int msr, u32 low, u32 high)
  {
  	asm volatile("wrmsr" : : "c" (msr), "a" (low), "d" (high) : "memory");
  }

but the cpucheck.c code flow really lends itself to having a read_msr()
variant that loads into two separate high/low u32 values, like what
native_rdmsr() does:

  #define native_rdmsr(msr, val1, val2)			\
  do {							\
  	u64 __val = __rdmsr((msr));			\
  	(void)((val1) = (u32)__val);			\
  	(void)((val2) = (u32)(__val >> 32));		\
  } while (0)

Should we introduce something like this as well for cpucheck.c? Or
re-write cpucheck.c to make use of the u64 versions? Or just set the
cpucheck.c rework aside for now (but still introduce the above helpers
in boot/msr.h in preparation)?

Thanks,

Mike

> Thx.
>
> --
> Regards/Gruss,
>     Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette
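For illustration, a split-register variant in the style of native_rdmsr()
could be layered on top of the u64 helper above along the following lines.
This is only a sketch of the option being floated; the boot_rdmsr() name is
hypothetical and not something proposed in the thread:

  /* Sketch: split the u64 from read_msr() into low/high u32 halves. */
  #define boot_rdmsr(msr, low, high)			\
  do {							\
  	u64 __val = read_msr((msr));			\
  	(void)((low)  = (u32)__val);			\
  	(void)((high) = (u32)(__val >> 32));		\
  } while (0)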
On Tue, Feb 01, 2022 at 10:28:39PM +0100, Borislav Petkov wrote:
> On Tue, Feb 01, 2022 at 02:35:07PM -0600, Michael Roth wrote:
> > Unfortunately rdmsr()/wrmsr()/__rdmsr()/__wrmsr() etc. definitions are all
> > already getting pulled in via:
> >
> >   misc.h:
> >     #include <linux/elf.h>
> >       #include <linux/thread_info.h>
> >         #include <linux/cpufeature.h>
> >           #include <linux/processor.h>
> >             #include <linux/msr.h>
> >
> > Those definitions aren't usable in boot/compressed because of __ex_table
> > and possibly some other dependency hellishness.
>
> And they should not be. Mixing kernel proper and decompressor code needs
> to stop and untangling that is a multi-year effort, unfortunately. ;-\
>
> > Would read_msr()/write_msr() be reasonable alternative names for these new
> > helpers, or something else that better distinguishes them from the kernel
> > proper definitions?
>
> Nah, just call them rdmsr/wrmsr(). There are already {read,write}_msr()
> tracepoint symbols in kernel proper and there's no point in keeping them
> apart using different names - that ship has long sailed.

Since the kernel proper rdmsr()/wrmsr() definitions are getting pulled in
via misc.h, I have to use a different name to avoid compiler errors. For
now I've gone with rd_msr()/wr_msr(), but no problem changing those if
needed.

> > Should we introduce something like this as well for cpucheck.c? Or
> > re-write cpucheck.c to make use of the u64 versions? Or just set the
> > cpucheck.c rework aside for now (but still introduce the above helpers
> > in boot/msr.h in preparation)?
>
> How about you model it after
>
>   static int msr_read(u32 msr, struct msr *m)
>
> from arch/x86/lib/msr.c, which takes a struct msr from which you can
> return either u32s or a u64?
>
> The stuff you share between the decompressor and kernel proper you put
> in an arch/x86/include/asm/shared/ folder; for an example, see what we
> do there in the TDX patchset:
>
>   https://lore.kernel.org/r/20220124150215.36893-11-kirill.shutemov@linux.intel.com
>
> I.e., you move struct msr into such a shared header and then you include
> it everywhere needed.
>
> The arch/x86/boot/ msr helpers are then plain and simple, without
> tracepoints and exception fixups, and you define them in ...boot/msr.c
> or so.
>
> If the patch gets too big, make sure to split it in a couple so that it
> is clear what happens at each step.
>
> How does that sound?

Thanks for the suggestions, this works out nicely. As far as defining them
in boot/msr.c goes, though, the Makefile in boot/compressed doesn't
currently link against any objects in boot/, and the status quo seems to
be #include'ing the whole C file in cases where boot/ code is needed in
boot/compressed. Since the rd_msr()/wr_msr() helpers are one-liners, it
seemed a little cleaner to just define them in boot/msr.h as static inline
functions and include them directly from that header.
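To make the struct msr suggestion concrete, the boot/msr.h helpers could
look roughly like this. This is a sketch only, assuming struct msr has been
moved into a shared header as in the referenced TDX patchset; the final
names and form may differ:

  /* arch/x86/boot/msr.h - sketch of decompressor-safe MSR helpers */
  #include <asm/shared/msr.h>	/* struct msr: u32 l, h overlaid with u64 q */

  static inline void rd_msr(unsigned int msr, struct msr *m)
  {
  	asm volatile("rdmsr" : "=a" (m->l), "=d" (m->h) : "c" (msr));
  }

  static inline void wr_msr(unsigned int msr, struct msr *m)
  {
  	asm volatile("wrmsr" : : "c" (msr), "a" (m->l), "d" (m->h) : "memory");
  }

Callers that want a u64 read m->q; callers like cpucheck.c that want the
split registers read m->l and m->h, which sidesteps the earlier question
about needing two helper variants.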
Here's what it looks like on top of this tree, and roughly how I plan to
split the patches for v10:

 - define the rd_msr()/wr_msr() helpers:
   https://github.com/mdroth/linux/commit/982c6c5741478c8f634db8ac0ba36575b5eff946

 - use the helpers in boot/compressed/sev.c and boot/cpucheck.c:
   https://github.com/mdroth/linux/commit/a16e11f727c01fc478d3b741e1bdd2fd44975d7c

For v10 though I'll likely just drop rd_sev_status_msr() completely and
use rd_msr() directly. Let me know if I should make any changes and I'll
make sure to get those in for the next spin.

> Thx.
>
> --
> Regards/Gruss,
>     Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette
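With such a helper in place, the open-coded rd_sev_status_msr() from the
patch below would reduce to a direct call, along these lines (a sketch
assuming the struct msr based rd_msr() signature discussed above):

  	struct msr m;

  	/* Read SEV_STATUS via the generic helper instead of inline asm. */
  	rd_msr(MSR_AMD64_SEV, &m);
  	sev_status = m.q;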
On Wed, Feb 02, 2022 at 07:09:24AM +0100, Borislav Petkov wrote:
> On Tue, Feb 01, 2022 at 06:52:12PM -0600, Michael Roth wrote:
> > Since the kernel proper rdmsr()/wrmsr() definitions are getting pulled in
> > via misc.h, I have to use a different name to avoid compiler errors. For
> > now I've gone with rd_msr()/wr_msr(), but no problem changing those if
> > needed.
>
> Does that fix it too?
>
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 16ed360b6692..346c46d072c8 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -21,7 +21,6 @@
>
>  #include <linux/linkage.h>
>  #include <linux/screen_info.h>
> -#include <linux/elf.h>
>  #include <linux/io.h>
>  #include <asm/page.h>
>  #include <asm/boot.h>
> ---
>
> This is exactly what I mean with a multi-year effort of untangling what
> has been mindlessly mixed in over the years...

Indeed... it looks like linux/{elf,io,efi,acpi}.h all end up pulling in
kernel proper's rdmsr()/wrmsr() definitions, and pulling them out ends up
breaking a bunch of other stuff, so I think we might be stuck using a
different name like rd_msr()/wr_msr() in the meantime.
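The clash is at the preprocessor level: kernel proper defines rdmsr() as a
three-argument macro, so a one-argument function of the same name in boot/
code fails to even parse once those headers are in scope. A minimal
illustration (not from the thread, simplified for clarity):

  /* kernel proper, arch/x86/include/asm/msr.h (simplified): */
  #define rdmsr(msr, low, high)	...

  /* a boot/ helper reusing the name would not even compile: */
  static inline u64 rdmsr(unsigned int msr);	/* macro expands with too few
  						   arguments -> compile error */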
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index fd9441f40457..49064a9f96e2 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -191,9 +191,8 @@ SYM_FUNC_START(startup_32)
 	/*
 	 * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
 	 * will do a check. The sev_status memory will be fully initialized
-	 * with the contents of MSR_AMD_SEV_STATUS later in
-	 * set_sev_encryption_mask(). For now it is sufficient to know that SEV
-	 * is active.
+	 * with the contents of MSR_AMD_SEV_STATUS later via sev_enable(). For
+	 * now it is sufficient to know that SEV is active.
 	 */
 	movl	$1, rva(sev_status)(%ebp)
 1:
@@ -447,6 +446,23 @@ SYM_CODE_START(startup_64)
 	call	load_stage1_idt
 	popq	%rsi
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/*
+	 * Now that the stage1 interrupt handlers are set up, #VC exceptions from
+	 * CPUID instructions can be properly handled for SEV-ES guests.
+	 *
+	 * For SEV-SNP, the CPUID table also needs to be set up in advance of any
+	 * CPUID instructions being issued, so go ahead and do that now via
+	 * sev_enable(), which will also handle the rest of the SEV-related
+	 * detection/setup to ensure that has been done in advance of any dependent
+	 * code.
+	 */
+	pushq	%rsi
+	movq	%rsi, %rdi		/* real mode address */
+	call	sev_enable
+	popq	%rsi
+#endif
+
 	/*
 	 * paging_prepare() sets up the trampoline and checks if we need to
 	 * enable 5-level paging.
@@ -559,17 +575,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 	shrq	$3, %rcx
 	rep	stosq
 
-/*
- * If running as an SEV guest, the encryption mask is required in the
- * page-table setup code below. When the guest also has SEV-ES enabled
- * set_sev_encryption_mask() will cause #VC exceptions, but the stage2
- * handler can't map its GHCB because the page-table is not set up yet.
- * So set up the encryption mask here while still on the stage1 #VC
- * handler. Then load stage2 IDT and switch to the kernel's own
- * page-table.
- */
 	pushq	%rsi
-	call	set_sev_encryption_mask
 	call	load_stage2_idt
 
 	/* Pass boot_params to initialize_identity_maps() */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index a63424d13627..a73e4d783cae 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -187,42 +187,6 @@ SYM_CODE_END(startup32_vc_handler)
 
 	.code64
 #include "../../kernel/sev_verify_cbit.S"
 
-SYM_FUNC_START(set_sev_encryption_mask)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	push	%rbp
-	push	%rdx
-
-	movq	%rsp, %rbp		/* Save current stack pointer */
-
-	call	get_sev_encryption_bit	/* Get the encryption bit position */
-	testl	%eax, %eax
-	jz	.Lno_sev_mask
-
-	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
-
-	/*
-	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
-	 * get_sev_encryption_bit() because this function is 32-bit code and
-	 * shared between 64-bit and 32-bit boot path.
-	 */
-	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
-	rdmsr
-
-	/* Store MSR value in sev_status */
-	shlq	$32, %rdx
-	orq	%rdx, %rax
-	movq	%rax, sev_status(%rip)
-
-.Lno_sev_mask:
-	movq	%rbp, %rsp		/* Restore original stack pointer */
-
-	pop	%rdx
-	pop	%rbp
-#endif
-
-	xor	%rax, %rax
-	RET
-SYM_FUNC_END(set_sev_encryption_mask)
 
 	.data
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 16ed360b6692..23e0e395084a 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -120,12 +120,12 @@ static inline void console_init(void)
 { }
 #endif
 
-void set_sev_encryption_mask(void);
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+void sev_enable(struct boot_params *bp);
 void sev_es_shutdown_ghcb(void);
 extern bool sev_es_check_ghcb_fault(unsigned long address);
 #else
+static inline void sev_enable(struct boot_params *bp) { }
 static inline void sev_es_shutdown_ghcb(void) { }
 static inline bool sev_es_check_ghcb_fault(unsigned long address)
 {
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 28bcf04c022e..c88d7e17a71a 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -204,3 +204,47 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	else if (result != ES_RETRY)
 		sev_es_terminate(GHCB_SEV_ES_GEN_REQ);
 }
+
+static inline u64 rd_sev_status_msr(void)
+{
+	unsigned long low, high;
+
+	asm volatile("rdmsr" : "=a" (low), "=d" (high) :
+			"c" (MSR_AMD64_SEV));
+
+	return ((high << 32) | low);
+}
+
+void sev_enable(struct boot_params *bp)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	/* Check for the SME/SEV support leaf */
+	eax = 0x80000000;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	if (eax < 0x8000001f)
+		return;
+
+	/*
+	 * Check for the SME/SEV feature:
+	 *   CPUID Fn8000_001F[EAX]
+	 *   - Bit 0 - Secure Memory Encryption support
+	 *   - Bit 1 - Secure Encrypted Virtualization support
+	 *   CPUID Fn8000_001F[EBX]
+	 *   - Bits 5:0 - Pagetable bit position used to indicate encryption
+	 */
+	eax = 0x8000001f;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	/* Check whether SEV is supported */
+	if (!(eax & BIT(1)))
+		return;
+
+	/* Set the SME mask if this is an SEV guest. */
+	sev_status = rd_sev_status_msr();
+	if (!(sev_status & MSR_AMD64_SEV_ENABLED))
+		return;
+
+	sme_me_mask = BIT_ULL(ebx & 0x3f);
+}
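As a worked example of the final mask computation in sev_enable() above:
CPUID Fn8000_001F[EBX][5:0] reports the page-table bit position used to
mark a page encrypted, e.g. bit 47 on first-generation EPYC parts (the
value here is illustrative, it varies by CPU):

  	unsigned int ebx = 47;			/* C-bit position from CPUID */
  	u64 mask = BIT_ULL(ebx & 0x3f);		/* 1ULL << 47 == 0x0000800000000000 */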