From patchwork Mon May 15 16:59:03 2023 X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241831 From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org,
anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V6 01/14] x86/sev: Add a #HV exception handler Date: Mon, 15 May 2023 12:59:03 -0400 Message-Id: <20230515165917.1306922-2-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com> References: <20230515165917.1306922-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add a #HV exception handler that uses IST stack. Co-developed-by: Kalra Ashish Signed-off-by: Tianyu Lan --- Change since RFC V5: * Merge Ashish Kalr patch https://github.com/ ashkalra/linux/commit/6975484094b7cb8d703c45066780dd85043cd040 Change since RFC V2: * Remove unnecessary line in the change log. --- arch/x86/entry/entry_64.S | 22 ++++++---- arch/x86/include/asm/cpu_entry_area.h | 6 +++ arch/x86/include/asm/idtentry.h | 40 +++++++++++++++++- arch/x86/include/asm/page_64_types.h | 1 + arch/x86/include/asm/trapnr.h | 1 + arch/x86/include/asm/traps.h | 1 + arch/x86/kernel/cpu/common.c | 1 + arch/x86/kernel/dumpstack_64.c | 9 ++++- arch/x86/kernel/idt.c | 1 + arch/x86/kernel/sev.c | 53 ++++++++++++++++++++++++ arch/x86/kernel/traps.c | 58 +++++++++++++++++++++++++++ arch/x86/mm/cpu_entry_area.c | 2 + 12 files changed, 183 insertions(+), 12 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index eccc3431e515..653b1f10699b 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -496,7 +496,7 @@ SYM_CODE_END(\asmsym) #ifdef CONFIG_AMD_MEM_ENCRYPT /** - * idtentry_vc - Macro to generate entry stub for #VC + * idtentry_sev - Macro to generate entry stub for #VC * @vector: Vector number * @asmsym: ASM symbol for the entry point * @cfunc: C function to be called @@ -515,14 +515,18 @@ SYM_CODE_END(\asmsym) * * The macro is only used for one vector, but it is planned to be extended in * the future for the #HV exception. - */ -.macro idtentry_vc vector asmsym cfunc +*/ +.macro idtentry_sev vector asmsym cfunc has_error_code:req SYM_CODE_START(\asmsym) UNWIND_HINT_IRET_REGS ENDBR ASM_CLAC cld + .if \vector == X86_TRAP_HV + pushq $-1 /* ORIG_RAX: no syscall */ + .endif + /* * If the entry is from userspace, switch stacks and treat it as * a normal entry. @@ -545,7 +549,12 @@ SYM_CODE_START(\asmsym) * stack. 
*/ movq %rsp, %rdi /* pt_regs pointer */ - call vc_switch_off_ist + .if \vector == X86_TRAP_VC + call vc_switch_off_ist + .else + call hv_switch_off_ist + .endif + movq %rax, %rsp /* Switch to new stack */ ENCODE_FRAME_POINTER @@ -568,10 +577,7 @@ SYM_CODE_START(\asmsym) /* Switch to the regular task stack */ .Lfrom_usermode_switch_stack_\@: - idtentry_body user_\cfunc, has_error_code=1 - -_ASM_NOKPROBE(\asmsym) -SYM_CODE_END(\asmsym) + idtentry_body user_\cfunc, \has_error_code .endm #endif diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h index 462fc34f1317..2186ed601b4a 100644 --- a/arch/x86/include/asm/cpu_entry_area.h +++ b/arch/x86/include/asm/cpu_entry_area.h @@ -30,6 +30,10 @@ char VC_stack[optional_stack_size]; \ char VC2_stack_guard[guardsize]; \ char VC2_stack[optional_stack_size]; \ + char HV_stack_guard[guardsize]; \ + char HV_stack[optional_stack_size]; \ + char HV2_stack_guard[guardsize]; \ + char HV2_stack[optional_stack_size]; \ char IST_top_guard[guardsize]; \ /* The exception stacks' physical storage. No guard pages required */ @@ -52,6 +56,8 @@ enum exception_stack_ordering { ESTACK_MCE, ESTACK_VC, ESTACK_VC2, + ESTACK_HV, + ESTACK_HV2, N_EXCEPTION_STACKS }; diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index b241af4ce9b4..b0f3501b2767 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -317,6 +317,19 @@ static __always_inline void __##func(struct pt_regs *regs) __visible noinstr void kernel_##func(struct pt_regs *regs, unsigned long error_code); \ __visible noinstr void user_##func(struct pt_regs *regs, unsigned long error_code) + +/** + * DECLARE_IDTENTRY_HV - Declare functions for the HV entry point + * @vector: Vector number (ignored for C) + * @func: Function name of the entry point + * + * Maps to DECLARE_IDTENTRY_RAW, but declares also the user C handler. 
+ */ +#define DECLARE_IDTENTRY_HV(vector, func) \ + DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func); \ + __visible noinstr void kernel_##func(struct pt_regs *regs); \ + __visible noinstr void user_##func(struct pt_regs *regs) + /** * DEFINE_IDTENTRY_IST - Emit code for IST entry points * @func: Function name of the entry point @@ -376,6 +389,26 @@ static __always_inline void __##func(struct pt_regs *regs) #define DEFINE_IDTENTRY_VC_USER(func) \ DEFINE_IDTENTRY_RAW_ERRORCODE(user_##func) +/** + * DEFINE_IDTENTRY_HV_KERNEL - Emit code for HV injection handler + * when raised from kernel mode + * @func: Function name of the entry point + * + * Maps to DEFINE_IDTENTRY_RAW + */ +#define DEFINE_IDTENTRY_HV_KERNEL(func) \ + DEFINE_IDTENTRY_RAW(kernel_##func) + +/** + * DEFINE_IDTENTRY_HV_USER - Emit code for HV injection handler + * when raised from user mode + * @func: Function name of the entry point + * + * Maps to DEFINE_IDTENTRY_RAW + */ +#define DEFINE_IDTENTRY_HV_USER(func) \ + DEFINE_IDTENTRY_RAW(user_##func) + #else /* CONFIG_X86_64 */ /** @@ -463,8 +496,10 @@ __visible noinstr void func(struct pt_regs *regs, \ DECLARE_IDTENTRY(vector, func) # define DECLARE_IDTENTRY_VC(vector, func) \ - idtentry_vc vector asm_##func func + idtentry_sev vector asm_##func func has_error_code=1 +# define DECLARE_IDTENTRY_HV(vector, func) \ + idtentry_sev vector asm_##func func has_error_code=0 #else # define DECLARE_IDTENTRY_MCE(vector, func) \ DECLARE_IDTENTRY(vector, func) @@ -618,9 +653,10 @@ DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF, xenpv_exc_double_fault); DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_CP, exc_control_protection); #endif -/* #VC */ +/* #VC & #HV */ #ifdef CONFIG_AMD_MEM_ENCRYPT DECLARE_IDTENTRY_VC(X86_TRAP_VC, exc_vmm_communication); +DECLARE_IDTENTRY_HV(X86_TRAP_HV, exc_hv_injection); #endif #ifdef CONFIG_XEN_PV diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h index e9e2c3ba5923..0bd7dab676c5 100644 --- a/arch/x86/include/asm/page_64_types.h +++ b/arch/x86/include/asm/page_64_types.h @@ -29,6 +29,7 @@ #define IST_INDEX_DB 2 #define IST_INDEX_MCE 3 #define IST_INDEX_VC 4 +#define IST_INDEX_HV 5 /* * Set __PAGE_OFFSET to the most negative possible address + diff --git a/arch/x86/include/asm/trapnr.h b/arch/x86/include/asm/trapnr.h index f5d2325aa0b7..c6583631cecb 100644 --- a/arch/x86/include/asm/trapnr.h +++ b/arch/x86/include/asm/trapnr.h @@ -26,6 +26,7 @@ #define X86_TRAP_XF 19 /* SIMD Floating-Point Exception */ #define X86_TRAP_VE 20 /* Virtualization Exception */ #define X86_TRAP_CP 21 /* Control Protection Exception */ +#define X86_TRAP_HV 28 /* HV injected exception in SNP restricted mode */ #define X86_TRAP_VC 29 /* VMM Communication Exception */ #define X86_TRAP_IRET 32 /* IRET Exception */ diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 47ecfff2c83d..6795d3e517d6 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -16,6 +16,7 @@ asmlinkage __visible notrace struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs); void __init trap_init(void); asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *eregs); +asmlinkage __visible noinstr struct pt_regs *hv_switch_off_ist(struct pt_regs *eregs); #endif extern bool ibt_selftest(void); diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 8cd4126d8253..5bc44bcf6e48 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2172,6 +2172,7 @@ static inline void 
tss_setup_ist(struct tss_struct *tss) tss->x86_tss.ist[IST_INDEX_MCE] = __this_cpu_ist_top_va(MCE); /* Only mapped when SEV-ES is active */ tss->x86_tss.ist[IST_INDEX_VC] = __this_cpu_ist_top_va(VC); + tss->x86_tss.ist[IST_INDEX_HV] = __this_cpu_ist_top_va(HV); } #else /* CONFIG_X86_64 */ diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c index f05339fee778..6d8f8864810c 100644 --- a/arch/x86/kernel/dumpstack_64.c +++ b/arch/x86/kernel/dumpstack_64.c @@ -26,11 +26,14 @@ static const char * const exception_stack_names[] = { [ ESTACK_MCE ] = "#MC", [ ESTACK_VC ] = "#VC", [ ESTACK_VC2 ] = "#VC2", + [ ESTACK_HV ] = "#HV", + [ ESTACK_HV2 ] = "#HV2", + }; const char *stack_type_name(enum stack_type type) { - BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); + BUILD_BUG_ON(N_EXCEPTION_STACKS != 8); if (type == STACK_TYPE_TASK) return "TASK"; @@ -89,6 +92,8 @@ struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = { EPAGERANGE(MCE), EPAGERANGE(VC), EPAGERANGE(VC2), + EPAGERANGE(HV), + EPAGERANGE(HV2), }; static __always_inline bool in_exception_stack(unsigned long *stack, struct stack_info *info) @@ -98,7 +103,7 @@ static __always_inline bool in_exception_stack(unsigned long *stack, struct stac struct pt_regs *regs; unsigned int k; - BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); + BUILD_BUG_ON(N_EXCEPTION_STACKS != 8); begin = (unsigned long)__this_cpu_read(cea_exception_stacks); /* diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c index a58c6bc1cd68..48c0a7e1dbcb 100644 --- a/arch/x86/kernel/idt.c +++ b/arch/x86/kernel/idt.c @@ -113,6 +113,7 @@ static const __initconst struct idt_data def_idts[] = { #ifdef CONFIG_AMD_MEM_ENCRYPT ISTG(X86_TRAP_VC, asm_exc_vmm_communication, IST_INDEX_VC), + ISTG(X86_TRAP_HV, asm_exc_hv_injection, IST_INDEX_HV), #endif SYSG(X86_TRAP_OF, asm_exc_overflow), diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index b031244d6d2d..e25445de0957 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -2006,6 +2006,59 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) irqentry_exit_to_user_mode(regs); } +static bool hv_raw_handle_exception(struct pt_regs *regs) +{ + return false; +} + +static __always_inline bool on_hv_fallback_stack(struct pt_regs *regs) +{ + unsigned long sp = (unsigned long)regs; + + return (sp >= __this_cpu_ist_bottom_va(HV2) && sp < __this_cpu_ist_top_va(HV2)); +} + +DEFINE_IDTENTRY_HV_USER(exc_hv_injection) +{ + irqentry_enter_from_user_mode(regs); + instrumentation_begin(); + + if (!hv_raw_handle_exception(regs)) { + /* + * Do not kill the machine if user-space triggered the + * exception. Send SIGBUS instead and let user-space deal + * with it. 
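As a side note on the user-mode path described just above: the force_sig_fault() call that follows queues SIGBUS with si_code BUS_OBJERR for the interrupted task. The stand-alone user-space sketch below is illustrative only and not part of the patch; the handler name and messages are invented, and raise() merely simulates the delivery a real #HV-triggered SIGBUS would get from the kernel.

#define _GNU_SOURCE
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Report which SIGBUS si_code arrived; BUS_OBJERR is what
 * force_sig_fault(SIGBUS, BUS_OBJERR, ...) above would deliver. */
static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig;
    (void)ctx;

    /* Only async-signal-safe calls in a signal handler. */
    if (info->si_code == BUS_OBJERR)
        write(STDERR_FILENO, "SIGBUS: BUS_OBJERR\n", 19);
    else
        write(STDERR_FILENO, "SIGBUS: other si_code\n", 22);
    _exit(EXIT_FAILURE);
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = sigbus_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGBUS, &sa, NULL);

    raise(SIGBUS); /* simulate delivery; the real signal would come from the kernel */
    return 0;
}

A process that cannot recover would typically just let the default action kill it; installing a handler like this is only useful if user space can retry or report the failed operation.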
+ */ + force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0); + } + + instrumentation_end(); + irqentry_exit_to_user_mode(regs); +} + +DEFINE_IDTENTRY_HV_KERNEL(exc_hv_injection) +{ + irqentry_state_t irq_state; + + irq_state = irqentry_enter(regs); + instrumentation_begin(); + + if (!hv_raw_handle_exception(regs)) { + pr_emerg("PANIC: Unhandled #HV exception in kernel space\n"); + + /* Show some debug info */ + show_regs(regs); + + /* Ask hypervisor to sev_es_terminate */ + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SEV_ES_GEN_REQ); + + panic("Returned from Terminate-Request to Hypervisor\n"); + } + + instrumentation_end(); + irqentry_exit(regs, irq_state); +} + bool __init handle_vc_boot_ghcb(struct pt_regs *regs) { unsigned long exit_code = regs->orig_ax; diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d317dc3d06a3..5dca05d0fa38 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -905,6 +905,64 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r return regs_ret; } + +asmlinkage __visible noinstr struct pt_regs *hv_switch_off_ist(struct pt_regs *regs) +{ + unsigned long sp, *stack; + struct stack_info info; + struct pt_regs *regs_ret; + + /* + * In the SYSCALL entry path the RSP value comes from user-space - don't + * trust it and switch to the current kernel stack + */ + if (ip_within_syscall_gap(regs)) { + sp = this_cpu_read(pcpu_hot.top_of_stack); + goto sync; + } + + /* + * From here on the RSP value is trusted. Now check whether entry + * happened from a safe stack. Not safe are the entry or unknown stacks, + * use the fall-back stack instead in this case. + */ + sp = regs->sp; + stack = (unsigned long *)sp; + + /* + * We support nested #HV exceptions once the IST stack is + * switched out. The HV can always inject an #HV, but as per + * GHCB specs, the HV will not inject another #HV, if + * PendingEvent.NoFurtherSignal is set and we only clear this + * after switching out the IST stack and handling the current + * #HV. But there is still a window before the IST stack is + * switched out, where a malicious HV can inject nested #HV. + * The code below checks the interrupted stack to check if + * it is the IST stack, and if so panic as this is + * not supported and this nested #HV would have corrupted + * the iret frame of the previous #HV on the IST stack. + */ + if (get_stack_info_noinstr(stack, current, &info) && + (info.type == (STACK_TYPE_EXCEPTION + ESTACK_HV) || + info.type == (STACK_TYPE_EXCEPTION + ESTACK_HV2))) + panic("Nested #HV exception, HV IST corrupted, stack type = %d\n", info.type); + + if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY || + info.type > STACK_TYPE_EXCEPTION_LAST) + sp = __this_cpu_ist_top_va(HV2); +sync: + /* + * Found a safe stack - switch to it as if the entry didn't happen via + * IST stack. The code below only copies pt_regs, the real switch happens + * in assembly code. 
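The frame copy that the comment above refers to follows immediately in the diff and is easy to miss in this flattened form. The stand-alone C sketch below mimics only the arithmetic: align the chosen stack pointer down to 8 bytes, reserve room for one register frame, and copy the old frame there. struct fake_pt_regs, copy_frame_to() and fallback_stack are invented stand-ins; ALIGN_DOWN is a local macro mirroring the kernel one. This is not kernel code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ALIGN_DOWN(x, a) ((x) & ~((uintptr_t)(a) - 1))

/* Trimmed stand-in for the kernel's struct pt_regs. */
struct fake_pt_regs {
    uint64_t ip, sp, flags, ax, di;
};

static unsigned char fallback_stack[4096];

/* Mimic the tail of hv_switch_off_ist(): place a copy of *regs at the
 * top of the chosen stack and return a pointer to that copy; the real
 * stack switch itself happens later, in assembly. */
static struct fake_pt_regs *copy_frame_to(uintptr_t stack_top,
                                          const struct fake_pt_regs *regs)
{
    uintptr_t sp = ALIGN_DOWN(stack_top, 8) - sizeof(struct fake_pt_regs);
    struct fake_pt_regs *copy = (struct fake_pt_regs *)sp;

    memcpy(copy, regs, sizeof(*copy));
    return copy;
}

int main(void)
{
    struct fake_pt_regs regs = { .ip = 0x1000, .sp = 0x7fff0000, .flags = 0x202 };
    uintptr_t top = (uintptr_t)(fallback_stack + sizeof(fallback_stack));
    struct fake_pt_regs *moved = copy_frame_to(top, &regs);

    printf("frame copied to %p, ip=%#llx\n",
           (void *)moved, (unsigned long long)moved->ip);
    return 0;
}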
+ */ + sp = ALIGN_DOWN(sp, 8) - sizeof(*regs_ret); + + regs_ret = (struct pt_regs *)sp; + *regs_ret = *regs; + + return regs_ret; +} #endif asmlinkage __visible noinstr struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs) diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c index e91500a80963..97554fa0ff30 100644 --- a/arch/x86/mm/cpu_entry_area.c +++ b/arch/x86/mm/cpu_entry_area.c @@ -160,6 +160,8 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu) if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) { cea_map_stack(VC); cea_map_stack(VC2); + cea_map_stack(HV); + cea_map_stack(HV2); } } }
From patchwork Mon May 15 16:59:04 2023 X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241832 Received: from ubuntu-Virtual-Machine.corp.microsoft.com
([2001:4898:80e8:f:85bb:dfc8:391f:ff73]) by smtp.gmail.com with ESMTPSA id x13-20020a17090aa38d00b0024df6bbf5d8sm2151pjp.30.2023.05.15.09.59.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 May 2023 09:59:25 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V6 02/14] x86/sev: Add Check of #HV event in path Date: Mon, 15 May 2023 12:59:04 -0400 Message-Id: <20230515165917.1306922-3-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com> References: <20230515165917.1306922-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add check_hv_pending() and check_hv_pending_after_irq() to check queued #HV event when irq is disabled. Signed-off-by: Tianyu Lan --- arch/x86/entry/entry_64.S | 18 ++++++++++++++++ arch/x86/include/asm/irqflags.h | 14 +++++++++++- arch/x86/kernel/sev.c | 38 +++++++++++++++++++++++++++++++++ 3 files changed, 69 insertions(+), 1 deletion(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 653b1f10699b..147b850babf6 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -1019,6 +1019,15 @@ SYM_CODE_END(paranoid_entry) * R15 - old SPEC_CTRL */ SYM_CODE_START_LOCAL(paranoid_exit) +#ifdef CONFIG_AMD_MEM_ENCRYPT + /* + * If a #HV was delivered during execution and interrupts were + * disabled, then check if it can be handled before the iret + * (which may re-enable interrupts). + */ + mov %rsp, %rdi + call check_hv_pending +#endif UNWIND_HINT_REGS /* @@ -1143,6 +1152,15 @@ SYM_CODE_START(error_entry) SYM_CODE_END(error_entry) SYM_CODE_START_LOCAL(error_return) +#ifdef CONFIG_AMD_MEM_ENCRYPT + /* + * If a #HV was delivered during execution and interrupts were + * disabled, then check if it can be handled before the iret + * (which may re-enable interrupts). 
+ */ + mov %rsp, %rdi + call check_hv_pending +#endif UNWIND_HINT_REGS DEBUG_ENTRY_ASSERT_IRQS_OFF testb $3, CS(%rsp) diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h index 8c5ae649d2df..d09ec6d76591 100644 --- a/arch/x86/include/asm/irqflags.h +++ b/arch/x86/include/asm/irqflags.h @@ -11,6 +11,10 @@ /* * Interrupt control: */ +#ifdef CONFIG_AMD_MEM_ENCRYPT +void check_hv_pending(struct pt_regs *regs); +void check_hv_pending_irq_enable(void); +#endif /* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */ extern inline unsigned long native_save_fl(void); @@ -40,12 +44,20 @@ static __always_inline void native_irq_disable(void) static __always_inline void native_irq_enable(void) { asm volatile("sti": : :"memory"); +#ifdef CONFIG_AMD_MEM_ENCRYPT + check_hv_pending_irq_enable(); +#endif } static __always_inline void native_safe_halt(void) { mds_idle_clear_cpu_buffers(); - asm volatile("sti; hlt": : :"memory"); + asm volatile("sti": : :"memory"); + +#ifdef CONFIG_AMD_MEM_ENCRYPT + check_hv_pending_irq_enable(); +#endif + asm volatile("hlt": : :"memory"); } static __always_inline void native_halt(void) diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index e25445de0957..ff5eab48bfe2 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -181,6 +181,44 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist); } +static void do_exc_hv(struct pt_regs *regs) +{ + /* Handle #HV exception. */ +} + +void check_hv_pending(struct pt_regs *regs) +{ + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + if ((regs->flags & X86_EFLAGS_IF) == 0) + return; + + do_exc_hv(regs); +} + +void check_hv_pending_irq_enable(void) +{ + struct pt_regs regs; + + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + memset(®s, 0, sizeof(struct pt_regs)); + asm volatile("movl %%cs, %%eax;" : "=a" (regs.cs)); + asm volatile("movl %%ss, %%eax;" : "=a" (regs.ss)); + regs.orig_ax = 0xffffffff; + regs.flags = native_save_fl(); + + /* + * Disable irq when handle pending #HV events after + * re-enabling irq. 
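The helper just above fabricates a minimal pt_regs snapshot from the current CS and SS selectors plus RFLAGS before draining pending events. The x86-64 user-space snippet below shows the same style of snapshot; it is purely illustrative, struct reg_snapshot and take_snapshot() are invented names, and __readeflags() is a GCC/Clang intrinsic used here in place of the kernel's native_save_fl().

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h> /* __readeflags() */

struct reg_snapshot {
    uint64_t flags;
    uint32_t cs, ss;
};

/* Capture CS, SS and RFLAGS roughly the way the patch builds its
 * synthetic pt_regs: segment moves for the selectors, an RFLAGS read
 * for the flags word. */
static struct reg_snapshot take_snapshot(void)
{
    struct reg_snapshot s;

    __asm__ volatile("movl %%cs, %0" : "=r"(s.cs));
    __asm__ volatile("movl %%ss, %0" : "=r"(s.ss));
    s.flags = __readeflags();
    return s;
}

int main(void)
{
    struct reg_snapshot s = take_snapshot();

    printf("cs=%#x ss=%#x rflags=%#llx (IF=%llu)\n",
           s.cs, s.ss, (unsigned long long)s.flags,
           (unsigned long long)((s.flags >> 9) & 1));
    return 0;
}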
+ */ + asm volatile("cli" : : : "memory"); + do_exc_hv(®s); + asm volatile("sti" : : : "memory"); +} + void noinstr __sev_es_ist_exit(void) { unsigned long ist; From patchwork Mon May 15 16:59:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241836 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F27FC7EE26 for ; Mon, 15 May 2023 17:00:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243485AbjEORAF (ORCPT ); Mon, 15 May 2023 13:00:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243362AbjEOQ7h (ORCPT ); Mon, 15 May 2023 12:59:37 -0400 Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com [IPv6:2607:f8b0:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3F6C45FCA; Mon, 15 May 2023 09:59:29 -0700 (PDT) Received: by mail-pg1-x52d.google.com with SMTP id 41be03b00d2f7-53063897412so5658702a12.0; Mon, 15 May 2023 09:59:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684169968; x=1686761968; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Q5zMqzIM1Yd79e6/EeCYzET5K3g+P8Lp3kx3ilDsSeQ=; b=MfOWTFkL6ZCZsDxnxcb3AXR1O20muVNAxfI2x0k+KxTl4KDc/C4z0Mj/sPnOLeChke JWvnth9yJj+GRIxrr/vtlJT1czPy4t+Oh/rZ1xcz0GIH36DJzf9Ny6C8wx69DeqSeFQg HMt/xSe80tIHDet6DfHukSU2YZ3Bbz4dxn01Pr7xl1qWU/PZ3gYF+I2RfxnrG/X15niK EQqU/1DzEcFqEm1VWUGpL2pztV1pBsf8qpC2N/OUbLBKRPeFhwwgvyJlPNFfaKik/K94 +sKK2Cj4RO3kQcQ7dhN0U0iHYlbBm7VAOXW5RWaWBboUsJyEbUPk6qIe7iEs0aXGDRml NNzA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684169968; x=1686761968; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Q5zMqzIM1Yd79e6/EeCYzET5K3g+P8Lp3kx3ilDsSeQ=; b=SvZtXsa8Tf7d7e8uvroBFxzur4wVqiNiKY9ttwogehY+KONTzhiez+TMUYTrRZ6D88 yXGIQHCZJmK2rEktdy/kOSigjDR3nTXmwZiOKitO2wtQJxABxK+rL87yhjZLFV+Ta6Lq gmKVnGIyOo7+nhq/Ljy+JbGdEmp2XUH76q0GEH7CBAxyx/Cq8xehU1ty5L8+DW/tt45O 9q7L/tUxvccc7qWhgvSkGphVfwDXrwBRHW3JMNvw9e2okrC1riW55+I+dMa7DTcrHgiR mVYdhuCHFSHeZq7PUq2OoFFt2LLK7e0f9jUE9n8PMzbEpXeoakggXqY58A5RHiHQdOit 7Yjg== X-Gm-Message-State: AC+VfDzgJ7GnAQbysVBdllkQxz6K6pfz15PNRfffFvwtPefnH31swM2B ndqVuvBFRsrpdiPCVl0t98c= X-Google-Smtp-Source: ACHHUZ4cz5RJuEbwK/I7LzphkIRWDDgcmJTVyISxzKmbdXs6m3WM6ywqTUUNdjLqBCWPeR3PzQaZ7Q== X-Received: by 2002:a17:90a:fa5:b0:24d:f992:5286 with SMTP id 34-20020a17090a0fa500b0024df9925286mr23559697pjz.36.1684169968549; Mon, 15 May 2023 09:59:28 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:85bb:dfc8:391f:ff73]) by smtp.gmail.com with ESMTPSA id x13-20020a17090aa38d00b0024df6bbf5d8sm2151pjp.30.2023.05.15.09.59.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 May 2023 09:59:28 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, 
kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V6 03/14] x86/sev: Add AMD sev-snp enlightened guest support on hyperv Date: Mon, 15 May 2023 12:59:05 -0400 Message-Id: <20230515165917.1306922-4-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com> References: <20230515165917.1306922-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Enable #HV exception to handle interrupt requests from hypervisor. Co-developed-by: Lendacky Thomas Co-developed-by: Kalra Ashish Signed-off-by: Tianyu Lan --- Change since RFC V5: * Merge patch "x86/sev: Fix interrupt exit code paths from #HV exception" with this commit. Change since RFC V3: * Check NMI event when irq is disabled. * Remove redundant variable --- arch/x86/include/asm/idtentry.h | 12 +- arch/x86/include/asm/mem_encrypt.h | 2 + arch/x86/include/uapi/asm/svm.h | 4 + arch/x86/kernel/sev.c | 349 ++++++++++++++++++++++++----- arch/x86/kernel/traps.c | 2 + 5 files changed, 310 insertions(+), 59 deletions(-) diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index b0f3501b2767..867073ccf1d1 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -13,6 +13,12 @@ #include +#ifdef CONFIG_AMD_MEM_ENCRYPT +noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state); +#else +#define irqentry_exit_hv_cond(regs, state) irqentry_exit(regs, state) +#endif + /** * DECLARE_IDTENTRY - Declare functions for simple IDT entry points * No error code pushed by hardware @@ -201,7 +207,7 @@ __visible noinstr void func(struct pt_regs *regs, \ kvm_set_cpu_l1tf_flush_l1d(); \ run_irq_on_irqstack_cond(__##func, regs, vector); \ instrumentation_end(); \ - irqentry_exit(regs, state); \ + irqentry_exit_hv_cond(regs, state); \ } \ \ static noinline void __##func(struct pt_regs *regs, u32 vector) @@ -241,7 +247,7 @@ __visible noinstr void func(struct pt_regs *regs) \ kvm_set_cpu_l1tf_flush_l1d(); \ run_sysvec_on_irqstack_cond(__##func, regs); \ instrumentation_end(); \ - irqentry_exit(regs, state); \ + irqentry_exit_hv_cond(regs, state); \ } \ \ static noinline void __##func(struct pt_regs *regs) @@ -270,7 +276,7 @@ __visible noinstr void func(struct pt_regs *regs) \ __##func (regs); \ __irq_exit_raw(); \ instrumentation_end(); \ - irqentry_exit(regs, state); \ + irqentry_exit_hv_cond(regs, state); \ } \ \ static __always_inline void __##func(struct pt_regs *regs) diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index b7126701574c..9299caeca69f 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,6 +50,7 @@ void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, void __init mem_encrypt_free_decrypted_mem(void); void __init sev_es_init_vc_handling(void); +void 
__init sev_snp_init_hv_handling(void); #define __bss_decrypted __section(".bss..decrypted") @@ -73,6 +74,7 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { } static inline void sev_es_init_vc_handling(void) { } +static inline void sev_snp_init_hv_handling(void) { } static inline int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; } diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h index 80e1df482337..828d624a38cf 100644 --- a/arch/x86/include/uapi/asm/svm.h +++ b/arch/x86/include/uapi/asm/svm.h @@ -115,6 +115,10 @@ #define SVM_VMGEXIT_AP_CREATE_ON_INIT 0 #define SVM_VMGEXIT_AP_CREATE 1 #define SVM_VMGEXIT_AP_DESTROY 2 +#define SVM_VMGEXIT_HV_DOORBELL_PAGE 0x80000014 +#define SVM_VMGEXIT_GET_PREFERRED_HV_DOORBELL_PAGE 0 +#define SVM_VMGEXIT_SET_HV_DOORBELL_PAGE 1 +#define SVM_VMGEXIT_QUERY_HV_DOORBELL_PAGE 2 #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \ diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index ff5eab48bfe2..400ca555bd48 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -124,6 +124,165 @@ struct sev_config { static struct sev_config sev_cfg __read_mostly; +static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state); +static noinstr void __sev_put_ghcb(struct ghcb_state *state); +static int vmgexit_hv_doorbell_page(struct ghcb *ghcb, u64 op, u64 pa); +static void sev_snp_setup_hv_doorbell_page(struct ghcb *ghcb); + +union hv_pending_events { + u16 events; + struct { + u8 vector; + u8 nmi : 1; + u8 mc : 1; + u8 reserved1 : 5; + u8 no_further_signal : 1; + }; +}; + +struct sev_hv_doorbell_page { + union hv_pending_events pending_events; + u8 no_eoi_required; + u8 reserved2[61]; + u8 padding[4032]; +}; + +struct sev_snp_runtime_data { + struct sev_hv_doorbell_page hv_doorbell_page; + /* + * Indication that we are currently handling #HV events. + */ + bool hv_handling_events; +}; + +static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); + +static inline u64 sev_es_rd_ghcb_msr(void) +{ + return __rdmsr(MSR_AMD64_SEV_ES_GHCB); +} + +static __always_inline void sev_es_wr_ghcb_msr(u64 val) +{ + u32 low, high; + + low = (u32)(val); + high = (u32)(val >> 32); + + native_wrmsr(MSR_AMD64_SEV_ES_GHCB, low, high); +} + +struct sev_hv_doorbell_page *sev_snp_current_doorbell_page(void) +{ + return &this_cpu_read(snp_runtime_data)->hv_doorbell_page; +} + +static u8 sev_hv_pending(void) +{ + return sev_snp_current_doorbell_page()->pending_events.events; +} + +#define sev_hv_pending_nmi \ + sev_snp_current_doorbell_page()->pending_events.nmi + +static void hv_doorbell_apic_eoi_write(u32 reg, u32 val) +{ + if (xchg(&sev_snp_current_doorbell_page()->no_eoi_required, 0) & 0x1) + return; + + BUG_ON(reg != APIC_EOI); + apic->write(reg, val); +} + +static void do_exc_hv(struct pt_regs *regs) +{ + union hv_pending_events pending_events; + + /* Avoid nested entry. 
*/ + if (this_cpu_read(snp_runtime_data)->hv_handling_events) + return; + + this_cpu_read(snp_runtime_data)->hv_handling_events = true; + + while (sev_hv_pending()) { + pending_events.events = xchg( + &sev_snp_current_doorbell_page()->pending_events.events, + 0); + + if (pending_events.nmi) + exc_nmi(regs); + +#ifdef CONFIG_X86_MCE + if (pending_events.mc) + exc_machine_check(regs); +#endif + + if (!pending_events.vector) + goto out; + + if (pending_events.vector < FIRST_EXTERNAL_VECTOR) { + /* Exception vectors */ + WARN(1, "exception shouldn't happen\n"); + } else if (pending_events.vector == FIRST_EXTERNAL_VECTOR) { + sysvec_irq_move_cleanup(regs); + } else if (pending_events.vector == IA32_SYSCALL_VECTOR) { + WARN(1, "syscall shouldn't happen\n"); + } else if (pending_events.vector >= FIRST_SYSTEM_VECTOR) { + switch (pending_events.vector) { +#if IS_ENABLED(CONFIG_HYPERV) + case HYPERV_STIMER0_VECTOR: + sysvec_hyperv_stimer0(regs); + break; + case HYPERVISOR_CALLBACK_VECTOR: + sysvec_hyperv_callback(regs); + break; +#endif +#ifdef CONFIG_SMP + case RESCHEDULE_VECTOR: + sysvec_reschedule_ipi(regs); + break; + case IRQ_MOVE_CLEANUP_VECTOR: + sysvec_irq_move_cleanup(regs); + break; + case REBOOT_VECTOR: + sysvec_reboot(regs); + break; + case CALL_FUNCTION_SINGLE_VECTOR: + sysvec_call_function_single(regs); + break; + case CALL_FUNCTION_VECTOR: + sysvec_call_function(regs); + break; +#endif +#ifdef CONFIG_X86_LOCAL_APIC + case ERROR_APIC_VECTOR: + sysvec_error_interrupt(regs); + break; + case SPURIOUS_APIC_VECTOR: + sysvec_spurious_apic_interrupt(regs); + break; + case LOCAL_TIMER_VECTOR: + sysvec_apic_timer_interrupt(regs); + break; + case X86_PLATFORM_IPI_VECTOR: + sysvec_x86_platform_ipi(regs); + break; +#endif + case 0x0: + break; + default: + panic("Unexpected vector %d\n", vector); + unreachable(); + } + } else { + common_interrupt(regs, pending_events.vector); + } + } + +out: + this_cpu_read(snp_runtime_data)->hv_handling_events = false; +} + static __always_inline bool on_vc_stack(struct pt_regs *regs) { unsigned long sp = regs->sp; @@ -181,18 +340,19 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist); } -static void do_exc_hv(struct pt_regs *regs) -{ - /* Handle #HV exception. */ -} - void check_hv_pending(struct pt_regs *regs) { if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) return; - if ((regs->flags & X86_EFLAGS_IF) == 0) + /* Handle NMI when irq is disabled. */ + if ((regs->flags & X86_EFLAGS_IF) == 0) { + if (sev_hv_pending_nmi) { + exc_nmi(regs); + sev_hv_pending_nmi = 0; + } return; + } do_exc_hv(regs); } @@ -233,68 +393,35 @@ void noinstr __sev_es_ist_exit(void) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist); } -/* - * Nothing shall interrupt this code path while holding the per-CPU - * GHCB. The backup GHCB is only for NMIs interrupting this path. - * - * Callers must disable local interrupts around it. 
- */ -static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state) +static bool sev_restricted_injection_enabled(void) +{ + return sev_status & MSR_AMD64_SNP_RESTRICTED_INJ; +} + +void __init sev_snp_init_hv_handling(void) { struct sev_es_runtime_data *data; + struct ghcb_state state; struct ghcb *ghcb; + unsigned long flags; WARN_ON(!irqs_disabled()); + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP) || !sev_restricted_injection_enabled()) + return; data = this_cpu_read(runtime_data); - ghcb = &data->ghcb_page; - - if (unlikely(data->ghcb_active)) { - /* GHCB is already in use - save its contents */ - - if (unlikely(data->backup_ghcb_active)) { - /* - * Backup-GHCB is also already in use. There is no way - * to continue here so just kill the machine. To make - * panic() work, mark GHCBs inactive so that messages - * can be printed out. - */ - data->ghcb_active = false; - data->backup_ghcb_active = false; - - instrumentation_begin(); - panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use"); - instrumentation_end(); - } - - /* Mark backup_ghcb active before writing to it */ - data->backup_ghcb_active = true; - state->ghcb = &data->backup_ghcb; + local_irq_save(flags); - /* Backup GHCB content */ - *state->ghcb = *ghcb; - } else { - state->ghcb = NULL; - data->ghcb_active = true; - } + ghcb = __sev_get_ghcb(&state); - return ghcb; -} + sev_snp_setup_hv_doorbell_page(ghcb); -static inline u64 sev_es_rd_ghcb_msr(void) -{ - return __rdmsr(MSR_AMD64_SEV_ES_GHCB); -} - -static __always_inline void sev_es_wr_ghcb_msr(u64 val) -{ - u32 low, high; + __sev_put_ghcb(&state); - low = (u32)(val); - high = (u32)(val >> 32); + apic_set_eoi_write(hv_doorbell_apic_eoi_write); - native_wrmsr(MSR_AMD64_SEV_ES_GHCB, low, high); + local_irq_restore(flags); } static int vc_fetch_insn_kernel(struct es_em_ctxt *ctxt, @@ -555,6 +682,69 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt /* Include code shared with pre-decompression boot stage */ #include "sev-shared.c" +/* + * Nothing shall interrupt this code path while holding the per-CPU + * GHCB. The backup GHCB is only for NMIs interrupting this path. + * + * Callers must disable local interrupts around it. + */ +static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state) +{ + struct sev_es_runtime_data *data; + struct ghcb *ghcb; + + WARN_ON(!irqs_disabled()); + + data = this_cpu_read(runtime_data); + ghcb = &data->ghcb_page; + + if (unlikely(data->ghcb_active)) { + /* GHCB is already in use - save its contents */ + + if (unlikely(data->backup_ghcb_active)) { + /* + * Backup-GHCB is also already in use. There is no way + * to continue here so just kill the machine. To make + * panic() work, mark GHCBs inactive so that messages + * can be printed out. + */ + data->ghcb_active = false; + data->backup_ghcb_active = false; + + instrumentation_begin(); + panic("Unable to handle #VC exception! 
GHCB and Backup GHCB are already in use"); + instrumentation_end(); + } + + /* Mark backup_ghcb active before writing to it */ + data->backup_ghcb_active = true; + + state->ghcb = &data->backup_ghcb; + + /* Backup GHCB content */ + *state->ghcb = *ghcb; + } else { + state->ghcb = NULL; + data->ghcb_active = true; + } + + return ghcb; +} + +static void sev_snp_setup_hv_doorbell_page(struct ghcb *ghcb) +{ + u64 pa; + enum es_result ret; + + pa = __pa(sev_snp_current_doorbell_page()); + vc_ghcb_invalidate(ghcb); + ret = vmgexit_hv_doorbell_page(ghcb, + SVM_VMGEXIT_SET_HV_DOORBELL_PAGE, + pa); + if (ret != ES_OK) + panic("SEV-SNP: failed to set up #HV doorbell page"); +} + static noinstr void __sev_put_ghcb(struct ghcb_state *state) { struct sev_es_runtime_data *data; @@ -1283,6 +1473,7 @@ static void snp_register_per_cpu_ghcb(void) ghcb = &data->ghcb_page; snp_register_ghcb_early(__pa(ghcb)); + sev_snp_setup_hv_doorbell_page(ghcb); } void setup_ghcb(void) @@ -1322,6 +1513,11 @@ void setup_ghcb(void) snp_register_ghcb_early(__pa(&boot_ghcb_page)); } +int vmgexit_hv_doorbell_page(struct ghcb *ghcb, u64 op, u64 pa) +{ + return sev_es_ghcb_hv_call(ghcb, NULL, SVM_VMGEXIT_HV_DOORBELL_PAGE, op, pa); +} + #ifdef CONFIG_HOTPLUG_CPU static void sev_es_ap_hlt_loop(void) { @@ -1395,6 +1591,7 @@ static void __init alloc_runtime_data(int cpu) static void __init init_ghcb(int cpu) { struct sev_es_runtime_data *data; + struct sev_snp_runtime_data *snp_data; int err; data = per_cpu(runtime_data, cpu); @@ -1406,6 +1603,19 @@ static void __init init_ghcb(int cpu) memset(&data->ghcb_page, 0, sizeof(data->ghcb_page)); + snp_data = memblock_alloc(sizeof(*snp_data), PAGE_SIZE); + if (!snp_data) + panic("Can't allocate SEV-SNP runtime data"); + + err = early_set_memory_decrypted((unsigned long)&snp_data->hv_doorbell_page, + sizeof(snp_data->hv_doorbell_page)); + if (err) + panic("Can't map #HV doorbell pages unencrypted"); + + memset(&snp_data->hv_doorbell_page, 0, sizeof(snp_data->hv_doorbell_page)); + + per_cpu(snp_runtime_data, cpu) = snp_data; + data->ghcb_active = false; data->backup_ghcb_active = false; } @@ -2046,7 +2256,12 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) static bool hv_raw_handle_exception(struct pt_regs *regs) { - return false; + /* Clear the no_further_signal bit */ + sev_snp_current_doorbell_page()->pending_events.events &= 0x7fff; + + check_hv_pending(regs); + + return true; } static __always_inline bool on_hv_fallback_stack(struct pt_regs *regs) @@ -2360,3 +2575,25 @@ static int __init snp_init_platform_device(void) return 0; } device_initcall(snp_init_platform_device); + +noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state) +{ + /* + * Check whether this returns to user mode, if so and if + * we are currently executing the #HV handler then we don't + * want to follow the irqentry_exit_to_user_mode path as + * that can potentially cause the #HV handler to be + * preempted and rescheduled on another CPU. Rescheduled #HV + * handler on another cpu will cause interrupts to be handled + * on a different cpu than the injected one, causing + * invalid EOIs and missed/lost guest interrupts and + * corresponding hangs and/or per-cpu IRQs handled on + * non-intended cpu. 
+ */ + if (user_mode(regs) && + this_cpu_read(snp_runtime_data)->hv_handling_events) + return; + + /* follow normal interrupt return/exit path */ + irqentry_exit(regs, state); +} diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index 5dca05d0fa38..4f0f8dd2a5cb 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -1521,5 +1521,7 @@ void __init trap_init(void) cpu_init_exception_handling(); /* Setup traps as cpu_init() might #GP */ idt_setup_traps(); + sev_snp_init_hv_handling(); + cpu_init(); }
From patchwork Mon May 15 16:59:06 2023 X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241833 Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:85bb:dfc8:391f:ff73]) by smtp.gmail.com with ESMTPSA id
x13-20020a17090aa38d00b0024df6bbf5d8sm2151pjp.30.2023.05.15.09.59.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 May 2023 09:59:29 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V6 04/14] x86/sev: optimize system vector processing invoked from #HV exception Date: Mon, 15 May 2023 12:59:06 -0400 Message-Id: <20230515165917.1306922-5-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com> References: <20230515165917.1306922-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ashish Kalra Construct system vector table and dispatch system vector exceptions through sysvec_table from #HV exception handler instead of explicitly calling each system vector. The system vector table is created dynamically and is placed in a new named ELF section. Signed-off-by: Ashish Kalra --- arch/x86/entry/entry_64.S | 6 +++ arch/x86/kernel/sev.c | 70 +++++++++++++---------------------- arch/x86/kernel/vmlinux.lds.S | 7 ++++ 3 files changed, 38 insertions(+), 45 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 147b850babf6..f86b319d0a9e 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -419,6 +419,12 @@ SYM_CODE_START(\asmsym) _ASM_NOKPROBE(\asmsym) SYM_CODE_END(\asmsym) + .if \vector >= FIRST_SYSTEM_VECTOR && \vector < NR_VECTORS + .section .system_vectors, "aw" + .byte \vector + .quad \cfunc + .previous + .endif .endm /* diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 400ca555bd48..ac3d758670b3 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -157,6 +157,16 @@ struct sev_snp_runtime_data { static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); +static void (*sysvec_table[NR_VECTORS - FIRST_SYSTEM_VECTOR]) + (struct pt_regs *regs) __ro_after_init; + +struct sysvec_entry { + unsigned char vector; + void (*sysvec_func)(struct pt_regs *regs); +} __packed; + +extern struct sysvec_entry __system_vectors[], __system_vectors_end[]; + static inline u64 sev_es_rd_ghcb_msr(void) { return __rdmsr(MSR_AMD64_SEV_ES_GHCB); @@ -228,51 +238,11 @@ static void do_exc_hv(struct pt_regs *regs) } else if (pending_events.vector == IA32_SYSCALL_VECTOR) { WARN(1, "syscall shouldn't happen\n"); } else if (pending_events.vector >= FIRST_SYSTEM_VECTOR) { - switch (pending_events.vector) { -#if IS_ENABLED(CONFIG_HYPERV) - case HYPERV_STIMER0_VECTOR: - sysvec_hyperv_stimer0(regs); - break; - case HYPERVISOR_CALLBACK_VECTOR: - sysvec_hyperv_callback(regs); - break; -#endif -#ifdef CONFIG_SMP - case RESCHEDULE_VECTOR: - 
sysvec_reschedule_ipi(regs); - break; - case IRQ_MOVE_CLEANUP_VECTOR: - sysvec_irq_move_cleanup(regs); - break; - case REBOOT_VECTOR: - sysvec_reboot(regs); - break; - case CALL_FUNCTION_SINGLE_VECTOR: - sysvec_call_function_single(regs); - break; - case CALL_FUNCTION_VECTOR: - sysvec_call_function(regs); - break; -#endif -#ifdef CONFIG_X86_LOCAL_APIC - case ERROR_APIC_VECTOR: - sysvec_error_interrupt(regs); - break; - case SPURIOUS_APIC_VECTOR: - sysvec_spurious_apic_interrupt(regs); - break; - case LOCAL_TIMER_VECTOR: - sysvec_apic_timer_interrupt(regs); - break; - case X86_PLATFORM_IPI_VECTOR: - sysvec_x86_platform_ipi(regs); - break; -#endif - case 0x0: - break; - default: - panic("Unexpected vector %d\n", vector); - unreachable(); + if (!(sysvec_table[pending_events.vector - FIRST_SYSTEM_VECTOR])) { + WARN(1, "system vector entry 0x%x is NULL\n", + pending_events.vector); + } else { + (*sysvec_table[pending_events.vector - FIRST_SYSTEM_VECTOR])(regs); } } else { common_interrupt(regs, pending_events.vector); @@ -398,6 +368,14 @@ static bool sev_restricted_injection_enabled(void) return sev_status & MSR_AMD64_SNP_RESTRICTED_INJ; } +static void __init construct_sysvec_table(void) +{ + struct sysvec_entry *p; + + for (p = __system_vectors; p < __system_vectors_end; p++) + sysvec_table[p->vector - FIRST_SYSTEM_VECTOR] = p->sysvec_func; +} + void __init sev_snp_init_hv_handling(void) { struct sev_es_runtime_data *data; @@ -422,6 +400,8 @@ void __init sev_snp_init_hv_handling(void) apic_set_eoi_write(hv_doorbell_apic_eoi_write); local_irq_restore(flags); + + construct_sysvec_table(); } static int vc_fetch_insn_kernel(struct es_em_ctxt *ctxt, diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 25f155205770..c37165d8e877 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -338,6 +338,13 @@ SECTIONS *(.altinstr_replacement) } + . = ALIGN(8); + .system_vectors : AT(ADDR(.system_vectors) - LOAD_OFFSET) { + __system_vectors = .; + *(.system_vectors) + __system_vectors_end = .; + } + . 
= ALIGN(8); .apicdrivers : AT(ADDR(.apicdrivers) - LOAD_OFFSET) { __apicdrivers = .;
From patchwork Mon May 15 16:59:07 2023 X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241837 From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org,
ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V6 05/14] x86/hyperv: Add sev-snp enlightened guest static key Date: Mon, 15 May 2023 12:59:07 -0400 Message-Id: <20230515165917.1306922-6-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com> References: <20230515165917.1306922-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Introduce static key isolation_type_en_snp for enlightened sev-snp guest check. Signed-off-by: Tianyu Lan --- Change since RFC-v3: * Remove some Hyper-V specific config setting --- arch/x86/hyperv/ivm.c | 11 +++++++++++ arch/x86/include/asm/mshyperv.h | 3 +++ arch/x86/kernel/cpu/mshyperv.c | 9 +++++++-- drivers/hv/hv_common.c | 6 ++++++ 4 files changed, 27 insertions(+), 2 deletions(-) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 127d5b7b63de..368b2731950e 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -409,3 +409,14 @@ bool hv_isolation_type_snp(void) { return static_branch_unlikely(&isolation_type_snp); } + +DEFINE_STATIC_KEY_FALSE(isolation_type_en_snp); +/* + * hv_isolation_type_en_snp - Check system runs in the AMD SEV-SNP based + * isolation enlightened VM. 
+ */ +bool hv_isolation_type_en_snp(void) +{ + return static_branch_unlikely(&isolation_type_en_snp); +} + diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index b445e252aa83..97d117ec95c4 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -26,6 +26,7 @@ union hv_ghcb; DECLARE_STATIC_KEY_FALSE(isolation_type_snp); +DECLARE_STATIC_KEY_FALSE(isolation_type_en_snp); typedef int (*hyperv_fill_flush_list_func)( struct hv_guest_mapping_flush_list *flush, @@ -45,6 +46,8 @@ extern void *hv_hypercall_pg; extern u64 hv_current_partition_id; +extern bool hv_isolation_type_en_snp(void); + extern union hv_ghcb * __percpu *hv_ghcb_pg; int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index c7969e806c64..63a2bfbfe701 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -402,8 +402,12 @@ static void __init ms_hyperv_init_platform(void) pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n", ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b); - if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP) + + if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) { + static_branch_enable(&isolation_type_en_snp); + } else if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP) { static_branch_enable(&isolation_type_snp); + } } if (hv_max_functions_eax >= HYPERV_CPUID_NESTED_FEATURES) { @@ -473,7 +477,8 @@ static void __init ms_hyperv_init_platform(void) #if IS_ENABLED(CONFIG_HYPERV) if ((hv_get_isolation_type() == HV_ISOLATION_TYPE_VBS) || - (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP)) + (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP + && !cc_platform_has(CC_ATTR_GUEST_SEV_SNP))) hv_vtom_init(); /* * Setup the hook to get control post apic initialization. 
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 64f9ceca887b..179bc5f5bf52 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -502,6 +502,12 @@ bool __weak hv_isolation_type_snp(void) } EXPORT_SYMBOL_GPL(hv_isolation_type_snp); +bool __weak hv_isolation_type_en_snp(void) +{ + return false; +} +EXPORT_SYMBOL_GPL(hv_isolation_type_en_snp); + void __weak hv_setup_vmbus_handler(void (*handler)(void)) { }
From patchwork Mon May 15 16:59:08 2023
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 13241835
From: Tianyu Lan
Subject: [RFC PATCH V6 06/14] x86/hyperv: Mark Hyper-V vp assist page unencrypted in SEV-SNP enlightened guest
Date: Mon, 15 May 2023 12:59:08 -0400
Message-Id: <20230515165917.1306922-7-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

hv vp assist page needs to be shared between SEV-SNP guest and Hyper-V. So mark the page unencrypted in the SEV-SNP guest.

Signed-off-by: Tianyu Lan
---
arch/x86/hyperv/hv_init.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index a5f9474f08e1..9f3e2d71d015 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -113,6 +114,11 @@ static int hv_cpu_init(unsigned int cpu) } if (!WARN_ON(!(*hvp))) { + if (hv_isolation_type_en_snp()) { + WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1)); + memset(*hvp, 0, PAGE_SIZE); + } + msr.enable = 1; wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64); }
From patchwork Mon May 15 16:59:09 2023
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 13241834
From: Tianyu Lan
Subject: [RFC PATCH V6 07/14] x86/hyperv: Set Virtual Trust Level in VMBus init message
Date: Mon, 15 May 2023 12:59:09 -0400
Message-Id: <20230515165917.1306922-8-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

sev-snp guest provides vtl (Virtual Trust Level) and gets it from hyperv hvcall via HVCALL_GET_VP_REGISTERS. Set target vtl in the VMBus init message.

Signed-off-by: Tianyu Lan
---
Change since RFC v4:
* Use struct_size to calculate array size.
* Fix some coding style Change since RFC v3: * Use the standard helper functions to check hypercall result * Fix coding style Change since RFC v2: * Rename get_current_vtl() to get_vtl() * Fix some coding style issues --- arch/x86/hyperv/hv_init.c | 36 ++++++++++++++++++++++++++++++ arch/x86/include/asm/hyperv-tlfs.h | 7 ++++++ drivers/hv/connection.c | 1 + include/asm-generic/mshyperv.h | 1 + include/linux/hyperv.h | 4 ++-- 5 files changed, 47 insertions(+), 2 deletions(-) diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index 9f3e2d71d015..331b855314b7 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -384,6 +384,40 @@ static void __init hv_get_partition_id(void) local_irq_restore(flags); } +static u8 __init get_vtl(void) +{ + u64 control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_REGISTERS; + struct hv_get_vp_registers_input *input; + struct hv_get_vp_registers_output *output; + u64 vtl = 0; + u64 ret; + unsigned long flags; + + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + output = (struct hv_get_vp_registers_output *)input; + if (!input) { + local_irq_restore(flags); + goto done; + } + + memset(input, 0, struct_size(input, element, 1)); + input->header.partitionid = HV_PARTITION_ID_SELF; + input->header.vpindex = HV_VP_INDEX_SELF; + input->header.inputvtl = 0; + input->element[0].name0 = HV_X64_REGISTER_VSM_VP_STATUS; + + ret = hv_do_hypercall(control, input, output); + if (hv_result_success(ret)) + vtl = output->as64.low & HV_X64_VTL_MASK; + else + pr_err("Hyper-V: failed to get VTL! %lld", ret); + local_irq_restore(flags); + +done: + return vtl; +} + /* * This function is to be invoked early in the boot sequence after the * hypervisor has been detected. @@ -512,6 +546,8 @@ void __init hyperv_init(void) /* Query the VMs extended capability once, so that it can be cached. */ hv_query_ext_cap(0); + /* Find the VTL */ + ms_hyperv.vtl = get_vtl(); return; clean_guest_os_id: diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h index cea95dcd27c2..4bf0b315b0ce 100644 --- a/arch/x86/include/asm/hyperv-tlfs.h +++ b/arch/x86/include/asm/hyperv-tlfs.h @@ -301,6 +301,13 @@ enum hv_isolation_type { #define HV_X64_MSR_TIME_REF_COUNT HV_REGISTER_TIME_REF_COUNT #define HV_X64_MSR_REFERENCE_TSC HV_REGISTER_REFERENCE_TSC +/* + * Registers are only accessible via HVCALL_GET_VP_REGISTERS hvcall and + * there is not associated MSR address. 
+ */ +#define HV_X64_REGISTER_VSM_VP_STATUS 0x000D0003 +#define HV_X64_VTL_MASK GENMASK(3, 0) + /* Hyper-V memory host visibility */ enum hv_mem_host_visibility { VMBUS_PAGE_NOT_VISIBLE = 0, diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index 5978e9dbc286..02b54f85dc60 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -98,6 +98,7 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version) */ if (version >= VERSION_WIN10_V5) { msg->msg_sint = VMBUS_MESSAGE_SINT; + msg->msg_vtl = ms_hyperv.vtl; vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID_4; } else { msg->interrupt_page = virt_to_phys(vmbus_connection.int_page); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index 402a8c1c202d..3052130ba4ef 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -48,6 +48,7 @@ struct ms_hyperv_info { }; }; u64 shared_gpa_boundary; + u8 vtl; }; extern struct ms_hyperv_info ms_hyperv; extern bool hv_nested; diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index bfbc37ce223b..1f2bfec4abde 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -665,8 +665,8 @@ struct vmbus_channel_initiate_contact { u64 interrupt_page; struct { u8 msg_sint; - u8 padding1[3]; - u32 padding2; + u8 msg_vtl; + u8 reserved[6]; }; }; u64 monitor_page1;
From patchwork Mon May 15 16:59:10 2023
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 13241839
From: Tianyu Lan
Subject: [RFC PATCH V6 08/14] x86/hyperv: Use vmmcall to implement Hyper-V hypercall in sev-snp enlightened guest
Date: Mon, 15 May 2023 12:59:10 -0400
Message-Id: <20230515165917.1306922-9-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

In sev-snp enlightened guest, Hyper-V hypercall needs to use vmmcall to trigger vmexit and notify hypervisor to handle hypercall request.
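For illustration only, here is a minimal sketch of the VMMCALL-based fast-hypercall path described above, modeled on the shape of the existing _hv_do_fast_hypercall8(); the function name is hypothetical and this is not the patch itself. The control word is passed in RCX, the immediate argument in RDX, and the hypervisor returns the status in RAX.

static inline u64 hv_snp_fast_hypercall8_sketch(u64 control, u64 input1)
{
	u64 hv_status;

	/* SEV-SNP enlightened guest: VMMCALL exits directly to the hypervisor. */
	__asm__ __volatile__("vmmcall"
			     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
			       "+c" (control), "+d" (input1)
			     :: "cc", "r8", "r9", "r10", "r11");

	return hv_status;
}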
Signed-off-by: Tianyu Lan --- Change since RFC V2: * Fix indentation style --- arch/x86/include/asm/mshyperv.h | 44 ++++++++++++++++++++++++--------- 1 file changed, 33 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 97d117ec95c4..939373791249 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -61,16 +61,25 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output) u64 hv_status; #ifdef CONFIG_X86_64 - if (!hv_hypercall_pg) - return U64_MAX; + if (hv_isolation_type_en_snp()) { + __asm__ __volatile__("mov %4, %%r8\n" + "vmmcall" + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input_address) + : "r" (output_address) + : "cc", "memory", "r8", "r9", "r10", "r11"); + } else { + if (!hv_hypercall_pg) + return U64_MAX; - __asm__ __volatile__("mov %4, %%r8\n" - CALL_NOSPEC - : "=a" (hv_status), ASM_CALL_CONSTRAINT, - "+c" (control), "+d" (input_address) - : "r" (output_address), - THUNK_TARGET(hv_hypercall_pg) - : "cc", "memory", "r8", "r9", "r10", "r11"); + __asm__ __volatile__("mov %4, %%r8\n" + CALL_NOSPEC + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input_address) + : "r" (output_address), + THUNK_TARGET(hv_hypercall_pg) + : "cc", "memory", "r8", "r9", "r10", "r11"); + } #else u32 input_address_hi = upper_32_bits(input_address); u32 input_address_lo = lower_32_bits(input_address); @@ -104,7 +113,13 @@ static inline u64 _hv_do_fast_hypercall8(u64 control, u64 input1) u64 hv_status; #ifdef CONFIG_X86_64 - { + if (hv_isolation_type_en_snp()) { + __asm__ __volatile__( + "vmmcall" + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input1) + :: "cc", "r8", "r9", "r10", "r11"); + } else { __asm__ __volatile__(CALL_NOSPEC : "=a" (hv_status), ASM_CALL_CONSTRAINT, "+c" (control), "+d" (input1) @@ -149,7 +164,14 @@ static inline u64 _hv_do_fast_hypercall16(u64 control, u64 input1, u64 input2) u64 hv_status; #ifdef CONFIG_X86_64 - { + if (hv_isolation_type_en_snp()) { + __asm__ __volatile__("mov %4, %%r8\n" + "vmmcall" + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input1) + : "r" (input2) + : "cc", "r8", "r9", "r10", "r11"); + } else { __asm__ __volatile__("mov %4, %%r8\n" CALL_NOSPEC : "=a" (hv_status), ASM_CALL_CONSTRAINT, From patchwork Mon May 15 16:59:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241838 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4E1C6C7EE45 for ; Mon, 15 May 2023 17:00:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243476AbjEORAM (ORCPT ); Mon, 15 May 2023 13:00:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40830 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243408AbjEOQ7v (ORCPT ); Mon, 15 May 2023 12:59:51 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 052337AA6; Mon, 15 May 2023 09:59:39 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id 98e67ed59e1d1-24df4ef05d4so11071665a91.2; Mon, 15 May 2023 09:59:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
From: Tianyu Lan
Subject: [RFC PATCH V6 09/14] clocksource/drivers/hyper-v: decrypt hyperv tsc page in sev-snp enlightened guest
Date: Mon, 15 May 2023 12:59:11 -0400
Message-Id: <20230515165917.1306922-10-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V tsc page is shared with hypervisor and it should be decrypted in sev-snp enlightened guest when it's used.
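As a minimal sketch of the mechanism used here (the variable name below is hypothetical, not part of the patch): __bss_decrypted places an object in the .bss..decrypted section, which is mapped shared/unencrypted from early boot, so a page that the hypervisor must read never needs to be converted at runtime. The same attribute is used later in this series for the AP startup input page.

#include <asm/mem_encrypt.h>	/* __bss_decrypted */
#include <asm/page.h>		/* PAGE_SIZE */

/* A page-sized, page-aligned buffer that stays shared with the host. */
static u8 example_shared_page[PAGE_SIZE] __bss_decrypted __aligned(PAGE_SIZE);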
Signed-off-by: Tianyu Lan
---
Change since RFC V2:
* Change the Subject line prefix
---
drivers/clocksource/hyperv_timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c index bcd9042a0c9f..66e29a19770b 100644 --- a/drivers/clocksource/hyperv_timer.c +++ b/drivers/clocksource/hyperv_timer.c @@ -376,7 +376,7 @@ EXPORT_SYMBOL_GPL(hv_stimer_global_cleanup); static union { struct ms_hyperv_tsc_page page; u8 reserved[PAGE_SIZE]; -} tsc_pg __aligned(PAGE_SIZE); +} tsc_pg __bss_decrypted __aligned(PAGE_SIZE); static struct ms_hyperv_tsc_page *tsc_page = &tsc_pg.page; static unsigned long tsc_pfn;
From patchwork Mon May 15 16:59:12 2023
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 13241840
From: Tianyu Lan
Subject: [RFC PATCH V6 10/14] hv: vmbus: Mask VMBus pages unencrypted for sev-snp enlightened guest
Date: Mon, 15 May 2023 12:59:12 -0400
Message-Id: <20230515165917.1306922-11-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

VMBus post msg, synic event and message pages need to be shared with the hypervisor, so mark these pages unencrypted in the sev-snp guest.

Signed-off-by: Tianyu Lan
---
Change since RFC V4:
* Fix encrypt and free page order.
Change since RFC V3:
* Set encrypt page back in the hv_synic_free()
Change since RFC V2:
* Fix error in the error code path and encrypt pages correctly when decryption failure happens.
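The ordering fix noted in the change log can be summarized by the following hypothetical helpers (a sketch assuming set_memory_decrypted()/set_memory_encrypted() may fail, not the literal patch): a page shared with the hypervisor is decrypted right after allocation and cleared, and it is only returned to the allocator once it has been successfully re-encrypted, so a page that is still mapped shared is never freed.

#include <linux/gfp.h>
#include <linux/set_memory.h>
#include <linux/string.h>

static void *example_alloc_shared_page(void)
{
	void *page = (void *)get_zeroed_page(GFP_KERNEL);

	if (!page)
		return NULL;

	/* Share the page with the hypervisor before first use. */
	if (set_memory_decrypted((unsigned long)page, 1)) {
		free_page((unsigned long)page);
		return NULL;
	}

	/* Flipping the C-bit scrambles the contents, so clear them again. */
	memset(page, 0, PAGE_SIZE);
	return page;
}

static void example_free_shared_page(void *page)
{
	if (!page)
		return;

	/* Only hand the page back once it is private (encrypted) again. */
	if (!set_memory_encrypted((unsigned long)page, 1))
		free_page((unsigned long)page);
}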
--- drivers/hv/hv.c | 37 ++++++++++++++++++++++++++++++++++--- 1 file changed, 34 insertions(+), 3 deletions(-) diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index de6708dbe0df..d29bbf0c7108 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "hyperv_vmbus.h" /* The one and only */ @@ -78,7 +79,7 @@ int hv_post_message(union hv_connection_id connection_id, int hv_synic_alloc(void) { - int cpu; + int cpu, ret; struct hv_per_cpu_context *hv_cpu; /* @@ -123,9 +124,29 @@ int hv_synic_alloc(void) goto err; } } + + if (hv_isolation_type_en_snp()) { + ret = set_memory_decrypted((unsigned long) + hv_cpu->synic_message_page, 1); + if (ret) + goto err; + + ret = set_memory_decrypted((unsigned long) + hv_cpu->synic_event_page, 1); + if (ret) + goto err_decrypt_event_page; + + memset(hv_cpu->synic_message_page, 0, PAGE_SIZE); + memset(hv_cpu->synic_event_page, 0, PAGE_SIZE); + } } return 0; + +err_decrypt_event_page: + set_memory_encrypted((unsigned long) + hv_cpu->synic_message_page, 1); + err: /* * Any memory allocations that succeeded will be freed when @@ -143,8 +164,18 @@ void hv_synic_free(void) struct hv_per_cpu_context *hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu); - free_page((unsigned long)hv_cpu->synic_event_page); - free_page((unsigned long)hv_cpu->synic_message_page); + if (hv_isolation_type_en_snp()) { + if (!set_memory_encrypted((unsigned long) + hv_cpu->synic_message_page, 1)) + free_page((unsigned long)hv_cpu->synic_event_page); + + if (!set_memory_encrypted((unsigned long) + hv_cpu->synic_event_page, 1)) + free_page((unsigned long)hv_cpu->synic_message_page); + } else { + free_page((unsigned long)hv_cpu->synic_event_page); + free_page((unsigned long)hv_cpu->synic_message_page); + } } kfree(hv_context.hv_numa_map); From patchwork Mon May 15 16:59:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2428CC77B75 for ; Mon, 15 May 2023 17:00:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243504AbjEORAT (ORCPT ); Mon, 15 May 2023 13:00:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243426AbjEOQ7x (ORCPT ); Mon, 15 May 2023 12:59:53 -0400 Received: from mail-pg1-x534.google.com (mail-pg1-x534.google.com [IPv6:2607:f8b0:4864:20::534]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E11E17ED1; Mon, 15 May 2023 09:59:42 -0700 (PDT) Received: by mail-pg1-x534.google.com with SMTP id 41be03b00d2f7-52079a12451so9362365a12.3; Mon, 15 May 2023 09:59:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684169982; x=1686761982; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=z8lQptbWtTMcCpIUwC+SzQjIl5Lzr5bz7ZBsqcGRMAU=; b=ggG2ucog+cO9MP0g29R++Jl4Q999l/IlWujCrBvupWsv1Xv8zdI5xNVnlRkuVFQ1c5 Zbeq83nk3Mk20m2Zi1QdEzAEVtxBKk6kmF8Z6ivlSN6bNfFaWZap37KfrRf5vwxeatG2 XNu/8yAGK2LXSUeeEOnvGaEM1pWSAV0F1nJ4h0Q03yGZT5fC0nCKCNYA9/2gSzEoYMoU GOzSShy87dmcrIvnvF10FC9CcC1acdHlDmE/LQIWWMcv4kA1mjNQM179dLO8yz2Z7ajU 
From: Tianyu Lan
Subject: [RFC PATCH V6 11/14] drivers: hv: Decrypt percpu hvcall input arg page in sev-snp enlightened guest
Date: Mon, 15 May 2023 12:59:13 -0400
Message-Id: <20230515165917.1306922-12-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

The hypervisor needs to access the hypercall input arg page, so the guest should decrypt the page.

Signed-off-by: Tianyu Lan
---
Change since RFC V4:
* Use pgcount to free input arg page
Change since RFC V3:
* Use pgcount to decrypt memory.
Change since RFC V2:
* Set inputarg to be zero after kfree()
* Do not free mem when fail to encrypt mem in the hv_common_cpu_die().
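A compact sketch of the error handling described in the change log (hypothetical helper name; pgcount and the per-cpu pointer follow the shape of hv_common_cpu_init()): if the per-cpu hypercall input page cannot be decrypted it is freed and the pointer cleared, so an encrypted page is never handed to the hypervisor later.

#include <linux/set_memory.h>
#include <linux/slab.h>
#include <linux/string.h>

static int example_decrypt_hvcall_inputarg(void **inputarg, int pgcount)
{
	int ret = set_memory_decrypted((unsigned long)*inputarg, pgcount);

	if (ret) {
		/* Do not leave a stale pointer to a half-converted page. */
		kfree(*inputarg);
		*inputarg = NULL;
		return ret;
	}

	/* The freshly shared mapping starts out with scrambled contents. */
	memset(*inputarg, 0x00, pgcount * PAGE_SIZE);
	return 0;
}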
--- drivers/hv/hv_common.c | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 179bc5f5bf52..15d3054f3440 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include @@ -359,6 +360,7 @@ int hv_common_cpu_init(unsigned int cpu) u64 msr_vp_index; gfp_t flags; int pgcount = hv_root_partition ? 2 : 1; + int ret; /* hv_cpu_init() can be called with IRQs disabled from hv_resume() */ flags = irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL; @@ -368,6 +370,17 @@ int hv_common_cpu_init(unsigned int cpu) if (!(*inputarg)) return -ENOMEM; + if (hv_isolation_type_en_snp()) { + ret = set_memory_decrypted((unsigned long)*inputarg, pgcount); + if (ret) { + kfree(*inputarg); + *inputarg = NULL; + return ret; + } + + memset(*inputarg, 0x00, pgcount * PAGE_SIZE); + } + if (hv_root_partition) { outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg); *outputarg = (char *)(*inputarg) + HV_HYP_PAGE_SIZE; @@ -387,6 +400,7 @@ int hv_common_cpu_die(unsigned int cpu) { unsigned long flags; void **inputarg, **outputarg; + int pgcount = hv_root_partition ? 2 : 1; void *mem; local_irq_save(flags); @@ -402,7 +416,12 @@ int hv_common_cpu_die(unsigned int cpu) local_irq_restore(flags); - kfree(mem); + if (hv_isolation_type_en_snp()) { + if (!set_memory_encrypted((unsigned long)mem, pgcount)) + kfree(mem); + } else { + kfree(mem); + } return 0; } From patchwork Mon May 15 16:59:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241842 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 090F0C7EE2D for ; Mon, 15 May 2023 17:01:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243455AbjEORBV (ORCPT ); Mon, 15 May 2023 13:01:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243446AbjEOQ7y (ORCPT ); Mon, 15 May 2023 12:59:54 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2C46D7EF2; Mon, 15 May 2023 09:59:44 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id 98e67ed59e1d1-24e1d272b09so9579753a91.1; Mon, 15 May 2023 09:59:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684169983; x=1686761983; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=uuqW9g3Kzh89QsBclwPEw5z+GQNNv44ubXwM3rxo9bs=; b=WjhSLyyfkZ+DmhKUnLLxj+sUqsbLHlg9e06E2EbfzP29MJXAGncaO92k5DAYyAP6bn KZcb8ZuHEwPfQchpNDnXBz7jhj9a2kqefoBjlnd1GIEQBnykqsapQMfgs4vg6dMMYjVb wRspvsd6P2Yj9U8Rn3esOWk1RmHky++kBsCtGA0ixK3JF+eTOYYxTCqmMnQ5E8s0SiNB Dsly+S03ADlhdWwcYmpagB9yjK81MPy7dB5jBWAksBZUr/M/PHWpghpJnzwkL6KSKyhW 7Z5Rq3UFxNA3G0K13qwGR4MDJCKwNvxh7AKGrsHcTdLxqiZuq0hAckl6m8KItx7Pk5MX OZ0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684169983; x=1686761983; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
From: Tianyu Lan
Subject: [RFC PATCH V6 12/14] x86/hyperv: Initialize cpu and memory for sev-snp enlightened guest
Date: Mon, 15 May 2023 12:59:14 -0400
Message-Id: <20230515165917.1306922-13-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

Read processor and memory info from specific addresses which are populated by Hyper-V. Initialize smp cpu related ops, pvalidate system memory and add it into the e820 table.

Signed-off-by: Tianyu Lan
---
Change since RFCv5:
* Fix getting processor num in the hv_snp_get_smp_config() when early is false.
Change since RFCv4: * Add mem info addr to get mem layout info --- arch/x86/hyperv/ivm.c | 87 +++++++++++++++++++++++++++++++++ arch/x86/include/asm/mshyperv.h | 17 +++++++ arch/x86/kernel/cpu/mshyperv.c | 3 ++ 3 files changed, 107 insertions(+) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 368b2731950e..85e4378f052f 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -17,6 +17,11 @@ #include #include #include +#include +#include +#include +#include +#include #ifdef CONFIG_AMD_MEM_ENCRYPT @@ -57,6 +62,8 @@ union hv_ghcb { static u16 hv_ghcb_version __ro_after_init; +static u32 processor_count; + u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size) { union hv_ghcb *hv_ghcb; @@ -356,6 +363,86 @@ static bool hv_is_private_mmio(u64 addr) return false; } +static __init void hv_snp_get_smp_config(unsigned int early) +{ + if (early) + return; + + /* + * There is no firmware and ACPI MADT table spport in + * in the Hyper-V SEV-SNP enlightened guest. Set smp + * related config variable. + */ + while (num_processors < processor_count) { + early_per_cpu(x86_cpu_to_apicid, num_processors) = num_processors; + early_per_cpu(x86_bios_cpu_apicid, num_processors) = num_processors; + physid_set(num_processors, phys_cpu_present_map); + set_cpu_possible(num_processors, true); + set_cpu_present(num_processors, true); + num_processors++; + } +} + +__init void hv_sev_init_mem_and_cpu(void) +{ + struct memory_map_entry *entry; + struct e820_entry *e820_entry; + u64 e820_end; + u64 ram_end; + u64 page; + + /* + * Hyper-V enlightened snp guest boots kernel + * directly without bootloader and so roms, + * bios regions and reserve resources are not + * available. Set these callback to NULL. + */ + x86_platform.legacy.rtc = 0; + x86_platform.legacy.reserve_bios_regions = 0; + x86_platform.set_wallclock = set_rtc_noop; + x86_platform.get_wallclock = get_rtc_noop; + x86_init.resources.probe_roms = x86_init_noop; + x86_init.resources.reserve_resources = x86_init_noop; + x86_init.mpparse.find_smp_config = x86_init_noop; + x86_init.mpparse.get_smp_config = hv_snp_get_smp_config; + + /* + * Hyper-V SEV-SNP enlightened guest doesn't support ioapic + * and legacy APIC page read/write. Switch to hv apic here. + */ + disable_ioapic_support(); + + /* Get processor and mem info. */ + processor_count = *(u32 *)__va(EN_SEV_SNP_PROCESSOR_INFO_ADDR); + entry = (struct memory_map_entry *)__va(EN_SEV_SNP_MEM_INFO_ADDR); + + /* + * There is no bootloader/EFI firmware in the SEV SNP guest. + * E820 table in the memory just describes memory for kernel, + * ACPI table, cmdline, boot params and ramdisk. The dynamic + * data(e.g, vcpu nnumber and the rest memory layout) needs to + * be read from EN_SEV_SNP_PROCESSOR_INFO_ADDR. 
+ */ + for (; entry->numpages != 0; entry++) { + e820_entry = &e820_table->entries[ + e820_table->nr_entries - 1]; + e820_end = e820_entry->addr + e820_entry->size; + ram_end = (entry->starting_gpn + + entry->numpages) * PAGE_SIZE; + + if (e820_end < entry->starting_gpn * PAGE_SIZE) + e820_end = entry->starting_gpn * PAGE_SIZE; + + if (e820_end < ram_end) { + pr_info("Hyper-V: add e820 entry [mem %#018Lx-%#018Lx]\n", e820_end, ram_end - 1); + e820__range_add(e820_end, ram_end - e820_end, + E820_TYPE_RAM); + for (page = e820_end; page < ram_end; page += PAGE_SIZE) + pvalidate((unsigned long)__va(page), RMP_PG_SIZE_4K, true); + } + } +} + void __init hv_vtom_init(void) { /* diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 939373791249..84e024ffacd5 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -50,6 +50,21 @@ extern bool hv_isolation_type_en_snp(void); extern union hv_ghcb * __percpu *hv_ghcb_pg; +/* + * Hyper-V puts processor and memory layout info + * to this address in SEV-SNP enlightened guest. + */ +#define EN_SEV_SNP_PROCESSOR_INFO_ADDR 0x802000 +#define EN_SEV_SNP_MEM_INFO_ADDR 0x802018 + +struct memory_map_entry { + u64 starting_gpn; + u64 numpages; + u16 type; + u16 flags; + u32 reserved; +}; + int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -255,12 +270,14 @@ void hv_ghcb_msr_read(u64 msr, u64 *value); bool hv_ghcb_negotiate_protocol(void); void hv_ghcb_terminate(unsigned int set, unsigned int reason); void hv_vtom_init(void); +void hv_sev_init_mem_and_cpu(void); #else static inline void hv_ghcb_msr_write(u64 msr, u64 value) {} static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {} static inline bool hv_ghcb_negotiate_protocol(void) { return false; } static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {} static inline void hv_vtom_init(void) {} +static inline void hv_sev_init_mem_and_cpu(void) {} #endif extern bool hv_isolation_type_snp(void); diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 63a2bfbfe701..dea9b881180b 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -529,6 +529,9 @@ static void __init ms_hyperv_init_platform(void) if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT)) mark_tsc_unstable("running on Hyper-V"); + if (hv_isolation_type_en_snp()) + hv_sev_init_mem_and_cpu(); + hardlockup_detector_disable(); } From patchwork Mon May 15 16:59:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241843 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F286C7EE2E for ; Mon, 15 May 2023 17:01:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243552AbjEORBX (ORCPT ); Mon, 15 May 2023 13:01:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243452AbjEOQ7z (ORCPT ); Mon, 15 May 2023 12:59:55 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by 
From: Tianyu Lan
Subject: [RFC PATCH V6 13/14] x86/hyperv: Add smp support for sev-snp guest
Date: Mon, 15 May 2023 12:59:15 -0400
Message-Id: <20230515165917.1306922-14-ltykernel@gmail.com>
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

The
wakeup_secondary_cpu callback was populated with wakeup_cpu_via_vmgexit(), which doesn't work for Hyper-V: Hyper-V requires calling a Hyper-V specific hvcall to start APs. So override it with a Hyper-V specific hook that uses the sev_es_save_area data structure to start the AP. Signed-off-by: Tianyu Lan --- Change since RFC v5: * Remove some redundant structure definitions Change since RFC v3: * Replace struct sev_es_save_area with struct vmcb_save_area * Move code from mshyperv.c to ivm.c Change since RFC v2: * Add helper function to initialize segment * Fix some coding style --- arch/x86/hyperv/ivm.c | 98 +++++++++++++++++++++++++++++++ arch/x86/include/asm/mshyperv.h | 10 ++++ arch/x86/kernel/cpu/mshyperv.c | 13 +++- include/asm-generic/hyperv-tlfs.h | 3 +- 4 files changed, 121 insertions(+), 3 deletions(-) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 85e4378f052f..b7b8e1ba8223 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -22,11 +22,15 @@ #include #include #include +#include #ifdef CONFIG_AMD_MEM_ENCRYPT #define GHCB_USAGE_HYPERV_CALL 1 +static u8 ap_start_input_arg[PAGE_SIZE] __bss_decrypted __aligned(PAGE_SIZE); +static u8 ap_start_stack[PAGE_SIZE] __aligned(PAGE_SIZE); + union hv_ghcb { struct ghcb ghcb; struct { @@ -443,6 +447,100 @@ __init void hv_sev_init_mem_and_cpu(void) } } +#define hv_populate_vmcb_seg(seg, gdtr_base) \ +do { \ + if (seg.selector) { \ + seg.base = 0; \ + seg.limit = HV_AP_SEGMENT_LIMIT; \ + seg.attrib = *(u16 *)(gdtr_base + seg.selector + 5); \ + seg.attrib = (seg.attrib & 0xFF) | ((seg.attrib >> 4) & 0xF00); \ + } \ +} while (0) \ + +int hv_snp_boot_ap(int cpu, unsigned long start_ip) +{ + struct sev_es_save_area *vmsa = (struct sev_es_save_area *) + __get_free_page(GFP_KERNEL | __GFP_ZERO); + struct desc_ptr gdtr; + u64 ret, rmp_adjust, retry = 5; + struct hv_enable_vp_vtl *start_vp_input; + unsigned long flags; + + native_store_gdt(&gdtr); + + vmsa->gdtr.base = gdtr.address; + vmsa->gdtr.limit = gdtr.size; + + asm volatile("movl %%es, %%eax;" : "=a" (vmsa->es.selector)); + hv_populate_vmcb_seg(vmsa->es, vmsa->gdtr.base); + + asm volatile("movl %%cs, %%eax;" : "=a" (vmsa->cs.selector)); + hv_populate_vmcb_seg(vmsa->cs, vmsa->gdtr.base); + + asm volatile("movl %%ss, %%eax;" : "=a" (vmsa->ss.selector)); + hv_populate_vmcb_seg(vmsa->ss, vmsa->gdtr.base); + + asm volatile("movl %%ds, %%eax;" : "=a" (vmsa->ds.selector)); + hv_populate_vmcb_seg(vmsa->ds, vmsa->gdtr.base); + + vmsa->efer = native_read_msr(MSR_EFER); + + asm volatile("movq %%cr4, %%rax;" : "=a" (vmsa->cr4)); + asm volatile("movq %%cr3, %%rax;" : "=a" (vmsa->cr3)); + asm volatile("movq %%cr0, %%rax;" : "=a" (vmsa->cr0)); + + vmsa->xcr0 = 1; + vmsa->g_pat = HV_AP_INIT_GPAT_DEFAULT; + vmsa->rip = (u64)secondary_startup_64_no_verify; + vmsa->rsp = (u64)&ap_start_stack[PAGE_SIZE]; + + /* + * Set the SNP-specific fields for this VMSA: + * VMPL level + * SEV_FEATURES (matches the SEV STATUS MSR right shifted 2 bits) + */ + vmsa->vmpl = 0; + vmsa->sev_features = sev_status >> 2; + + /* + * Running at VMPL0 allows the kernel to change the VMSA bit for a page + * using the RMPADJUST instruction. However, for the instruction to + * succeed it must target the permissions of a lesser privileged + * (higher numbered) VMPL level, so use VMPL1 (refer to the RMPADJUST + * instruction in the AMD64 APM Volume 3). 
+ */ + rmp_adjust = RMPADJUST_VMSA_PAGE_BIT | 1; + ret = rmpadjust((unsigned long)vmsa, RMP_PG_SIZE_4K, + rmp_adjust); + if (ret != 0) { + pr_err("RMPADJUST(%llx) failed: %llx\n", (u64)vmsa, ret); + return ret; + } + + local_irq_save(flags); + start_vp_input = + (struct hv_enable_vp_vtl *)ap_start_input_arg; + memset(start_vp_input, 0, sizeof(*start_vp_input)); + start_vp_input->partition_id = -1; + start_vp_input->vp_index = cpu; + start_vp_input->target_vtl.target_vtl = ms_hyperv.vtl; + *(u64 *)&start_vp_input->vp_context = __pa(vmsa) | 1; + + do { + ret = hv_do_hypercall(HVCALL_START_VP, + start_vp_input, NULL); + } while (hv_result(ret) == HV_STATUS_TIME_OUT && retry--); + + if (!hv_result_success(ret)) { + pr_err("HvCallStartVirtualProcessor failed: %llx\n", ret); + goto done; + } + +done: + local_irq_restore(flags); + return ret; +} + void __init hv_vtom_init(void) { /* diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 84e024ffacd5..9ad2a0f21d68 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -65,6 +65,13 @@ struct memory_map_entry { u32 reserved; }; +/* + * DEFAULT INIT GPAT and SEGMENT LIMIT value in struct VMSA + * to start AP in enlightened SEV guest. + */ +#define HV_AP_INIT_GPAT_DEFAULT 0x0007040600070406ULL +#define HV_AP_SEGMENT_LIMIT 0xffffffff + int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -263,6 +270,7 @@ struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); +int hv_snp_boot_ap(int cpu, unsigned long start_ip); #ifdef CONFIG_AMD_MEM_ENCRYPT void hv_ghcb_msr_write(u64 msr, u64 value); @@ -271,6 +279,7 @@ bool hv_ghcb_negotiate_protocol(void); void hv_ghcb_terminate(unsigned int set, unsigned int reason); void hv_vtom_init(void); void hv_sev_init_mem_and_cpu(void); +int hv_snp_boot_ap(int cpu, unsigned long start_ip); #else static inline void hv_ghcb_msr_write(u64 msr, u64 value) {} static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {} @@ -278,6 +287,7 @@ static inline bool hv_ghcb_negotiate_protocol(void) { return false; } static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {} static inline void hv_vtom_init(void) {} static inline void hv_sev_init_mem_and_cpu(void) {} +static inline int hv_snp_boot_ap(int cpu, unsigned long start_ip) { return 0; } #endif extern bool hv_isolation_type_snp(void); diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index dea9b881180b..0c5f9f7bd7ba 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -295,6 +295,16 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus) native_smp_prepare_cpus(max_cpus); + /* + * Override wakeup_secondary_cpu_64 callback for SEV-SNP + * enlightened guest. 
+ */ + if (hv_isolation_type_en_snp()) + apic->wakeup_secondary_cpu_64 = hv_snp_boot_ap; + + if (!hv_root_partition) + return; + #ifdef CONFIG_X86_64 for_each_present_cpu(i) { if (i == 0) @@ -502,8 +512,7 @@ static void __init ms_hyperv_init_platform(void) # ifdef CONFIG_SMP smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu; - if (hv_root_partition) - smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus; + smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus; # endif /* diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h index f4e4cc4f965f..92dcc530350c 100644 --- a/include/asm-generic/hyperv-tlfs.h +++ b/include/asm-generic/hyperv-tlfs.h @@ -146,9 +146,9 @@ union hv_reference_tsc_msr { /* Declare the various hypercall operations. */ #define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE 0x0002 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST 0x0003 -#define HVCALL_ENABLE_VP_VTL 0x000f #define HVCALL_NOTIFY_LONG_SPIN_WAIT 0x0008 #define HVCALL_SEND_IPI 0x000b +#define HVCALL_ENABLE_VP_VTL 0x000f #define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX 0x0013 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX 0x0014 #define HVCALL_SEND_IPI_EX 0x0015 @@ -223,6 +223,7 @@ enum HV_GENERIC_SET_FORMAT { #define HV_STATUS_INVALID_PORT_ID 17 #define HV_STATUS_INVALID_CONNECTION_ID 18 #define HV_STATUS_INSUFFICIENT_BUFFERS 19 +#define HV_STATUS_TIME_OUT 120 #define HV_STATUS_VTL_ALREADY_ENABLED 134 /* From patchwork Mon May 15 16:59:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13241844 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68200C7EE26 for ; Mon, 15 May 2023 17:01:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243460AbjEORB1 (ORCPT ); Mon, 15 May 2023 13:01:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243456AbjEOQ7z (ORCPT ); Mon, 15 May 2023 12:59:55 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 130E683CE; Mon, 15 May 2023 09:59:47 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id d9443c01a7336-1ab032d9266so120602405ad.0; Mon, 15 May 2023 09:59:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684169986; x=1686761986; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=cmY2ej1hy4vAs/f4OlrbKuvNcHo8SUtv7o8IcsrQg3U=; b=llobHjZ8CaIv83lisuWYuHqNS76/tNXm2lgpfFKCeP6ap94oN1gkuO+vAaGen3l7Zf /5N9F6v7G42Et49vEQ4VYj0iNh3BUmWx45NHrOwY6DFCAIrau+ZWoft2KLtHIvm+J6M9 GjXehqZjYbmt+TDyDM0qFuICTBIMGYCZJjMeAojhXpVR0uqz0+acO0YybhbjaQSHPA55 7cMs9Y+n6PFLSzYGnlLM4tkK5Vg0NZFew5KLtO3NBkZ3+zlJQIsyO86fR30/uLY0m003 iDD9GC0tTbV8Lpcm8XbGD1Cds3XMYs+m+DcjhoIwNKfuS9Tb7GXTMDNPUoFoWObxFFES E3RA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684169986; x=1686761986; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=cmY2ej1hy4vAs/f4OlrbKuvNcHo8SUtv7o8IcsrQg3U=; 
b=UU5Q0/y4zuhNb40+zmiO122y1xSm3PTXBHhnMmuanAiuFU+fM1awCsG5UFzduwuyuc pmrsxW29Ke1Dh7f9FYzXRE+WJnNOHklGwFEj8B5gVO8ge5nGdMsgrmnlH6x19SmDxs57 Y7tZKDYR8E4GD/Irb+SKRx0M/nYbCuKPW6/w27EkMaM1wLwgCn3QnuRlfIVTRD7lIPuq 6fU46eph2U+oISgx+oTGa1uA+8gHNutTp9iXGQiFfqaCD4BhmCQiqX93h4aemCSHq8VG A8O2wZssfxR11/14fBBm9SIOHZplswKVA0kWPUWusTUcJKtgfOHIp5AY3Vr2LxuXEqjI L4Lw== X-Gm-Message-State: AC+VfDybsvqnf7wuu+SprOO44lBFmaDNjzPe2VBrLL2ogorXsSyDVUEq kz30QyUI46bC3MGvKvfJTs0= X-Google-Smtp-Source: ACHHUZ5ax8PMzMJyb4JE01QgIu1eN/8WniJ6gslKo//Mqlo5UEC9KUHdakOZkvioQ2T6mK0YMaHGpQ== X-Received: by 2002:a17:90a:e50d:b0:247:6a31:d59d with SMTP id t13-20020a17090ae50d00b002476a31d59dmr33951385pjy.1.1684169986489; Mon, 15 May 2023 09:59:46 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:85bb:dfc8:391f:ff73]) by smtp.gmail.com with ESMTPSA id x13-20020a17090aa38d00b0024df6bbf5d8sm2151pjp.30.2023.05.15.09.59.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 May 2023 09:59:46 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V6 14/14] x86/hyperv: Add hyperv-specific handling for VMMCALL under SEV-ES Date: Mon, 15 May 2023 12:59:16 -0400 Message-Id: <20230515165917.1306922-15-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com> References: <20230515165917.1306922-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add Hyperv-specific handling for faults caused by VMMCALL instructions. Signed-off-by: Tianyu Lan --- arch/x86/kernel/cpu/mshyperv.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 0c5f9f7bd7ba..3469b369e627 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -32,6 +32,7 @@ #include #include #include +#include /* Is Linux running as the root partition? 
*/ bool hv_root_partition; @@ -577,6 +578,20 @@ static bool __init ms_hyperv_msi_ext_dest_id(void) return eax & HYPERV_VS_PROPERTIES_EAX_EXTENDED_IOAPIC_RTE; } +static void hv_sev_es_hcall_prepare(struct ghcb *ghcb, struct pt_regs *regs) +{ + /* RAX and CPL are already in the GHCB */ + ghcb_set_rcx(ghcb, regs->cx); + ghcb_set_rdx(ghcb, regs->dx); + ghcb_set_r8(ghcb, regs->r8); +} + +static bool hv_sev_es_hcall_finish(struct ghcb *ghcb, struct pt_regs *regs) +{ + /* No checking of the return state needed */ + return true; +} + const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = { .name = "Microsoft Hyper-V", .detect = ms_hyperv_platform, @@ -584,4 +599,6 @@ const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = { .init.x2apic_available = ms_hyperv_x2apic_available, .init.msi_ext_dest_id = ms_hyperv_msi_ext_dest_id, .init.init_platform = ms_hyperv_init_platform, + .runtime.sev_es_hcall_prepare = hv_sev_es_hcall_prepare, + .runtime.sev_es_hcall_finish = hv_sev_es_hcall_finish, };
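
The two runtime callbacks above only do something useful in combination with the generic SEV-ES #VC handler: when a Hyper-V hypercall executes VMMCALL inside an SEV-ES/SEV-SNP guest, the resulting #VC exit copies RAX and the CPL into the GHCB, gives the hypervisor-specific prepare hook a chance to add further registers (here RCX, RDX and R8, which carry the Hyper-V hypercall control word and the input/output GPAs), performs the VMGEXIT, and finally lets the finish hook validate the returned state. A rough sketch of that flow, paraphrased from the in-tree vc_handle_vmmcall() logic rather than taken from this series (names and error handling are simplified):

/* Simplified paraphrase of the generic #VC VMMCALL handling, not part of this series */
static enum es_result vc_handle_vmmcall(struct ghcb *ghcb,
					struct es_em_ctxt *ctxt)
{
	enum es_result ret;

	/* RAX and CPL always go into the GHCB, see hv_sev_es_hcall_prepare() */
	ghcb_set_rax(ghcb, ctxt->regs->ax);
	ghcb_set_cpl(ghcb, user_mode(ctxt->regs) ? 3 : 0);

	/* Hypervisor-specific extra state, e.g. RCX/RDX/R8 for Hyper-V */
	if (x86_platform.hyper.sev_es_hcall_prepare)
		x86_platform.hyper.sev_es_hcall_prepare(ghcb, ctxt->regs);

	/* Hand the VMMCALL exit to the hypervisor via VMGEXIT */
	ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_VMMCALL, 0, 0);
	if (ret != ES_OK)
		return ret;

	if (!ghcb_rax_is_valid(ghcb))
		return ES_VMM_ERROR;

	/* Propagate the hypercall result back into the guest's RAX */
	ctxt->regs->ax = ghcb->save.rax;

	/* Hypervisor-specific validation; Hyper-V accepts any result */
	if (x86_platform.hyper.sev_es_hcall_finish &&
	    !x86_platform.hyper.sev_es_hcall_finish(ghcb, ctxt->regs))
		return ES_VMM_ERROR;

	return ES_OK;
}

This is why hv_sev_es_hcall_prepare() only needs to add the three extra registers and hv_sev_es_hcall_finish() can simply return true.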
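
One further note on hv_populate_vmcb_seg() in the SMP patch: the VMSA stores segment attributes in the packed VMCB format (type/S/DPL/P in bits 0-7, AVL/L/D/G in bits 8-11), while a GDT descriptor keeps the access byte at offset 5 and the AVL/L/D/G flags in the high nibble of byte 6. The macro's 16-bit read at gdtr_base + selector + 5, followed by the shift-and-mask, is exactly that repacking. A standalone, user-space illustration of the same conversion (gdt_to_vmcb_attrib() is a hypothetical helper written for this note, not code from the patch):

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical helper, for illustration only: repack the two bytes at
 * offset 5 of an 8-byte GDT descriptor (access byte, then
 * limit[19:16] | flags) into the VMCB/VMSA segment attribute layout
 * used by struct sev_es_save_area.
 */
static uint16_t gdt_to_vmcb_attrib(const uint8_t *desc)
{
	uint16_t raw = desc[5] | ((uint16_t)desc[6] << 8);

	/* bits 0-7: type, S, DPL, P; bits 8-11: AVL, L, D/B, G */
	return (raw & 0xFF) | ((raw >> 4) & 0xF00);
}

int main(void)
{
	/* 64-bit flat kernel code segment: access byte 0x9B, flags/limit byte 0xAF */
	uint8_t desc[8] = { 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x9B, 0xAF, 0x00 };

	/* Prints 0xA9B: type 0xB, S=1, DPL=0, P=1, L=1, G=1 */
	printf("attrib = 0x%03X\n", gdt_to_vmcb_attrib(desc));
	return 0;
}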