From patchwork Mon May 15 16:59:15 2023
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 13241843
From: Tianyu Lan
To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
    seanjc@google.com, pbonzini@redhat.com, jgross@suse.com,
    tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com,
    peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com,
    akpm@linux-foundation.org,
    anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com,
    adrian.hunter@intel.com, daniel.sneddon@linux.intel.com,
    alexander.shishkin@linux.intel.com, sandipan.das@amd.com,
    ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com,
    thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com,
    tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com
Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org
Subject: [RFC PATCH V6 13/14] x86/hyperv: Add smp support for sev-snp guest
Date: Mon, 15 May 2023 12:59:15 -0400
Message-Id: <20230515165917.1306922-14-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230515165917.1306922-1-ltykernel@gmail.com>
References: <20230515165917.1306922-1-ltykernel@gmail.com>

From: Tianyu Lan

The wakeup_secondary_cpu callback was populated with
wakeup_cpu_via_vmgexit(), which doesn't work for Hyper-V:
Hyper-V requires that APs be started via a Hyper-V specific
hypercall. So override it with a Hyper-V specific hook that
sets up the AP's struct sev_es_save_area and starts the AP
via the HVCALL_START_VP hypercall.

Signed-off-by: Tianyu Lan
---
Change since RFC v5:
       * Remove some redundant structure definitions

Change since RFC v3:
       * Replace struct sev_es_save_area with struct vmcb_save_area
       * Move code from mshyperv.c to ivm.c

Change since RFC v2:
       * Add helper function to initialize segment
       * Fix some coding style issues
---
 arch/x86/hyperv/ivm.c             | 98 +++++++++++++++++++++++++++++++
 arch/x86/include/asm/mshyperv.h   | 10 ++++
 arch/x86/kernel/cpu/mshyperv.c    | 13 +++-
 include/asm-generic/hyperv-tlfs.h |  3 +-
 4 files changed, 121 insertions(+), 3 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 85e4378f052f..b7b8e1ba8223 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -22,11 +22,15 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_AMD_MEM_ENCRYPT

 #define GHCB_USAGE_HYPERV_CALL	1

+static u8 ap_start_input_arg[PAGE_SIZE] __bss_decrypted __aligned(PAGE_SIZE);
+static u8 ap_start_stack[PAGE_SIZE] __aligned(PAGE_SIZE);
+
 union hv_ghcb {
 	struct ghcb ghcb;
 	struct {
@@ -443,6 +447,100 @@ __init void hv_sev_init_mem_and_cpu(void)
 	}
 }

+#define hv_populate_vmcb_seg(seg, gdtr_base)				\
+do {									\
+	if (seg.selector) {						\
+		seg.base = 0;						\
+		seg.limit = HV_AP_SEGMENT_LIMIT;			\
+		seg.attrib = *(u16 *)(gdtr_base + seg.selector + 5);	\
+		seg.attrib = (seg.attrib & 0xFF) | ((seg.attrib >> 4) & 0xF00); \
+	}								\
+} while (0)
+
+int hv_snp_boot_ap(int cpu, unsigned long start_ip)
+{
+	struct sev_es_save_area *vmsa = (struct sev_es_save_area *)
+		__get_free_page(GFP_KERNEL | __GFP_ZERO);
+	struct desc_ptr gdtr;
+	u64 ret, rmp_adjust, retry = 5;
+	struct hv_enable_vp_vtl *start_vp_input;
+	unsigned long flags;
+
+	native_store_gdt(&gdtr);
+
+	vmsa->gdtr.base = gdtr.address;
+	vmsa->gdtr.limit = gdtr.size;
+
+	asm volatile("movl %%es, %%eax;" : "=a" (vmsa->es.selector));
+	hv_populate_vmcb_seg(vmsa->es, vmsa->gdtr.base);
+
+	asm volatile("movl %%cs, %%eax;" : "=a" (vmsa->cs.selector));
+	hv_populate_vmcb_seg(vmsa->cs, vmsa->gdtr.base);
+
+	asm volatile("movl %%ss, %%eax;" : "=a" (vmsa->ss.selector));
+	hv_populate_vmcb_seg(vmsa->ss, vmsa->gdtr.base);
+
+	asm volatile("movl %%ds, %%eax;" : "=a" (vmsa->ds.selector));
+	hv_populate_vmcb_seg(vmsa->ds, vmsa->gdtr.base);
+
+	vmsa->efer = native_read_msr(MSR_EFER);
+
+	asm volatile("movq %%cr4, %%rax;" : "=a" (vmsa->cr4));
+	asm volatile("movq %%cr3, %%rax;" : "=a" (vmsa->cr3));
+	asm volatile("movq %%cr0, %%rax;" : "=a" (vmsa->cr0));
+
+	vmsa->xcr0 = 1;
+	vmsa->g_pat = HV_AP_INIT_GPAT_DEFAULT;
+	vmsa->rip = (u64)secondary_startup_64_no_verify;
+	vmsa->rsp = (u64)&ap_start_stack[PAGE_SIZE];
+
+	/*
+	 * Set the SNP-specific fields for this VMSA:
+	 *   VMPL level
+	 *   SEV_FEATURES (matches the SEV STATUS MSR right shifted 2 bits)
+	 */
+	vmsa->vmpl = 0;
+	vmsa->sev_features = sev_status >> 2;
+
+	/*
+	 * Running at VMPL0 allows the kernel to change the VMSA bit for a page
+	 * using the RMPADJUST instruction. However, for the instruction to
+	 * succeed it must target the permissions of a lesser privileged
+	 * (higher numbered) VMPL level, so use VMPL1 (refer to the RMPADJUST
+	 * instruction in the AMD64 APM Volume 3).
+	 */
+	rmp_adjust = RMPADJUST_VMSA_PAGE_BIT | 1;
+	ret = rmpadjust((unsigned long)vmsa, RMP_PG_SIZE_4K,
+			rmp_adjust);
+	if (ret != 0) {
+		pr_err("RMPADJUST(%llx) failed: %llx\n", (u64)vmsa, ret);
+		return ret;
+	}
+
+	local_irq_save(flags);
+	start_vp_input =
+		(struct hv_enable_vp_vtl *)ap_start_input_arg;
+	memset(start_vp_input, 0, sizeof(*start_vp_input));
+	start_vp_input->partition_id = -1;
+	start_vp_input->vp_index = cpu;
+	start_vp_input->target_vtl.target_vtl = ms_hyperv.vtl;
+	*(u64 *)&start_vp_input->vp_context = __pa(vmsa) | 1;
+
+	do {
+		ret = hv_do_hypercall(HVCALL_START_VP,
+				      start_vp_input, NULL);
+	} while (hv_result(ret) == HV_STATUS_TIME_OUT && retry--);
+
+	if (!hv_result_success(ret)) {
+		pr_err("HvCallStartVirtualProcessor failed: %llx\n", ret);
+		goto done;
+	}
+
+done:
+	local_irq_restore(flags);
+	return ret;
+}
+
 void __init hv_vtom_init(void)
 {
 	/*
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 84e024ffacd5..9ad2a0f21d68 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -65,6 +65,13 @@ struct memory_map_entry {
 	u32 reserved;
 };

+/*
+ * DEFAULT INIT GPAT and SEGMENT LIMIT value in struct VMSA
+ * to start AP in enlightened SEV guest.
+ */
+#define HV_AP_INIT_GPAT_DEFAULT		0x0007040600070406ULL
+#define HV_AP_SEGMENT_LIMIT		0xffffffff
+
 int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages);
 int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id);
 int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
@@ -263,6 +270,7 @@ struct irq_domain *hv_create_pci_msi_domain(void);
 int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector,
 		struct hv_interrupt_entry *entry);
 int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
+int hv_snp_boot_ap(int cpu, unsigned long start_ip);

 #ifdef CONFIG_AMD_MEM_ENCRYPT
 void hv_ghcb_msr_write(u64 msr, u64 value);
@@ -271,6 +279,7 @@ bool hv_ghcb_negotiate_protocol(void);
 void hv_ghcb_terminate(unsigned int set, unsigned int reason);
 void hv_vtom_init(void);
 void hv_sev_init_mem_and_cpu(void);
+int hv_snp_boot_ap(int cpu, unsigned long start_ip);
 #else
 static inline void hv_ghcb_msr_write(u64 msr, u64 value) {}
 static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {}
@@ -278,6 +287,7 @@ static inline bool hv_ghcb_negotiate_protocol(void) { return false; }
 static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {}
 static inline void hv_vtom_init(void) {}
 static inline void hv_sev_init_mem_and_cpu(void) {}
+static inline int hv_snp_boot_ap(int cpu, unsigned long start_ip) { return 0; }
 #endif

 extern bool hv_isolation_type_snp(void);
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index dea9b881180b..0c5f9f7bd7ba 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -295,6 +295,16 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus)

 	native_smp_prepare_cpus(max_cpus);

+	/*
+	 * Override wakeup_secondary_cpu_64 callback for SEV-SNP
+	 * enlightened guest.
+	 */
+	if (hv_isolation_type_en_snp())
+		apic->wakeup_secondary_cpu_64 = hv_snp_boot_ap;
+
+	if (!hv_root_partition)
+		return;
+
 #ifdef CONFIG_X86_64
 	for_each_present_cpu(i) {
 		if (i == 0)
@@ -502,8 +512,7 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
 	smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;

-	if (hv_root_partition)
-		smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus;
+	smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus;
 # endif

 	/*
diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h
index f4e4cc4f965f..92dcc530350c 100644
--- a/include/asm-generic/hyperv-tlfs.h
+++ b/include/asm-generic/hyperv-tlfs.h
@@ -146,9 +146,9 @@ union hv_reference_tsc_msr {
 /* Declare the various hypercall operations. */
 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE	0x0002
 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST	0x0003
-#define HVCALL_ENABLE_VP_VTL			0x000f
 #define HVCALL_NOTIFY_LONG_SPIN_WAIT		0x0008
 #define HVCALL_SEND_IPI				0x000b
+#define HVCALL_ENABLE_VP_VTL			0x000f
 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX	0x0013
 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX	0x0014
 #define HVCALL_SEND_IPI_EX			0x0015
@@ -223,6 +223,7 @@ enum HV_GENERIC_SET_FORMAT {
 #define HV_STATUS_INVALID_PORT_ID		17
 #define HV_STATUS_INVALID_CONNECTION_ID		18
 #define HV_STATUS_INSUFFICIENT_BUFFERS		19
+#define HV_STATUS_TIME_OUT			120
 #define HV_STATUS_VTL_ALREADY_ENABLED		134

 /*
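
For readers unfamiliar with the VMCB/VMSA segment descriptor encoding, here is a
small standalone sketch (userspace C with hypothetical GDT byte values; not part
of the patch) of the byte shuffling that hv_populate_vmcb_seg() performs: it
reads the two descriptor bytes at offset 5 of the selected GDT entry and repacks
them so that the access byte lands in attrib bits 0-7 and the G/D/L/AVL flags
nibble lands in bits 8-11.

#include <stdint.h>
#include <stdio.h>

/*
 * Repack GDT descriptor bytes 5..6 into the VMSA/VMCB "attrib" layout,
 * mirroring the expression used by hv_populate_vmcb_seg() above.
 */
static uint16_t pack_vmsa_attrib(const uint8_t *gdt, uint16_t selector)
{
	/* Bytes 5 and 6 of the 8-byte descriptor, read as a little-endian u16. */
	uint16_t raw = (uint16_t)gdt[selector + 5] |
		       (uint16_t)(gdt[selector + 6] << 8);

	/* Access byte stays in bits 0-7; the flags nibble moves to bits 8-11. */
	return (raw & 0xFF) | ((raw >> 4) & 0xF00);
}

int main(void)
{
	/* Hypothetical GDT with a 64-bit code segment descriptor at selector 0x10. */
	uint8_t gdt[32] = { 0 };

	gdt[0x10 + 5] = 0x9b;	/* access byte: present, DPL 0, code, accessed */
	gdt[0x10 + 6] = 0xaf;	/* flags nibble G=1, L=1; limit bits 19:16 = 0xf */

	/* Prints "attrib = 0xa9b": access byte 0x9b plus flags nibble 0xa << 8. */
	printf("attrib = 0x%x\n", pack_vmsa_attrib(gdt, 0x10));
	return 0;
}

The base and limit are set separately in the VMSA (base 0 and
HV_AP_SEGMENT_LIMIT in the macro) because the attrib field carries only the
type and flag bits, not the base/limit fields encoded in the descriptor.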