From patchwork Thu Oct 21 17:42:59 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12575881
Date: Thu, 21 Oct 2021 10:42:59 -0700
In-Reply-To: <20211021174303.385706-1-pgonda@google.com>
Message-Id: <20211021174303.385706-2-pgonda@google.com>
References: <20211021174303.385706-1-pgonda@google.com>
Subject: [PATCH 1/5 V11] KVM: SEV: Refactor out sev_es_state struct
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Tom Lendacky, Marc Orr, Paolo Bonzini, Sean Christopherson, David Rientjes, "Dr
. David Alan Gilbert" , Brijesh Singh , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , linux-kernel@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move SEV-ES vCPU metadata into new sev_es_state struct from vcpu_svm. Signed-off-by: Peter Gonda Suggested-by: Tom Lendacky Acked-by: Tom Lendacky Cc: Marc Orr Cc: Paolo Bonzini Cc: Sean Christopherson Cc: David Rientjes Cc: Dr. David Alan Gilbert Cc: Brijesh Singh Cc: Tom Lendacky Cc: Vitaly Kuznetsov Cc: Wanpeng Li Cc: Jim Mattson Cc: Joerg Roedel Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Sean Christopherson --- arch/x86/kvm/svm/sev.c | 83 +++++++++++++++++++++--------------------- arch/x86/kvm/svm/svm.c | 8 ++-- arch/x86/kvm/svm/svm.h | 26 +++++++------ 3 files changed, 61 insertions(+), 56 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 0d21d59936e5..109b223c0b58 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -590,7 +590,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm) * traditional VMSA as it has been built so far (in prep * for LAUNCH_UPDATE_VMSA) to be the initial SEV-ES state. */ - memcpy(svm->vmsa, save, sizeof(*save)); + memcpy(svm->sev_es.vmsa, save, sizeof(*save)); return 0; } @@ -612,11 +612,11 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu, * the VMSA memory content (i.e it will write the same memory region * with the guest's key), so invalidate it first. */ - clflush_cache_range(svm->vmsa, PAGE_SIZE); + clflush_cache_range(svm->sev_es.vmsa, PAGE_SIZE); vmsa.reserved = 0; vmsa.handle = to_kvm_svm(kvm)->sev_info.handle; - vmsa.address = __sme_pa(svm->vmsa); + vmsa.address = __sme_pa(svm->sev_es.vmsa); vmsa.len = PAGE_SIZE; ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error); if (ret) @@ -2031,16 +2031,16 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu) svm = to_svm(vcpu); if (vcpu->arch.guest_state_protected) - sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE); - __free_page(virt_to_page(svm->vmsa)); + sev_flush_guest_memory(svm, svm->sev_es.vmsa, PAGE_SIZE); + __free_page(virt_to_page(svm->sev_es.vmsa)); - if (svm->ghcb_sa_free) - kfree(svm->ghcb_sa); + if (svm->sev_es.ghcb_sa_free) + kfree(svm->sev_es.ghcb_sa); } static void dump_ghcb(struct vcpu_svm *svm) { - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; unsigned int nbits; /* Re-use the dump_invalid_vmcb module parameter */ @@ -2066,7 +2066,7 @@ static void dump_ghcb(struct vcpu_svm *svm) static void sev_es_sync_to_ghcb(struct vcpu_svm *svm) { struct kvm_vcpu *vcpu = &svm->vcpu; - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; /* * The GHCB protocol so far allows for the following data @@ -2086,7 +2086,7 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm) { struct vmcb_control_area *control = &svm->vmcb->control; struct kvm_vcpu *vcpu = &svm->vcpu; - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; u64 exit_code; /* @@ -2133,7 +2133,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm) struct ghcb *ghcb; u64 exit_code = 0; - ghcb = svm->ghcb; + ghcb = svm->sev_es.ghcb; /* Only GHCB Usage code 0 is supported */ if (ghcb->ghcb_usage) @@ -2251,33 +2251,34 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm) void sev_es_unmap_ghcb(struct vcpu_svm *svm) { - if (!svm->ghcb) + if 
(!svm->sev_es.ghcb) return; - if (svm->ghcb_sa_free) { + if (svm->sev_es.ghcb_sa_free) { /* * The scratch area lives outside the GHCB, so there is a * buffer that, depending on the operation performed, may * need to be synced, then freed. */ - if (svm->ghcb_sa_sync) { + if (svm->sev_es.ghcb_sa_sync) { kvm_write_guest(svm->vcpu.kvm, - ghcb_get_sw_scratch(svm->ghcb), - svm->ghcb_sa, svm->ghcb_sa_len); - svm->ghcb_sa_sync = false; + ghcb_get_sw_scratch(svm->sev_es.ghcb), + svm->sev_es.ghcb_sa, + svm->sev_es.ghcb_sa_len); + svm->sev_es.ghcb_sa_sync = false; } - kfree(svm->ghcb_sa); - svm->ghcb_sa = NULL; - svm->ghcb_sa_free = false; + kfree(svm->sev_es.ghcb_sa); + svm->sev_es.ghcb_sa = NULL; + svm->sev_es.ghcb_sa_free = false; } - trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->ghcb); + trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->sev_es.ghcb); sev_es_sync_to_ghcb(svm); - kvm_vcpu_unmap(&svm->vcpu, &svm->ghcb_map, true); - svm->ghcb = NULL; + kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map, true); + svm->sev_es.ghcb = NULL; } void pre_sev_run(struct vcpu_svm *svm, int cpu) @@ -2307,7 +2308,7 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu) static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len) { struct vmcb_control_area *control = &svm->vmcb->control; - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; u64 ghcb_scratch_beg, ghcb_scratch_end; u64 scratch_gpa_beg, scratch_gpa_end; void *scratch_va; @@ -2343,7 +2344,7 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len) return false; } - scratch_va = (void *)svm->ghcb; + scratch_va = (void *)svm->sev_es.ghcb; scratch_va += (scratch_gpa_beg - control->ghcb_gpa); } else { /* @@ -2373,12 +2374,12 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len) * the vCPU next time (i.e. a read was requested so the data * must be written back to the guest memory). 
*/ - svm->ghcb_sa_sync = sync; - svm->ghcb_sa_free = true; + svm->sev_es.ghcb_sa_sync = sync; + svm->sev_es.ghcb_sa_free = true; } - svm->ghcb_sa = scratch_va; - svm->ghcb_sa_len = len; + svm->sev_es.ghcb_sa = scratch_va; + svm->sev_es.ghcb_sa_len = len; return true; } @@ -2497,15 +2498,15 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) return -EINVAL; } - if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->ghcb_map)) { + if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) { /* Unable to map GHCB from guest */ vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n", ghcb_gpa); return -EINVAL; } - svm->ghcb = svm->ghcb_map.hva; - ghcb = svm->ghcb_map.hva; + svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva; + ghcb = svm->sev_es.ghcb_map.hva; trace_kvm_vmgexit_enter(vcpu->vcpu_id, ghcb); @@ -2528,7 +2529,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) ret = kvm_sev_es_mmio_read(vcpu, control->exit_info_1, control->exit_info_2, - svm->ghcb_sa); + svm->sev_es.ghcb_sa); break; case SVM_VMGEXIT_MMIO_WRITE: if (!setup_vmgexit_scratch(svm, false, control->exit_info_2)) @@ -2537,7 +2538,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) ret = kvm_sev_es_mmio_write(vcpu, control->exit_info_1, control->exit_info_2, - svm->ghcb_sa); + svm->sev_es.ghcb_sa); break; case SVM_VMGEXIT_NMI_COMPLETE: ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_IRET); @@ -2587,8 +2588,8 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in) if (!setup_vmgexit_scratch(svm, in, svm->vmcb->control.exit_info_2)) return -EINVAL; - return kvm_sev_es_string_io(&svm->vcpu, size, port, - svm->ghcb_sa, svm->ghcb_sa_len / size, in); + return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa, + svm->sev_es.ghcb_sa_len / size, in); } void sev_es_init_vmcb(struct vcpu_svm *svm) @@ -2603,7 +2604,7 @@ void sev_es_init_vmcb(struct vcpu_svm *svm) * VMCB page. Do not include the encryption mask on the VMSA physical * address since hardware will access it using the guest key. */ - svm->vmcb->control.vmsa_pa = __pa(svm->vmsa); + svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa); /* Can't intercept CR register access, HV can't modify CR registers */ svm_clr_intercept(svm, INTERCEPT_CR0_READ); @@ -2675,8 +2676,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) struct vcpu_svm *svm = to_svm(vcpu); /* First SIPI: Use the values as initially set by the VMM */ - if (!svm->received_first_sipi) { - svm->received_first_sipi = true; + if (!svm->sev_es.received_first_sipi) { + svm->sev_es.received_first_sipi = true; return; } @@ -2685,8 +2686,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a * non-zero value. 
*/ - if (!svm->ghcb) + if (!svm->sev_es.ghcb) return; - ghcb_set_sw_exit_info_2(svm->ghcb, 1); + ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1); } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 989685098b3e..68294491c23d 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1372,7 +1372,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT); if (vmsa_page) - svm->vmsa = page_address(vmsa_page); + svm->sev_es.vmsa = page_address(vmsa_page); svm->guest_state_loaded = false; @@ -2762,11 +2762,11 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err) { struct vcpu_svm *svm = to_svm(vcpu); - if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->ghcb)) + if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->sev_es.ghcb)) return kvm_complete_insn_gp(vcpu, err); - ghcb_set_sw_exit_info_1(svm->ghcb, 1); - ghcb_set_sw_exit_info_2(svm->ghcb, + ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 1); + ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, X86_TRAP_GP | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 5d30db599e10..6d8d762d208f 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -123,6 +123,20 @@ struct svm_nested_state { bool initialized; }; +struct vcpu_sev_es_state { + /* SEV-ES support */ + struct vmcb_save_area *vmsa; + struct ghcb *ghcb; + struct kvm_host_map ghcb_map; + bool received_first_sipi; + + /* SEV-ES scratch area support */ + void *ghcb_sa; + u32 ghcb_sa_len; + bool ghcb_sa_sync; + bool ghcb_sa_free; +}; + struct vcpu_svm { struct kvm_vcpu vcpu; /* vmcb always points at current_vmcb->ptr, it's purely a shorthand. 
*/ @@ -183,17 +197,7 @@ struct vcpu_svm { DECLARE_BITMAP(write, MAX_DIRECT_ACCESS_MSRS); } shadow_msr_intercept; - /* SEV-ES support */ - struct vmcb_save_area *vmsa; - struct ghcb *ghcb; - struct kvm_host_map ghcb_map; - bool received_first_sipi; - - /* SEV-ES scratch area support */ - void *ghcb_sa; - u32 ghcb_sa_len; - bool ghcb_sa_sync; - bool ghcb_sa_free; + struct vcpu_sev_es_state sev_es; bool guest_state_loaded; };
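To make the refactor concrete, here is a minimal before/after sketch of the access pattern this patch changes (the field names come from the diff above; the surrounding condition is illustrative only):

	/* Before this patch: SEV-ES state lives directly in vcpu_svm. */
	if (svm->ghcb)
		ghcb_set_sw_exit_info_2(svm->ghcb, 1);

	/* After this patch: the same state is grouped under svm->sev_es. */
	if (svm->sev_es.ghcb)
		ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);

The new struct carries no behavioral change; it gives the later patches in this series a single unit of SEV-ES state that can be handed from one vCPU to another.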
From patchwork Thu Oct 21 17:43:00 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12575883
Date: Thu, 21 Oct 2021 10:43:00 -0700
In-Reply-To: <20211021174303.385706-1-pgonda@google.com>
Message-Id: <20211021174303.385706-3-pgonda@google.com>
References: <20211021174303.385706-1-pgonda@google.com>
Subject: [PATCH V11 2/5] KVM: SEV: Add support for SEV intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, Paolo Bonzini, David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh, Tom Lendacky, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin", linux-kernel@vger.kernel.org

For SEV to work with intra host migration, contents of the SEV info struct such as the ASID (used to index the encryption key in the AMD SP) and the list of memory regions need to be transferred to the target VM. This change adds a command for a target VMM to get a source SEV VM's sev info.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Paolo Bonzini
---
 Documentation/virt/kvm/api.rst  | 15 ++++
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm/sev.c          | 137 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          | 1 +
 arch/x86/kvm/svm/svm.h          | 2 +
 arch/x86/kvm/x86.c              | 6 ++
 include/uapi/linux/kvm.h        | 1 +
 7 files changed, 163 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index a6729c8cf063..df6f56d689e2 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6706,6 +6706,21 @@ MAP_SHARED mmap will result in an -EINVAL return.
 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.
 
+7.29 KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM
+-----------------------------------------
+
+Architectures: x86 SEV enabled
+Type: vm
+Parameters: args[0] is the fd of the source vm
+Returns: 0 on success
+
+This capability enables userspace to migrate the encryption context from the VM
+indicated by the fd to the VM this is called on.
+
+This is intended to support intra-host migration of VMs between userspace VMMs,
+with in-guest workloads scheduled by the host. This allows for upgrading the
+VMM process without interrupting the guest.
+
 8. Other capabilities.
====================== diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index f8f48a7ec577..422869707466 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1468,6 +1468,7 @@ struct kvm_x86_ops { int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd); + int (*vm_migrate_protected_vm_from)(struct kvm *kvm, unsigned int source_fd); int (*get_msr_feature)(struct kvm_msr_entry *entry); diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 109b223c0b58..2c2f724c9096 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1529,6 +1529,143 @@ static bool cmd_allowed_from_miror(u32 cmd_id) return false; } +static int sev_lock_for_migration(struct kvm *kvm) +{ + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + + /* + * Bail if this VM is already involved in a migration to avoid deadlock + * between two VMs trying to migrate to/from each other. + */ + if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1)) + return -EBUSY; + + mutex_lock(&kvm->lock); + + return 0; +} + +static void sev_unlock_after_migration(struct kvm *kvm) +{ + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + + mutex_unlock(&kvm->lock); + atomic_set_release(&sev->migration_in_progress, 0); +} + + +static int sev_lock_vcpus_for_migration(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu; + int i, j; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (mutex_lock_killable(&vcpu->mutex)) + goto out_unlock; + } + + return 0; + +out_unlock: + kvm_for_each_vcpu(j, vcpu, kvm) { + if (i == j) + break; + + mutex_unlock(&vcpu->mutex); + } + return -EINTR; +} + +static void sev_unlock_vcpus_for_migration(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu; + int i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + mutex_unlock(&vcpu->mutex); + } +} + +static void sev_migrate_from(struct kvm_sev_info *dst, + struct kvm_sev_info *src) +{ + dst->active = true; + dst->asid = src->asid; + dst->misc_cg = src->misc_cg; + dst->handle = src->handle; + dst->pages_locked = src->pages_locked; + + src->asid = 0; + src->active = false; + src->handle = 0; + src->pages_locked = 0; + src->misc_cg = NULL; + + INIT_LIST_HEAD(&dst->regions_list); + list_replace_init(&src->regions_list, &dst->regions_list); +} + +int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) +{ + struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info; + struct file *source_kvm_file; + struct kvm *source_kvm; + struct kvm_vcpu *vcpu; + int i, ret; + + ret = sev_lock_for_migration(kvm); + if (ret) + return ret; + + if (sev_guest(kvm)) { + ret = -EINVAL; + goto out_unlock; + } + + source_kvm_file = fget(source_fd); + if (!file_is_kvm(source_kvm_file)) { + ret = -EBADF; + goto out_fput; + } + + source_kvm = source_kvm_file->private_data; + ret = sev_lock_for_migration(source_kvm); + if (ret) + goto out_fput; + + if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) { + ret = -EINVAL; + goto out_source; + } + ret = sev_lock_vcpus_for_migration(kvm); + if (ret) + goto out_dst_vcpu; + ret = sev_lock_vcpus_for_migration(source_kvm); + if (ret) + goto out_source_vcpu; + + sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info); + kvm_for_each_vcpu(i, vcpu, source_kvm) { + kvm_vcpu_reset(vcpu, /* init_event= */ false); + } + ret = 0; + +out_source_vcpu: + sev_unlock_vcpus_for_migration(source_kvm); + +out_dst_vcpu: + 
sev_unlock_vcpus_for_migration(kvm); + +out_source: + sev_unlock_after_migration(source_kvm); +out_fput: + if (source_kvm_file) + fput(source_kvm_file); +out_unlock: + sev_unlock_after_migration(kvm); + return ret; +} + int svm_mem_enc_op(struct kvm *kvm, void __user *argp) { struct kvm_sev_cmd sev_cmd; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 68294491c23d..c2e25ae4757f 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4637,6 +4637,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .mem_enc_unreg_region = svm_unregister_enc_region, .vm_copy_enc_context_from = svm_vm_copy_asid_from, + .vm_migrate_protected_vm_from = svm_vm_migrate_from, .can_emulate_instruction = svm_can_emulate_instruction, diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 6d8d762d208f..d7b44b37dfcf 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -80,6 +80,7 @@ struct kvm_sev_info { u64 ap_jump_table; /* SEV-ES AP Jump Table address */ struct kvm *enc_context_owner; /* Owner of copied encryption context */ struct misc_cg *misc_cg; /* For misc cgroup accounting */ + atomic_t migration_in_progress; }; struct kvm_svm { @@ -557,6 +558,7 @@ int svm_register_enc_region(struct kvm *kvm, int svm_unregister_enc_region(struct kvm *kvm, struct kvm_enc_region *range); int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd); +int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd); void pre_sev_run(struct vcpu_svm *svm, int cpu); void __init sev_set_cpu_caps(void); void __init sev_hardware_setup(void); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 0c8b5129effd..c80fa1d378c9 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5665,6 +5665,12 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, if (kvm_x86_ops.vm_copy_enc_context_from) r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]); return r; + case KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM: + r = -EINVAL; + if (kvm_x86_ops.vm_migrate_protected_vm_from) + r = kvm_x86_ops.vm_migrate_protected_vm_from( + kvm, cap->args[0]); + return r; case KVM_CAP_EXIT_HYPERCALL: if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) { r = -EINVAL; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index a067410ebea5..77b292ed01c1 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1112,6 +1112,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_BINARY_STATS_FD 203 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204 #define KVM_CAP_ARM_MTE 205 +#define KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM 206 #ifdef KVM_CAP_IRQ_ROUTING
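A minimal userspace sketch of how a VMM would invoke the new capability, mirroring the selftest added later in this series (the wrapper name and the omission of error handling are illustrative assumptions):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Called on the destination VM fd; args[0] names the source VM fd. */
	static int sev_intra_host_migrate(int dst_vm_fd, int src_vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
			.args = { (__u64)src_vm_fd },
		};

		return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
	}

On success the destination VM owns the ASID, SEV handle, and pinned-region list, and the source VM is left inactive.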
From patchwork Thu Oct 21 17:43:01 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12575885
Date: Thu, 21 Oct 2021 10:43:01 -0700
In-Reply-To: <20211021174303.385706-1-pgonda@google.com>
Message-Id: <20211021174303.385706-4-pgonda@google.com>
References: <20211021174303.385706-1-pgonda@google.com>
Subject: [PATCH V11 3/5] KVM: SEV: Add support for SEV-ES intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Marc Orr, Paolo Bonzini, Sean Christopherson, David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh, Tom Lendacky, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin", linux-kernel@vger.kernel.org

For SEV-ES to work with intra host migration, the VMSAs, GHCB metadata, and other SEV-ES info need to be preserved along with the guest's memory.

Signed-off-by: Peter Gonda
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Peter Anvin" Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- arch/x86/kvm/svm/sev.c | 50 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 49 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 2c2f724c9096..d8ce93fd1129 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1605,6 +1605,48 @@ static void sev_migrate_from(struct kvm_sev_info *dst, list_replace_init(&src->regions_list, &dst->regions_list); } +static int sev_es_migrate_from(struct kvm *dst, struct kvm *src) +{ + int i; + struct kvm_vcpu *dst_vcpu, *src_vcpu; + struct vcpu_svm *dst_svm, *src_svm; + + if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus)) + return -EINVAL; + + kvm_for_each_vcpu(i, src_vcpu, src) { + if (!src_vcpu->arch.guest_state_protected) + return -EINVAL; + } + + kvm_for_each_vcpu(i, src_vcpu, src) { + src_svm = to_svm(src_vcpu); + dst_vcpu = kvm_get_vcpu(dst, i); + dst_svm = to_svm(dst_vcpu); + + /* + * Transfer VMSA and GHCB state to the destination. Nullify and + * clear source fields as appropriate, the state now belongs to + * the destination. + */ + dst_vcpu->vcpu_id = src_vcpu->vcpu_id; + dst_svm->sev_es = src_svm->sev_es; + dst_svm->vmcb->control.ghcb_gpa = + src_svm->vmcb->control.ghcb_gpa; + dst_svm->vmcb->control.vmsa_pa = __pa(dst_svm->sev_es.vmsa); + dst_vcpu->arch.guest_state_protected = true; + + memset(&src_svm->sev_es, 0, sizeof(src_svm->sev_es)); + src_svm->vmcb->control.ghcb_gpa = 0; + src_svm->vmcb->control.vmsa_pa = 0; + src_vcpu->arch.guest_state_protected = false; + } + to_kvm_svm(src)->sev_info.es_active = false; + to_kvm_svm(dst)->sev_info.es_active = true; + + return 0; +} + int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) { struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info; @@ -1633,7 +1675,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) if (ret) goto out_fput; - if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) { + if (!sev_guest(source_kvm)) { ret = -EINVAL; goto out_source; } @@ -1644,6 +1686,12 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) if (ret) goto out_source_vcpu; + if (sev_es_guest(source_kvm)) { + ret = sev_es_migrate_from(kvm, source_kvm); + if (ret) + goto out_source_vcpu; + } + sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info); kvm_for_each_vcpu (i, vcpu, source_kvm) { kvm_vcpu_reset(vcpu, /* init_event= */ false); From patchwork Thu Oct 21 17:43:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Gonda X-Patchwork-Id: 12575887 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DDEF8C433EF for ; Thu, 21 Oct 2021 17:46:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CA1056135F for ; Thu, 21 Oct 2021 17:46:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232500AbhJURtK (ORCPT ); Thu, 21 Oct 2021 13:49:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232609AbhJURra (ORCPT ); Thu, 21 Oct 2021 13:47:30 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by 
From patchwork Thu Oct 21 17:43:02 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12575887
Date: Thu, 21 Oct 2021 10:43:02 -0700
In-Reply-To: <20211021174303.385706-1-pgonda@google.com>
Message-Id: <20211021174303.385706-5-pgonda@google.com>
References: <20211021174303.385706-1-pgonda@google.com>
Subject: [PATCH V11 4/5] selftest: KVM: Add open sev dev helper
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, David Rientjes, Brijesh Singh, Tom Lendacky, Paolo Bonzini, linux-kernel@vger.kernel.org

Refactors the open path support out of open_kvm_dev_path_or_exit() and adds a new helper for the SEV device path.
Signed-off-by: Peter Gonda Suggested-by: Sean Christopherson Cc: Marc Orr Cc: Sean Christopherson Cc: David Rientjes Cc: Brijesh Singh Cc: Tom Lendacky Cc: Paolo Bonzini Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- .../testing/selftests/kvm/include/kvm_util.h | 1 + .../selftests/kvm/include/x86_64/svm_util.h | 2 ++ tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++-------- tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++++++++++ 4 files changed, 30 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 010b59b13917..368e88305046 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -80,6 +80,7 @@ struct vm_guest_mode_params { }; extern const struct vm_guest_mode_params vm_guest_mode_params[]; +int open_path_or_exit(const char *path, int flags); int open_kvm_dev_path_or_exit(void); int kvm_check_cap(long cap); int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap); diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h index b7531c83b8ae..587fbe408b99 100644 --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h @@ -46,4 +46,6 @@ static inline bool cpu_has_svm(void) return ecx & CPUID_SVM; } +int open_sev_dev_path_or_exit(void); + #endif /* SELFTEST_KVM_SVM_UTILS_H */ diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 10a8ed691c66..06a6c04010fb 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -31,6 +31,19 @@ static void *align(void *x, size_t size) return (void *) (((size_t) x + mask) & ~mask); } +int open_path_or_exit(const char *path, int flags) +{ + int fd; + + fd = open(path, flags); + if (fd < 0) { + print_skip("%s not available (errno: %d)", path, errno); + exit(KSFT_SKIP); + } + + return fd; +} + /* * Open KVM_DEV_PATH if available, otherwise exit the entire program. * @@ -42,16 +55,7 @@ static void *align(void *x, size_t size) */ static int _open_kvm_dev_path_or_exit(int flags) { - int fd; - - fd = open(KVM_DEV_PATH, flags); - if (fd < 0) { - print_skip("%s not available, is KVM loaded? (errno: %d)", - KVM_DEV_PATH, errno); - exit(KSFT_SKIP); - } - - return fd; + return open_path_or_exit(KVM_DEV_PATH, flags); } int open_kvm_dev_path_or_exit(void) diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c index 2ac98d70d02b..14a8618efa9c 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c @@ -13,6 +13,8 @@ #include "processor.h" #include "svm_util.h" +#define SEV_DEV_PATH "/dev/sev" + struct gpr64_regs guest_regs; u64 rflags; @@ -160,3 +162,14 @@ void nested_svm_check_supported(void) exit(KSFT_SKIP); } } + +/* + * Open SEV_DEV_PATH if available, otherwise exit the entire program. + * + * Return: + * The opened file descriptor of /dev/sev. 
+ */ +int open_sev_dev_path_or_exit(void) +{ + return open_path_or_exit(SEV_DEV_PATH, 0); +}
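A short sketch of how the new helper is consumed; this mirrors the sev_ioctl() wrapper introduced in the next patch (the surrounding snippet is illustrative, not part of this patch):

	/* Exits with KSFT_SKIP if /dev/sev is absent, like the /dev/kvm helper. */
	struct kvm_sev_cmd cmd = {
		.id = KVM_SEV_INIT,
		.sev_fd = open_sev_dev_path_or_exit(),
	};

	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

Routing both helpers through open_path_or_exit() keeps the skip-on-missing-device behavior identical for /dev/kvm and /dev/sev.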
From patchwork Thu Oct 21 17:43:03 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12575889
Date: Thu, 21 Oct 2021 10:43:03 -0700
In-Reply-To: <20211021174303.385706-1-pgonda@google.com>
Message-Id: <20211021174303.385706-6-pgonda@google.com>
References: <20211021174303.385706-1-pgonda@google.com>
Subject: [PATCH V11 5/5] selftest: KVM: Add intra host migration tests
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, David Rientjes, Brijesh Singh, Tom Lendacky, Paolo Bonzini, linux-kernel@vger.kernel.org

Adds test cases for intra host migration of SEV and SEV-ES VMs. Also adds a locking test to confirm no deadlock exists.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Cc: Marc Orr
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: Paolo Bonzini
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 tools/testing/selftests/kvm/Makefile      |   1 +
 .../selftests/kvm/x86_64/sev_vm_tests.c   | 203 ++++++++++++++++++
 2 files changed, 204 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index d1774f461393..c6b37d2045d9 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -72,7 +72,8 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests TEST_GEN_PROGS_x86_64 += access_tracking_perf_test TEST_GEN_PROGS_x86_64 += demand_paging_test TEST_GEN_PROGS_x86_64 += dirty_log_test TEST_GEN_PROGS_x86_64 += dirty_log_perf_test diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c new file mode 100644 index 000000000000..ec3bbc96e73a --- /dev/null +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c @@ -0,0 +1,203 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include <linux/kvm.h> +#include <linux/psp-sev.h> +#include <stdio.h> +#include <sys/ioctl.h> +#include <stdlib.h> +#include <errno.h> +#include <pthread.h> + +#include "test_util.h" +#include "kvm_util.h" +#include "processor.h" +#include "svm_util.h" +#include "kselftest.h" +#include "../lib/kvm_util_internal.h" + +#define SEV_POLICY_ES 0b100 + +#define NR_MIGRATE_TEST_VCPUS 4 +#define NR_MIGRATE_TEST_VMS 3 +#define NR_LOCK_TESTING_THREADS 3 +#define NR_LOCK_TESTING_ITERATIONS 10000 + +static void sev_ioctl(int vm_fd, int cmd_id, void *data) +{ + struct kvm_sev_cmd cmd = { + .id = cmd_id, + .data = (uint64_t)data, + .sev_fd = open_sev_dev_path_or_exit(), + }; + int ret; + + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd); + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS), + "%d failed: return code: %d, errno: %d, fw error: %d", + cmd_id, ret, errno, cmd.error); +} + +static struct kvm_vm *sev_vm_create(bool es) +{ + struct kvm_vm *vm; + struct kvm_sev_launch_start start = { 0 }; + int i; + + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + sev_ioctl(vm->fd, es ?
KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL); + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i) + vm_vcpu_add(vm, i); + if (es) + start.policy |= SEV_POLICY_ES; + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start); + if (es) + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL); + return vm; +} + +static struct kvm_vm *__vm_create(void) +{ + struct kvm_vm *vm; + int i; + + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i) + vm_vcpu_add(vm, i); + + return vm; +} + +static int __sev_migrate_from(int dst_fd, int src_fd) +{ + struct kvm_enable_cap cap = { + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM, + .args = { src_fd } + }; + + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap); +} + +static void sev_migrate_from(int dst_fd, int src_fd) +{ + int ret; + + ret = __sev_migrate_from(dst_fd, src_fd); + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno); +} + +static void test_sev_migrate_from(bool es) +{ + struct kvm_vm *src_vm; + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS]; + int i; + + src_vm = sev_vm_create(es); + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i) + dst_vms[i] = __vm_create(); + + /* Initial migration from the src to the first dst. */ + sev_migrate_from(dst_vms[0]->fd, src_vm->fd); + + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++) + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd); + + /* Migrate the guest back to the original VM. */ + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd); + + kvm_vm_free(src_vm); + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i) + kvm_vm_free(dst_vms[i]); +} + +struct locking_thread_input { + struct kvm_vm *vm; + int source_fds[NR_LOCK_TESTING_THREADS]; +}; + +static void *locking_test_thread(void *arg) +{ + int i, j; + struct locking_thread_input *input = (struct locking_thread_input *)arg; + + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) { + j = i % NR_LOCK_TESTING_THREADS; + __sev_migrate_from(input->vm->fd, input->source_fds[j]); + } + + return NULL; +} + +static void test_sev_migrate_locking(void) +{ + struct locking_thread_input input[NR_LOCK_TESTING_THREADS]; + pthread_t pt[NR_LOCK_TESTING_THREADS]; + int i; + + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) { + input[i].vm = sev_vm_create(/* es= */ false); + input[0].source_fds[i] = input[i].vm->fd; + } + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i) + memcpy(input[i].source_fds, input[0].source_fds, + sizeof(input[i].source_fds)); + + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]); + + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) + pthread_join(pt[i], NULL); +} + +static void test_sev_migrate_parameters(void) +{ + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev, + *sev_es_vm_no_vmsa; + int ret; + + sev_vm = sev_vm_create(/* es= */ false); + sev_es_vm = sev_vm_create(/* es= */ true); + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + vm_no_sev = __vm_create(); + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL); + vm_vcpu_add(sev_es_vm_no_vmsa, 1); + + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "Should not be able to migrate to SEV enabled VM. ret: %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "Should not be able to migrate to SEV-ES enabled VM.
ret: %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd); + TEST_ASSERT(ret == -1 && errno == EINVAL, + "Migrations require SEV enabled. ret %d, errno: %d\n", ret, + errno); +} + +int main(int argc, char *argv[]) +{ + test_sev_migrate_from(/* es= */ false); + test_sev_migrate_from(/* es= */ true); + test_sev_migrate_locking(); + test_sev_migrate_parameters(); + return 0; +}
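The locking test above drives migrations between several VMs from concurrent threads precisely because kvm->lock alone would permit an ABBA deadlock (VM A migrating from B while B migrates from A). It cannot deadlock thanks to the try-lock taken in patch 2 before kvm->lock; condensed from sev_lock_for_migration():

	if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
		return -EBUSY;	/* Lose the race: bail instead of blocking. */

	mutex_lock(&kvm->lock);

A contender that loses the cmpxchg backs off with -EBUSY rather than sleeping on the other VM's lock, which is also why locking_test_thread() deliberately ignores the return value of __sev_migrate_from().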