From patchwork Tue Oct 12 20:48:54 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12553645
Date: Tue, 12 Oct 2021 13:48:54 -0700
In-Reply-To: <20211012204858.3614961-1-pgonda@google.com>
Message-Id: <20211012204858.3614961-2-pgonda@google.com>
References: <20211012204858.3614961-1-pgonda@google.com>
Subject: [PATCH 1/5 V10] KVM: SEV: Refactor out sev_es_state struct
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda , Tom Lendacky , Marc Orr , Paolo Bonzini , Sean Christopherson , David Rientjes , "Dr .
David Alan Gilbert" , Brijesh Singh , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , linux-kernel@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move SEV-ES vCPU metadata into new sev_es_state struct from vcpu_svm. Signed-off-by: Peter Gonda Suggested-by: Tom Lendacky Cc: Marc Orr Cc: Paolo Bonzini Cc: Sean Christopherson Cc: David Rientjes Cc: Dr. David Alan Gilbert Cc: Brijesh Singh Cc: Tom Lendacky Cc: Vitaly Kuznetsov Cc: Wanpeng Li Cc: Jim Mattson Cc: Joerg Roedel Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Acked-by: Tom Lendacky --- arch/x86/kvm/svm/sev.c | 81 +++++++++++++++++++++--------------------- arch/x86/kvm/svm/svm.c | 8 ++--- arch/x86/kvm/svm/svm.h | 26 ++++++++------ 3 files changed, 60 insertions(+), 55 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 1e8b26b93b4f..d920677c1357 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -590,7 +590,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm) * traditional VMSA as it has been built so far (in prep * for LAUNCH_UPDATE_VMSA) to be the initial SEV-ES state. */ - memcpy(svm->vmsa, save, sizeof(*save)); + memcpy(svm->sev_es.vmsa, save, sizeof(*save)); return 0; } @@ -612,11 +612,11 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu, * the VMSA memory content (i.e it will write the same memory region * with the guest's key), so invalidate it first. */ - clflush_cache_range(svm->vmsa, PAGE_SIZE); + clflush_cache_range(svm->sev_es.vmsa, PAGE_SIZE); vmsa.reserved = 0; vmsa.handle = to_kvm_svm(kvm)->sev_info.handle; - vmsa.address = __sme_pa(svm->vmsa); + vmsa.address = __sme_pa(svm->sev_es.vmsa); vmsa.len = PAGE_SIZE; return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error); } @@ -2026,16 +2026,16 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu) svm = to_svm(vcpu); if (vcpu->arch.guest_state_protected) - sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE); - __free_page(virt_to_page(svm->vmsa)); + sev_flush_guest_memory(svm, svm->sev_es.vmsa, PAGE_SIZE); + __free_page(virt_to_page(svm->sev_es.vmsa)); - if (svm->ghcb_sa_free) - kfree(svm->ghcb_sa); + if (svm->sev_es.ghcb_sa_free) + kfree(svm->sev_es.ghcb_sa); } static void dump_ghcb(struct vcpu_svm *svm) { - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; unsigned int nbits; /* Re-use the dump_invalid_vmcb module parameter */ @@ -2061,7 +2061,7 @@ static void dump_ghcb(struct vcpu_svm *svm) static void sev_es_sync_to_ghcb(struct vcpu_svm *svm) { struct kvm_vcpu *vcpu = &svm->vcpu; - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; /* * The GHCB protocol so far allows for the following data @@ -2081,7 +2081,7 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm) { struct vmcb_control_area *control = &svm->vmcb->control; struct kvm_vcpu *vcpu = &svm->vcpu; - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; u64 exit_code; /* @@ -2128,7 +2128,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm) struct ghcb *ghcb; u64 exit_code = 0; - ghcb = svm->ghcb; + ghcb = svm->sev_es.ghcb; /* Only GHCB Usage code 0 is supported */ if (ghcb->ghcb_usage) @@ -2246,33 +2246,34 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm) void sev_es_unmap_ghcb(struct vcpu_svm *svm) { - if (!svm->ghcb) + if (!svm->sev_es.ghcb) return; - if 
(svm->ghcb_sa_free) { + if (svm->sev_es.ghcb_sa_free) { /* * The scratch area lives outside the GHCB, so there is a * buffer that, depending on the operation performed, may * need to be synced, then freed. */ - if (svm->ghcb_sa_sync) { + if (svm->sev_es.ghcb_sa_sync) { kvm_write_guest(svm->vcpu.kvm, - ghcb_get_sw_scratch(svm->ghcb), - svm->ghcb_sa, svm->ghcb_sa_len); - svm->ghcb_sa_sync = false; + ghcb_get_sw_scratch(svm->sev_es.ghcb), + svm->sev_es.ghcb_sa, + svm->sev_es.ghcb_sa_len); + svm->sev_es.ghcb_sa_sync = false; } - kfree(svm->ghcb_sa); - svm->ghcb_sa = NULL; - svm->ghcb_sa_free = false; + kfree(svm->sev_es.ghcb_sa); + svm->sev_es.ghcb_sa = NULL; + svm->sev_es.ghcb_sa_free = false; } - trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->ghcb); + trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->sev_es.ghcb); sev_es_sync_to_ghcb(svm); - kvm_vcpu_unmap(&svm->vcpu, &svm->ghcb_map, true); - svm->ghcb = NULL; + kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map, true); + svm->sev_es.ghcb = NULL; } void pre_sev_run(struct vcpu_svm *svm, int cpu) @@ -2302,7 +2303,7 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu) static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len) { struct vmcb_control_area *control = &svm->vmcb->control; - struct ghcb *ghcb = svm->ghcb; + struct ghcb *ghcb = svm->sev_es.ghcb; u64 ghcb_scratch_beg, ghcb_scratch_end; u64 scratch_gpa_beg, scratch_gpa_end; void *scratch_va; @@ -2338,7 +2339,7 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len) return false; } - scratch_va = (void *)svm->ghcb; + scratch_va = (void *)svm->sev_es.ghcb; scratch_va += (scratch_gpa_beg - control->ghcb_gpa); } else { /* @@ -2368,12 +2369,12 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len) * the vCPU next time (i.e. a read was requested so the data * must be written back to the guest memory). 
*/ - svm->ghcb_sa_sync = sync; - svm->ghcb_sa_free = true; + svm->sev_es.ghcb_sa_sync = sync; + svm->sev_es.ghcb_sa_free = true; } - svm->ghcb_sa = scratch_va; - svm->ghcb_sa_len = len; + svm->sev_es.ghcb_sa = scratch_va; + svm->sev_es.ghcb_sa_len = len; return true; } @@ -2492,15 +2493,15 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) return -EINVAL; } - if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->ghcb_map)) { + if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) { /* Unable to map GHCB from guest */ vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n", ghcb_gpa); return -EINVAL; } - svm->ghcb = svm->ghcb_map.hva; - ghcb = svm->ghcb_map.hva; + svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva; + ghcb = svm->sev_es.ghcb_map.hva; trace_kvm_vmgexit_enter(vcpu->vcpu_id, ghcb); @@ -2523,7 +2524,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) ret = kvm_sev_es_mmio_read(vcpu, control->exit_info_1, control->exit_info_2, - svm->ghcb_sa); + svm->sev_es.ghcb_sa); break; case SVM_VMGEXIT_MMIO_WRITE: if (!setup_vmgexit_scratch(svm, false, control->exit_info_2)) @@ -2532,7 +2533,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) ret = kvm_sev_es_mmio_write(vcpu, control->exit_info_1, control->exit_info_2, - svm->ghcb_sa); + svm->sev_es.ghcb_sa); break; case SVM_VMGEXIT_NMI_COMPLETE: ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_IRET); @@ -2583,7 +2584,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in) return -EINVAL; return kvm_sev_es_string_io(&svm->vcpu, size, port, - svm->ghcb_sa, svm->ghcb_sa_len, in); + svm->sev_es.ghcb_sa, svm->sev_es.ghcb_sa_len, in); } void sev_es_init_vmcb(struct vcpu_svm *svm) @@ -2598,7 +2599,7 @@ void sev_es_init_vmcb(struct vcpu_svm *svm) * VMCB page. Do not include the encryption mask on the VMSA physical * address since hardware will access it using the guest key. */ - svm->vmcb->control.vmsa_pa = __pa(svm->vmsa); + svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa); /* Can't intercept CR register access, HV can't modify CR registers */ svm_clr_intercept(svm, INTERCEPT_CR0_READ); @@ -2670,8 +2671,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) struct vcpu_svm *svm = to_svm(vcpu); /* First SIPI: Use the values as initially set by the VMM */ - if (!svm->received_first_sipi) { - svm->received_first_sipi = true; + if (!svm->sev_es.received_first_sipi) { + svm->sev_es.received_first_sipi = true; return; } @@ -2680,8 +2681,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a * non-zero value. 
*/ - if (!svm->ghcb) + if (!svm->sev_es.ghcb) return; - ghcb_set_sw_exit_info_2(svm->ghcb, 1); + ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1); } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 89077160d463..0396c2308a75 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1450,7 +1450,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) svm_switch_vmcb(svm, &svm->vmcb01); if (vmsa_page) - svm->vmsa = page_address(vmsa_page); + svm->sev_es.vmsa = page_address(vmsa_page); svm->guest_state_loaded = false; @@ -2833,11 +2833,11 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err) { struct vcpu_svm *svm = to_svm(vcpu); - if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->ghcb)) + if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->sev_es.ghcb)) return kvm_complete_insn_gp(vcpu, err); - ghcb_set_sw_exit_info_1(svm->ghcb, 1); - ghcb_set_sw_exit_info_2(svm->ghcb, + ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 1); + ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, X86_TRAP_GP | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 0d7bbe548ac3..80048841cad9 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -123,6 +123,20 @@ struct svm_nested_state { bool initialized; }; +struct vcpu_sev_es_state { + /* SEV-ES support */ + struct vmcb_save_area *vmsa; + struct ghcb *ghcb; + struct kvm_host_map ghcb_map; + bool received_first_sipi; + + /* SEV-ES scratch area support */ + void *ghcb_sa; + u64 ghcb_sa_len; + bool ghcb_sa_sync; + bool ghcb_sa_free; +}; + struct vcpu_svm { struct kvm_vcpu vcpu; /* vmcb always points at current_vmcb->ptr, it's purely a shorthand. 
 */
@@ -186,17 +200,7 @@ struct vcpu_svm {
 		DECLARE_BITMAP(write, MAX_DIRECT_ACCESS_MSRS);
 	} shadow_msr_intercept;
 
-	/* SEV-ES support */
-	struct vmcb_save_area *vmsa;
-	struct ghcb *ghcb;
-	struct kvm_host_map ghcb_map;
-	bool received_first_sipi;
-
-	/* SEV-ES scratch area support */
-	void *ghcb_sa;
-	u64 ghcb_sa_len;
-	bool ghcb_sa_sync;
-	bool ghcb_sa_free;
+	struct vcpu_sev_es_state sev_es;
 
 	bool guest_state_loaded;
 };
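Grouping the SEV-ES fields into a single struct is what makes the later
migration support cheap: all of a vCPU's SEV-ES state can then be handed
over wholesale. A minimal sketch of the idea, using only the vcpu_svm and
vcpu_sev_es_state definitions above (sev_es_migrate_from() in patch 3 below
uses exactly this shape):

	/*
	 * Illustrative only: with the fields grouped, moving a vCPU's
	 * SEV-ES state is one struct copy instead of ten field copies.
	 */
	static void move_sev_es_state(struct vcpu_svm *dst, struct vcpu_svm *src)
	{
		memcpy(&dst->sev_es, &src->sev_es, sizeof(dst->sev_es));
	}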
From patchwork Tue Oct 12 20:48:55 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12553647
Date: Tue, 12 Oct 2021 13:48:55 -0700
In-Reply-To: <20211012204858.3614961-1-pgonda@google.com>
Message-Id: <20211012204858.3614961-3-pgonda@google.com>
References: <20211012204858.3614961-1-pgonda@google.com>
Subject: [PATCH 2/5 V10] KVM: SEV: Add support for SEV intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda , Sean Christopherson , Marc Orr , Paolo Bonzini , David Rientjes , "Dr . David Alan Gilbert" , Brijesh Singh , Tom Lendacky , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , linux-kernel@vger.kernel.org

For SEV to work with intra host migration, contents of the SEV info struct
such as the ASID (used to index the encryption key in the AMD SP) and the
list of memory regions need to be transferred to the target VM. This change
adds a command for a target VMM to get a source SEV VM's sev info.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 Documentation/virt/kvm/api.rst  |  15 ++++
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/svm/sev.c          | 137 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |   1 +
 arch/x86/kvm/svm/svm.h          |   2 +
 arch/x86/kvm/x86.c              |   6 ++
 include/uapi/linux/kvm.h        |   1 +
 7 files changed, 163 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 3b093d6dbe22..d9797c6d4b1d 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6911,6 +6911,21 @@ MAP_SHARED mmap will result in an -EINVAL return.
 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.
 
+7.29 KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM
+-----------------------------------------
+
+Architectures: x86 SEV enabled
+Type: vm
+Parameters: args[0] is the fd of the source vm
+Returns: 0 on success
+
+This capability enables userspace to migrate the encryption context from the VM
+indicated by the fd to the VM this is called on.
+
+This is intended to support intra-host migration of VMs between userspace
+VMMs, allowing the VMM process to be upgraded without interrupting in-guest
+workloads scheduled by the host.
+
 8. Other capabilities.
====================== diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 88f0326c184a..a334e6b36309 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1481,6 +1481,7 @@ struct kvm_x86_ops { int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd); + int (*vm_migrate_protected_vm_from)(struct kvm *kvm, unsigned int source_fd); int (*get_msr_feature)(struct kvm_msr_entry *entry); diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index d920677c1357..42ff1ccfe1dc 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1524,6 +1524,143 @@ static bool cmd_allowed_from_miror(u32 cmd_id) return false; } +static int sev_lock_for_migration(struct kvm *kvm) +{ + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + + /* + * Bail if this VM is already involved in a migration to avoid deadlock + * between two VMs trying to migrate to/from each other. + */ + if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1)) + return -EBUSY; + + mutex_lock(&kvm->lock); + + return 0; +} + +static void sev_unlock_after_migration(struct kvm *kvm) +{ + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + + mutex_unlock(&kvm->lock); + atomic_set_release(&sev->migration_in_progress, 0); +} + + +static int sev_lock_vcpus_for_migration(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu; + int i, j; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (mutex_lock_killable(&vcpu->mutex)) + goto out_unlock; + } + + return 0; + +out_unlock: + kvm_for_each_vcpu(j, vcpu, kvm) { + if (i == j) + break; + + mutex_unlock(&vcpu->mutex); + } + return -EINTR; +} + +static void sev_unlock_vcpus_for_migration(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu; + int i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + mutex_unlock(&vcpu->mutex); + } +} + +static void sev_migrate_from(struct kvm_sev_info *dst, + struct kvm_sev_info *src) +{ + dst->active = true; + dst->asid = src->asid; + dst->misc_cg = src->misc_cg; + dst->handle = src->handle; + dst->pages_locked = src->pages_locked; + + src->asid = 0; + src->active = false; + src->handle = 0; + src->pages_locked = 0; + src->misc_cg = NULL; + + INIT_LIST_HEAD(&dst->regions_list); + list_replace_init(&src->regions_list, &dst->regions_list); +} + +int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) +{ + struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info; + struct file *source_kvm_file; + struct kvm *source_kvm; + struct kvm_vcpu *vcpu; + int i, ret; + + ret = sev_lock_for_migration(kvm); + if (ret) + return ret; + + if (sev_guest(kvm)) { + ret = -EINVAL; + goto out_unlock; + } + + source_kvm_file = fget(source_fd); + if (!file_is_kvm(source_kvm_file)) { + ret = -EBADF; + goto out_fput; + } + + source_kvm = source_kvm_file->private_data; + ret = sev_lock_for_migration(source_kvm); + if (ret) + goto out_fput; + + if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) { + ret = -EINVAL; + goto out_source; + } + ret = sev_lock_vcpus_for_migration(kvm); + if (ret) + goto out_dst_vcpu; + ret = sev_lock_vcpus_for_migration(source_kvm); + if (ret) + goto out_source_vcpu; + + sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info); + kvm_for_each_vcpu(i, vcpu, source_kvm) { + kvm_vcpu_reset(vcpu, /* init_event= */ false); + } + ret = 0; + +out_source_vcpu: + sev_unlock_vcpus_for_migration(source_kvm); + +out_dst_vcpu: + 
sev_unlock_vcpus_for_migration(kvm);
+
+out_source:
+	sev_unlock_after_migration(source_kvm);
+out_fput:
+	if (source_kvm_file)
+		fput(source_kvm_file);
+out_unlock:
+	sev_unlock_after_migration(kvm);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0396c2308a75..16e4db05a6f3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4697,6 +4697,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.mem_enc_unreg_region = svm_unregister_enc_region,
 
 	.vm_copy_enc_context_from = svm_vm_copy_asid_from,
+	.vm_migrate_protected_vm_from = svm_vm_migrate_from,
 
 	.can_emulate_instruction = svm_can_emulate_instruction,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 80048841cad9..d4eae06b0695 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -80,6 +80,7 @@ struct kvm_sev_info {
 	u64 ap_jump_table;	/* SEV-ES AP Jump Table address */
 	struct kvm *enc_context_owner; /* Owner of copied encryption context */
 	struct misc_cg *misc_cg; /* For misc cgroup accounting */
+	atomic_t migration_in_progress;
 };
 
 struct kvm_svm {
@@ -562,6 +563,7 @@ int svm_register_enc_region(struct kvm *kvm,
 int svm_unregister_enc_region(struct kvm *kvm,
 			      struct kvm_enc_region *range);
 int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd);
+int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd);
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_set_cpu_caps(void);
 void __init sev_hardware_setup(void);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 196ac33ef958..093deb784b6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5806,6 +5806,12 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (kvm_x86_ops.vm_copy_enc_context_from)
 			r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
 		return r;
+	case KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM:
+		r = -EINVAL;
+		if (kvm_x86_ops.vm_migrate_protected_vm_from)
+			r = kvm_x86_ops.vm_migrate_protected_vm_from(
+				kvm, cap->args[0]);
+		return r;
 	case KVM_CAP_EXIT_HYPERCALL:
 		if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
 			r = -EINVAL;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5ca5ffe16cb4..dabd143aad8f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1120,6 +1120,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_BINARY_STATS_FD 203
 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204
 #define KVM_CAP_ARM_MTE 205
+#define KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM 206
 
 #ifdef KVM_CAP_IRQ_ROUTING
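For reference, userspace triggers the migration with KVM_ENABLE_CAP on the
destination VM's file descriptor. A minimal sketch, assuming dst_vm_fd and
src_vm_fd are open KVM VM fds (patch 5's __sev_migrate_from() below wraps
the same ioctl):

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Returns 0 on success, -1 with errno set (EINVAL, EBUSY, ...). */
	static int migrate_enc_context(int dst_vm_fd, int src_vm_fd)
	{
		struct kvm_enable_cap cap;

		memset(&cap, 0, sizeof(cap));
		cap.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM;
		cap.args[0] = src_vm_fd;	/* fd of the source VM */

		return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
	}

On failure the source VM is left intact; -EBUSY in particular means another
migration involving one of the two VMs is already in flight (the
migration_in_progress cmpxchg above).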
From patchwork Tue Oct 12 20:48:56 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12553649
Date: Tue, 12 Oct 2021 13:48:56 -0700
In-Reply-To: <20211012204858.3614961-1-pgonda@google.com>
Message-Id: <20211012204858.3614961-4-pgonda@google.com>
References: <20211012204858.3614961-1-pgonda@google.com>
Subject: [PATCH 3/5 V10] KVM: SEV: Add support for SEV-ES intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda , Marc Orr , Paolo Bonzini , Sean Christopherson , David Rientjes , "Dr . David Alan Gilbert" , Brijesh Singh , Tom Lendacky , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , linux-kernel@vger.kernel.org

For SEV-ES to work with intra host migration, the VMSAs, GHCB metadata, and
other SEV-ES info need to be preserved along with the guest's memory.

Signed-off-by: Peter Gonda
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/kvm/svm/sev.c | 48 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 42ff1ccfe1dc..a486ab08a766 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1600,6 +1600,46 @@ static void sev_migrate_from(struct kvm_sev_info *dst,
 	list_replace_init(&src->regions_list, &dst->regions_list);
 }
 
+static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
+{
+	int i;
+	struct kvm_vcpu *dst_vcpu, *src_vcpu;
+	struct vcpu_svm *dst_svm, *src_svm;
+
+	if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
+		return -EINVAL;
+
+	kvm_for_each_vcpu(i, src_vcpu, src) {
+		if (!src_vcpu->arch.guest_state_protected)
+			return -EINVAL;
+	}
+
+	kvm_for_each_vcpu(i, src_vcpu, src) {
+		src_svm = to_svm(src_vcpu);
+		dst_vcpu = kvm_get_vcpu(dst, i);
+		dst_svm = to_svm(dst_vcpu);
+
+		/*
+		 * Transfer VMSA and GHCB state to the destination.  Nullify and
+		 * clear source fields as appropriate, the state now belongs to
+		 * the destination.
+		 */
+		dst_vcpu->vcpu_id = src_vcpu->vcpu_id;
+		memcpy(&dst_svm->sev_es, &src_svm->sev_es,
+		       sizeof(dst_svm->sev_es));
+		dst_svm->vmcb->control.ghcb_gpa =
+			src_svm->vmcb->control.ghcb_gpa;
+		dst_svm->vmcb->control.vmsa_pa = __pa(dst_svm->sev_es.vmsa);
+		dst_vcpu->arch.guest_state_protected = true;
+		src_svm->vmcb->control.ghcb_gpa = 0;
+		src_svm->vmcb->control.vmsa_pa = 0;
+		src_vcpu->arch.guest_state_protected = false;
+	}
+	to_kvm_svm(src)->sev_info.es_active = false;
+
+	return 0;
+}
+
 int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
 {
 	struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
@@ -1628,7 +1668,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
 	if (ret)
 		goto out_fput;
 
-	if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) {
+	if (!sev_guest(source_kvm)) {
 		ret = -EINVAL;
 		goto out_source;
 	}
@@ -1639,6 +1679,12 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
 	if (ret)
 		goto out_source_vcpu;
 
+	if (sev_es_guest(source_kvm)) {
+		ret = sev_es_migrate_from(kvm, source_kvm);
+		if (ret)
+			goto out_source_vcpu;
+	}
+
 	sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info);
 	kvm_for_each_vcpu (i, vcpu, source_kvm) {
 		kvm_vcpu_reset(vcpu, /* init_event= */ false);
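Note the ordering contract the code above imposes on userspace: the
destination must already have the same number of online vCPUs as the source,
must not itself be an SEV VM, and every source vCPU must have protected
(post LAUNCH_UPDATE_VMSA) state. A rough destination-side sketch; create_vm()
and create_vcpu() are hypothetical stand-ins for the VMM's usual KVM setup,
and migrate_enc_context() is the wrapper sketched after patch 2:

	static int prepare_sev_es_migration_target(int src_vm_fd, int nr_src_vcpus)
	{
		int i, dst_vm_fd = create_vm();	/* hypothetical helper */

		/* Must match the source's online vCPU count exactly. */
		for (i = 0; i < nr_src_vcpus; i++)
			create_vcpu(dst_vm_fd, i);	/* hypothetical helper */

		/* No KVM_SEV_INIT here: the destination must not be SEV yet. */
		return migrate_enc_context(dst_vm_fd, src_vm_fd);
	}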
From patchwork Tue Oct 12 20:48:57 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12553651
Date: Tue, 12 Oct 2021 13:48:57 -0700
In-Reply-To: <20211012204858.3614961-1-pgonda@google.com>
Message-Id: <20211012204858.3614961-5-pgonda@google.com>
References: <20211012204858.3614961-1-pgonda@google.com>
Subject: [PATCH 4/5 V10] selftest: KVM: Add open sev dev helper
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda , Sean Christopherson , Marc Orr , David Rientjes , Brijesh Singh , Tom Lendacky , linux-kernel@vger.kernel.org

Refactors the open path support out of open_kvm_dev_path_or_exit() and adds
a new helper for the SEV device path.
Signed-off-by: Peter Gonda Suggested-by: Sean Christopherson Reviewed-by: Marc Orr Cc: Marc Orr Cc: Sean Christopherson Cc: David Rientjes Cc: Brijesh Singh Cc: Tom Lendacky Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- .../testing/selftests/kvm/include/kvm_util.h | 1 + .../selftests/kvm/include/x86_64/svm_util.h | 2 ++ tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++-------- tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++++++++++ 4 files changed, 30 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 1b3ef5757819..adf4fa274808 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -82,6 +82,7 @@ struct vm_guest_mode_params { }; extern const struct vm_guest_mode_params vm_guest_mode_params[]; +int open_path_or_exit(const char *path, int flags); int open_kvm_dev_path_or_exit(void); int kvm_check_cap(long cap); int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap); diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h index b7531c83b8ae..587fbe408b99 100644 --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h @@ -46,4 +46,6 @@ static inline bool cpu_has_svm(void) return ecx & CPUID_SVM; } +int open_sev_dev_path_or_exit(void); + #endif /* SELFTEST_KVM_SVM_UTILS_H */ diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 0fe66ca6139a..ea88e6b14670 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -31,6 +31,19 @@ static void *align(void *x, size_t size) return (void *) (((size_t) x + mask) & ~mask); } +int open_path_or_exit(const char *path, int flags) +{ + int fd; + + fd = open(path, flags); + if (fd < 0) { + print_skip("%s not available (errno: %d)", path, errno); + exit(KSFT_SKIP); + } + + return fd; +} + /* * Open KVM_DEV_PATH if available, otherwise exit the entire program. * @@ -42,16 +55,7 @@ static void *align(void *x, size_t size) */ static int _open_kvm_dev_path_or_exit(int flags) { - int fd; - - fd = open(KVM_DEV_PATH, flags); - if (fd < 0) { - print_skip("%s not available, is KVM loaded? (errno: %d)", - KVM_DEV_PATH, errno); - exit(KSFT_SKIP); - } - - return fd; + return open_path_or_exit(KVM_DEV_PATH, flags); } int open_kvm_dev_path_or_exit(void) diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c index 2ac98d70d02b..14a8618efa9c 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c @@ -13,6 +13,8 @@ #include "processor.h" #include "svm_util.h" +#define SEV_DEV_PATH "/dev/sev" + struct gpr64_regs guest_regs; u64 rflags; @@ -160,3 +162,14 @@ void nested_svm_check_supported(void) exit(KSFT_SKIP); } } + +/* + * Open SEV_DEV_PATH if available, otherwise exit the entire program. + * + * Return: + * The opened file descriptor of /dev/sev. 
+ */
+int open_sev_dev_path_or_exit(void)
+{
+	return open_path_or_exit(SEV_DEV_PATH, 0);
+}
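A selftest consumes the new helper wherever it needs an SEV device fd;
patch 5's sev_ioctl() below does essentially the following for every SEV
command it issues. A minimal sketch using the selftest helpers shown in
this series:

	/* Initialize SEV on a VM fd, skipping the test if /dev/sev is absent. */
	static void sev_init_vm(int vm_fd)
	{
		struct kvm_sev_cmd cmd = {
			.id = KVM_SEV_INIT,
			.sev_fd = open_sev_dev_path_or_exit(),
		};
		int ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

		TEST_ASSERT(!ret, "KVM_SEV_INIT failed, ret: %d, fw error: %d",
			    ret, cmd.error);
	}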
From patchwork Tue Oct 12 20:48:58 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12553653
Date: Tue, 12 Oct 2021 13:48:58 -0700
In-Reply-To: <20211012204858.3614961-1-pgonda@google.com>
Message-Id: <20211012204858.3614961-6-pgonda@google.com>
References: <20211012204858.3614961-1-pgonda@google.com>
Subject: [PATCH 5/5 V10] selftest: KVM: Add intra host migration tests
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda , Sean Christopherson , Marc Orr , David Rientjes , Brijesh Singh , Tom Lendacky , linux-kernel@vger.kernel.org

Adds test cases for intra host migration for SEV and SEV-ES. Also adds a
locking test to confirm no deadlock exists.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 tools/testing/selftests/kvm/Makefile      |   1 +
 .../selftests/kvm/x86_64/sev_vm_tests.c   | 203 ++++++++++++++++++++++
 2 files changed, 204 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index fd20f271aac0..e7c218bfa25e 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -73,7 +73,8 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
new file mode 100644
index 000000000000..ec3bbc96e73a
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+#include "../lib/kvm_util_internal.h"
+
+#define SEV_POLICY_ES 0b100
+
+#define NR_MIGRATE_TEST_VCPUS 4
+#define NR_MIGRATE_TEST_VMS 3
+#define NR_LOCK_TESTING_THREADS 3
+#define NR_LOCK_TESTING_ITERATIONS 10000
+
+static void sev_ioctl(int vm_fd, int cmd_id, void *data)
+{
+	struct kvm_sev_cmd cmd = {
+		.id = cmd_id,
+		.data = (uint64_t)data,
+		.sev_fd = open_sev_dev_path_or_exit(),
+	};
+	int ret;
+
+	ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+	TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
+		    "%d failed: return code: %d, errno: %d, fw error: %d",
+		    cmd_id, ret, errno, cmd.error);
+}
+
+static struct kvm_vm *sev_vm_create(bool es)
+{
+	struct kvm_vm *vm;
+	struct kvm_sev_launch_start start = { 0 };
+	int i;
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
+	for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
+		vm_vcpu_add(vm, i);
+	if (es)
+		start.policy |= SEV_POLICY_ES;
+	sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
+	if (es)
+		sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
+	return vm;
+}
+
+static struct kvm_vm *__vm_create(void)
+{
+	struct kvm_vm *vm;
+	int i;
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
+		vm_vcpu_add(vm, i);
+
+	return vm;
+}
+
+static int __sev_migrate_from(int dst_fd, int src_fd)
+{
+	struct kvm_enable_cap cap = {
+		.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
+		.args = { src_fd }
+	};
+
+	return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
+}
+
+static void sev_migrate_from(int dst_fd, int src_fd)
+{
+	int ret;
+
+	ret = __sev_migrate_from(dst_fd, src_fd);
+	TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
+}
+
+static void test_sev_migrate_from(bool es)
+{
+	struct kvm_vm *src_vm;
+	struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
+	int i;
+
+	src_vm = sev_vm_create(es);
+	for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+		dst_vms[i] = __vm_create();
+
+	/* Initial migration from the src to the first dst. */
+	sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
+
+	for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
+		sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
+
+	/* Migrate the guest back to the original VM. */
+	sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
+
+	kvm_vm_free(src_vm);
+	for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+		kvm_vm_free(dst_vms[i]);
+}
+
+struct locking_thread_input {
+	struct kvm_vm *vm;
+	int source_fds[NR_LOCK_TESTING_THREADS];
+};
+
+static void *locking_test_thread(void *arg)
+{
+	int i, j;
+	struct locking_thread_input *input = (struct locking_thread_input *)arg;
+
+	for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
+		j = i % NR_LOCK_TESTING_THREADS;
+		__sev_migrate_from(input->vm->fd, input->source_fds[j]);
+	}
+
+	return NULL;
+}
+
+static void test_sev_migrate_locking(void)
+{
+	struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
+	pthread_t pt[NR_LOCK_TESTING_THREADS];
+	int i;
+
+	for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
+		input[i].vm = sev_vm_create(/* es= */ false);
+		input[0].source_fds[i] = input[i].vm->fd;
+	}
+	for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
+		memcpy(input[i].source_fds, input[0].source_fds,
+		       sizeof(input[i].source_fds));
+
+	for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
+		pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
+
+	for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
+		pthread_join(pt[i], NULL);
+}
+
+static void test_sev_migrate_parameters(void)
+{
+	struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
+		*sev_es_vm_no_vmsa;
+	int ret;
+
+	sev_vm = sev_vm_create(/* es= */ false);
+	sev_es_vm = sev_vm_create(/* es= */ true);
+	vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	vm_no_sev = __vm_create();
+	sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
+	vm_vcpu_add(sev_es_vm_no_vmsa, 1);
+
+	ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
+	TEST_ASSERT(
+		ret == -1 && errno == EINVAL,
+		"Should not be able to migrate to SEV enabled VM. ret: %d, errno: %d\n",
+		ret, errno);
+
+	ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
+	TEST_ASSERT(
+		ret == -1 && errno == EINVAL,
+		"Should not be able to migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
+		ret, errno);
+
+	ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
+	TEST_ASSERT(
+		ret == -1 && errno == EINVAL,
+		"SEV-ES migrations require same number of vCPUs. ret: %d, errno: %d\n",
+		ret, errno);
+
+	ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
+	TEST_ASSERT(
+		ret == -1 && errno == EINVAL,
+		"SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
+		ret, errno);
+
+	ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
+	TEST_ASSERT(ret == -1 && errno == EINVAL,
+		    "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
+		    errno);
+}
+
+int main(int argc, char *argv[])
+{
+	test_sev_migrate_from(/* es= */ false);
+	test_sev_migrate_from(/* es= */ true);
+	test_sev_migrate_locking();
+	test_sev_migrate_parameters();
+	return 0;
+}
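One point worth spelling out about the locking test above:
locking_test_thread() deliberately ignores the return value of
__sev_migrate_from(), because racing migrations are allowed to fail; the
test only has to prove that they never deadlock. A stricter variant, not
part of the patch, could also check why a racing call failed:

	static void *checked_locking_test_thread(void *arg)
	{
		struct locking_thread_input *input = arg;
		int i, j, ret;

		for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
			j = i % NR_LOCK_TESTING_THREADS;
			ret = __sev_migrate_from(input->vm->fd,
						 input->source_fds[j]);
			/*
			 * Racing migrations may fail, but only with EBUSY (the
			 * migration_in_progress lock is held) or EINVAL (the
			 * source already gave its context away, or the
			 * destination already has one).
			 */
			TEST_ASSERT(!ret || errno == EBUSY || errno == EINVAL,
				    "bad migration failure, ret: %d, errno: %d",
				    ret, errno);
		}

		return NULL;
	}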