From patchwork Tue Oct 5 14:13:54 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12536887
Date: Tue, 5 Oct 2021 07:13:54 -0700
In-Reply-To: <20211005141357.2393627-1-pgonda@google.com>
Message-Id: <20211005141357.2393627-2-pgonda@google.com>
References: <20211005141357.2393627-1-pgonda@google.com>
Subject: [PATCH 1/4 V9] KVM: SEV: Add support for SEV intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, Paolo Bonzini,
    David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org
David Alan Gilbert" , Brijesh Singh , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , linux-kernel@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org For SEV to work with intra host migration, contents of the SEV info struct such as the ASID (used to index the encryption key in the AMD SP) and the list of memory regions need to be transferred to the target VM. This change adds a commands for a target VMM to get a source SEV VM's sev info. Signed-off-by: Peter Gonda Suggested-by: Sean Christopherson Reviewed-by: Marc Orr Cc: Marc Orr Cc: Paolo Bonzini Cc: Sean Christopherson Cc: David Rientjes Cc: Dr. David Alan Gilbert Cc: Brijesh Singh Cc: Vitaly Kuznetsov Cc: Wanpeng Li Cc: Jim Mattson Cc: Joerg Roedel Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- Documentation/virt/kvm/api.rst | 15 ++++ arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/svm/sev.c | 137 ++++++++++++++++++++++++++++++++ arch/x86/kvm/svm/svm.c | 1 + arch/x86/kvm/svm/svm.h | 2 + arch/x86/kvm/x86.c | 6 ++ include/uapi/linux/kvm.h | 1 + 7 files changed, 163 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 3b093d6dbe22..d9797c6d4b1d 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6911,6 +6911,21 @@ MAP_SHARED mmap will result in an -EINVAL return. When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to perform a bulk copy of tags to/from the guest. +7.29 KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM +------------------------------------- + +Architectures: x86 SEV enabled +Type: vm +Parameters: args[0] is the fd of the source vm +Returns: 0 on success + +This capability enables userspace to migrate the encryption context from the VM +indicated by the fd to the VM this is called on. + +This is intended to support intra-host migration of VMs between userspace VMMs. +in-guest workloads scheduled by the host. This allows for upgrading the VMM +process without interrupting the guest. + 8. Other capabilities. ====================== diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 88f0326c184a..a334e6b36309 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1481,6 +1481,7 @@ struct kvm_x86_ops { int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd); + int (*vm_migrate_protected_vm_from)(struct kvm *kvm, unsigned int source_fd); int (*get_msr_feature)(struct kvm_msr_entry *entry); diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 69f1155ae45f..8cf6c475f866 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1524,6 +1524,143 @@ static bool cmd_allowed_from_miror(u32 cmd_id) return false; } +static int sev_lock_for_migration(struct kvm *kvm) +{ + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + + /* + * Bail if this VM is already involved in a migration to avoid deadlock + * between two VMs trying to migrate to/from each other. 
+ */ + if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1)) + return -EBUSY; + + mutex_lock(&kvm->lock); + + return 0; +} + +static void sev_unlock_after_migration(struct kvm *kvm) +{ + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + + mutex_unlock(&kvm->lock); + atomic_set_release(&sev->migration_in_progress, 0); +} + + +static int sev_lock_vcpus_for_migration(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu; + int i, j; + + kvm_for_each_vcpu(i, vcpu, kvm) { + if (mutex_lock_killable(&vcpu->mutex)) + goto out_unlock; + } + + return 0; + +out_unlock: + kvm_for_each_vcpu(j, vcpu, kvm) { + if (i == j) + break; + + mutex_unlock(&vcpu->mutex); + } + return -EINTR; +} + +static void sev_unlock_vcpus_for_migration(struct kvm *kvm) +{ + struct kvm_vcpu *vcpu; + int i; + + kvm_for_each_vcpu(i, vcpu, kvm) { + mutex_unlock(&vcpu->mutex); + } +} + +static void sev_migrate_from(struct kvm_sev_info *dst, + struct kvm_sev_info *src) +{ + dst->active = true; + dst->asid = src->asid; + dst->misc_cg = src->misc_cg; + dst->handle = src->handle; + dst->pages_locked = src->pages_locked; + + src->asid = 0; + src->active = false; + src->handle = 0; + src->pages_locked = 0; + src->misc_cg = NULL; + + INIT_LIST_HEAD(&dst->regions_list); + list_replace_init(&src->regions_list, &dst->regions_list); +} + +int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) +{ + struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info; + struct file *source_kvm_file; + struct kvm *source_kvm; + struct kvm_vcpu *vcpu; + int i, ret; + + ret = sev_lock_for_migration(kvm); + if (ret) + return ret; + + if (sev_guest(kvm)) { + ret = -EINVAL; + goto out_unlock; + } + + source_kvm_file = fget(source_fd); + if (!file_is_kvm(source_kvm_file)) { + ret = -EBADF; + goto out_fput; + } + + source_kvm = source_kvm_file->private_data; + ret = sev_lock_for_migration(source_kvm); + if (ret) + goto out_fput; + + if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) { + ret = -EINVAL; + goto out_source; + } + ret = sev_lock_vcpus_for_migration(kvm); + if (ret) + goto out_dst_vcpu; + ret = sev_lock_vcpus_for_migration(source_kvm); + if (ret) + goto out_source_vcpu; + + sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info); + kvm_for_each_vcpu(i, vcpu, source_kvm) { + kvm_vcpu_reset(vcpu, /* init_event= */ false); + } + ret = 0; + +out_source_vcpu: + sev_unlock_vcpus_for_migration(source_kvm); + +out_dst_vcpu: + sev_unlock_vcpus_for_migration(kvm); + +out_source: + sev_unlock_after_migration(source_kvm); +out_fput: + if (source_kvm_file) + fput(source_kvm_file); +out_unlock: + sev_unlock_after_migration(kvm); + return ret; +} + int svm_mem_enc_op(struct kvm *kvm, void __user *argp) { struct kvm_sev_cmd sev_cmd; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 89077160d463..1bda39844773 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4697,6 +4697,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .mem_enc_unreg_region = svm_unregister_enc_region, .vm_copy_enc_context_from = svm_vm_copy_asid_from, + .vm_migrate_protected_vm_from = svm_vm_migrate_from, .can_emulate_instruction = svm_can_emulate_instruction, diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 0d7bbe548ac3..064e7c7f2834 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -80,6 +80,7 @@ struct kvm_sev_info { u64 ap_jump_table; /* SEV-ES AP Jump Table address */ struct kvm *enc_context_owner; /* Owner of copied encryption context */ struct misc_cg *misc_cg; /* For misc cgroup 
accounting */
+	atomic_t migration_in_progress;
 };
 
 struct kvm_svm {
@@ -558,6 +559,7 @@ int svm_register_enc_region(struct kvm *kvm,
 int svm_unregister_enc_region(struct kvm *kvm,
			      struct kvm_enc_region *range);
 int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd);
+int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd);
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_set_cpu_caps(void);
 void __init sev_hardware_setup(void);

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 196ac33ef958..093deb784b6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5806,6 +5806,12 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
		if (kvm_x86_ops.vm_copy_enc_context_from)
			r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
		return r;
+	case KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM:
+		r = -EINVAL;
+		if (kvm_x86_ops.vm_migrate_protected_vm_from)
+			r = kvm_x86_ops.vm_migrate_protected_vm_from(
+				kvm, cap->args[0]);
+		return r;
	case KVM_CAP_EXIT_HYPERCALL:
		if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
			r = -EINVAL;

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5ca5ffe16cb4..dabd143aad8f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1120,6 +1120,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_BINARY_STATS_FD 203
 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204
 #define KVM_CAP_ARM_MTE 205
+#define KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM 206
 
 #ifdef KVM_CAP_IRQ_ROUTING
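For reference, here is a minimal userspace sketch (not part of this series) of how a destination VMM could invoke the new capability. dst_vm_fd and src_vm_fd are placeholder file descriptors for the destination and source VM, and the call mirrors the __sev_migrate_from() selftest helper added in patch 4/4:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: ask KVM to move the SEV encryption context (ASID, handle,
 * pinned-region list, ...) from src_vm_fd into the VM behind dst_vm_fd.
 */
static int migrate_sev_context(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
		.args = { src_vm_fd },
	};

	/* Returns 0 on success, -1 with errno (e.g. EINVAL, EBUSY) on failure. */
	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}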
From patchwork Tue Oct 5 14:13:55 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12536885
Date: Tue, 5 Oct 2021 07:13:55 -0700
In-Reply-To: <20211005141357.2393627-1-pgonda@google.com>
Message-Id: <20211005141357.2393627-3-pgonda@google.com>
References: <20211005141357.2393627-1-pgonda@google.com>
Subject: [PATCH 2/4 V9] KVM: SEV: Add support for SEV-ES intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Marc Orr, Paolo Bonzini, Sean Christopherson,
    David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

For SEV-ES to work with intra host migration, the VMSAs, GHCB metadata,
and other SEV-ES info need to be preserved along with the guest's memory.

Signed-off-by: Peter Gonda
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/kvm/svm/sev.c | 53 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6fc1935b52ea..321b55654f36 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1576,6 +1576,51 @@ static void sev_migrate_from(struct kvm_sev_info *dst,
	list_replace_init(&src->regions_list, &dst->regions_list);
 }
 
+static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
+{
+	int i;
+	struct kvm_vcpu *dst_vcpu, *src_vcpu;
+	struct vcpu_svm *dst_svm, *src_svm;
+
+	if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
+		return -EINVAL;
+
+	kvm_for_each_vcpu(i, src_vcpu, src) {
+		if (!src_vcpu->arch.guest_state_protected)
+			return -EINVAL;
+	}
+
+	kvm_for_each_vcpu(i, src_vcpu, src) {
+		src_svm = to_svm(src_vcpu);
+		dst_vcpu = dst->vcpus[i];
+		dst_vcpu = kvm_get_vcpu(dst, i);
+		dst_svm = to_svm(dst_vcpu);
+
+		/*
+		 * Transfer VMSA and GHCB state to the destination. Nullify and
+		 * clear source fields as appropriate, the state now belongs to
+		 * the destination.
+		 */
+		dst_vcpu->vcpu_id = src_vcpu->vcpu_id;
+		dst_svm->vmsa = src_svm->vmsa;
+		src_svm->vmsa = NULL;
+		dst_svm->ghcb = src_svm->ghcb;
+		src_svm->ghcb = NULL;
+		dst_svm->vmcb->control.ghcb_gpa = src_svm->vmcb->control.ghcb_gpa;
+		dst_svm->ghcb_sa = src_svm->ghcb_sa;
+		src_svm->ghcb_sa = NULL;
+		dst_svm->ghcb_sa_len = src_svm->ghcb_sa_len;
+		src_svm->ghcb_sa_len = 0;
+		dst_svm->ghcb_sa_sync = src_svm->ghcb_sa_sync;
+		src_svm->ghcb_sa_sync = false;
+		dst_svm->ghcb_sa_free = src_svm->ghcb_sa_free;
+		src_svm->ghcb_sa_free = false;
+	}
+	to_kvm_svm(src)->sev_info.es_active = false;
+
+	return 0;
+}
+
 int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
 {
	struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
@@ -1604,7 +1649,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
	if (ret)
		goto out_fput;
 
-	if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) {
+	if (!sev_guest(source_kvm)) {
		ret = -EINVAL;
		goto out_source;
	}
@@ -1615,6 +1660,12 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
	if (ret)
		goto out_source_vcpu;
 
+	if (sev_es_guest(source_kvm)) {
+		ret = sev_es_migrate_from(kvm, source_kvm);
+		if (ret)
+			goto out_source_vcpu;
+	}
+
	sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info);
	kvm_for_each_vcpu (i, vcpu, source_kvm) {
		kvm_vcpu_reset(vcpu, /* init_event= */ false);
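To illustrate the SEV-ES constraints enforced above, a rough userspace sketch (not part of this series): the destination VM must have the same number of vCPUs created as the source, and every source vCPU must already have gone through KVM_SEV_LAUNCH_UPDATE_VMSA, otherwise the ioctl fails with EINVAL. nr_vcpus, dst_vm_fd and src_vm_fd are placeholders, and a real VMM would keep the returned vCPU fds and do its normal memory/vCPU setup as well:

/* Sketch only: prepare a destination VM for an SEV-ES intra-host migration. */
static int prepare_and_migrate_sev_es(int dst_vm_fd, int src_vm_fd, int nr_vcpus)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
		.args = { src_vm_fd },
	};
	int i;

	/* Match the source's online_vcpus count before migrating. */
	for (i = 0; i < nr_vcpus; i++) {
		int vcpu_fd = ioctl(dst_vm_fd, KVM_CREATE_VCPU, i);

		if (vcpu_fd < 0)
			return -1;
		/* A real VMM would retain vcpu_fd; ignored in this sketch. */
	}

	/* Fails with EINVAL if the source was not launched with UPDATE_VMSA. */
	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}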
From patchwork Tue Oct 5 14:13:56 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12536889
Date: Tue, 5 Oct 2021 07:13:56 -0700
In-Reply-To: <20211005141357.2393627-1-pgonda@google.com>
Message-Id: <20211005141357.2393627-4-pgonda@google.com>
References: <20211005141357.2393627-1-pgonda@google.com>
Subject: [PATCH 3/4 V9] selftest: KVM: Add open sev dev helper
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, Paolo Bonzini,
    David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

Refactor the open-path support out of open_kvm_dev_path_or_exit() and add
a new helper for opening the SEV device path.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H.
Peter Anvin" Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- .../testing/selftests/kvm/include/kvm_util.h | 1 + .../selftests/kvm/include/x86_64/svm_util.h | 2 ++ tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++-------- tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++++++++++ 4 files changed, 30 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 010b59b13917..368e88305046 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -80,6 +80,7 @@ struct vm_guest_mode_params { }; extern const struct vm_guest_mode_params vm_guest_mode_params[]; +int open_path_or_exit(const char *path, int flags); int open_kvm_dev_path_or_exit(void); int kvm_check_cap(long cap); int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap); diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h index b7531c83b8ae..587fbe408b99 100644 --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h @@ -46,4 +46,6 @@ static inline bool cpu_has_svm(void) return ecx & CPUID_SVM; } +int open_sev_dev_path_or_exit(void); + #endif /* SELFTEST_KVM_SVM_UTILS_H */ diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 10a8ed691c66..06a6c04010fb 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -31,6 +31,19 @@ static void *align(void *x, size_t size) return (void *) (((size_t) x + mask) & ~mask); } +int open_path_or_exit(const char *path, int flags) +{ + int fd; + + fd = open(path, flags); + if (fd < 0) { + print_skip("%s not available (errno: %d)", path, errno); + exit(KSFT_SKIP); + } + + return fd; +} + /* * Open KVM_DEV_PATH if available, otherwise exit the entire program. * @@ -42,16 +55,7 @@ static void *align(void *x, size_t size) */ static int _open_kvm_dev_path_or_exit(int flags) { - int fd; - - fd = open(KVM_DEV_PATH, flags); - if (fd < 0) { - print_skip("%s not available, is KVM loaded? (errno: %d)", - KVM_DEV_PATH, errno); - exit(KSFT_SKIP); - } - - return fd; + return open_path_or_exit(KVM_DEV_PATH, flags); } int open_kvm_dev_path_or_exit(void) diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c index 2ac98d70d02b..14a8618efa9c 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c @@ -13,6 +13,8 @@ #include "processor.h" #include "svm_util.h" +#define SEV_DEV_PATH "/dev/sev" + struct gpr64_regs guest_regs; u64 rflags; @@ -160,3 +162,14 @@ void nested_svm_check_supported(void) exit(KSFT_SKIP); } } + +/* + * Open SEV_DEV_PATH if available, otherwise exit the entire program. + * + * Return: + * The opened file descriptor of /dev/sev. 
+ */
+int open_sev_dev_path_or_exit(void)
+{
+	return open_path_or_exit(SEV_DEV_PATH, 0);
+}
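For context, a rough sketch (not part of this series) of how a selftest can combine the new helper with KVM's memory-encryption ioctl; it anticipates the sev_ioctl() helper added in the next patch and assumes the usual selftest harness headers (test_util.h, linux/kvm.h, linux/psp-sev.h):

/* Sketch only: issue KVM_SEV_INIT for a VM, skipping the test if /dev/sev is absent. */
static void sev_vm_init(int vm_fd)
{
	struct kvm_sev_cmd cmd = {
		.id = KVM_SEV_INIT,
		.sev_fd = open_sev_dev_path_or_exit(),
	};
	int ret;

	ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
	TEST_ASSERT(!ret && cmd.error == SEV_RET_SUCCESS,
		    "KVM_SEV_INIT failed, errno: %d, fw error: %d", errno, cmd.error);
}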
From patchwork Tue Oct 5 14:13:57 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12536891
Date: Tue, 5 Oct 2021 07:13:57 -0700
In-Reply-To: <20211005141357.2393627-1-pgonda@google.com>
Message-Id: <20211005141357.2393627-5-pgonda@google.com>
References: <20211005141357.2393627-1-pgonda@google.com>
Subject: [PATCH 4/4 V9] selftest: KVM: Add intra host migration tests
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, David Rientjes,
    Brijesh Singh, Joerg Roedel, linux-kernel@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

Add test cases for intra host migration for SEV and SEV-ES. Also add a
locking test to confirm no deadlock exists.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Reviewed-by: Marc Orr
Cc: Marc Orr
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Brijesh Singh
Cc: Joerg Roedel
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 tools/testing/selftests/kvm/Makefile      |   1 +
 .../selftests/kvm/x86_64/sev_vm_tests.c   | 203 ++++++++++++++++++
 2 files changed, 204 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c103873531e0..44fd3566fb51 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test

diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
new file mode 100644
index 000000000000..ec3bbc96e73a
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+#include "../lib/kvm_util_internal.h"
+
+#define SEV_POLICY_ES 0b100
+
+#define NR_MIGRATE_TEST_VCPUS 4
+#define NR_MIGRATE_TEST_VMS 3
+#define NR_LOCK_TESTING_THREADS 3
+#define NR_LOCK_TESTING_ITERATIONS 10000
+
+static void sev_ioctl(int vm_fd, int cmd_id, void *data)
+{
+	struct kvm_sev_cmd cmd = {
+		.id = cmd_id,
+		.data = (uint64_t)data,
+		.sev_fd = open_sev_dev_path_or_exit(),
+	};
+	int ret;
+
+	ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+	TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
+		    "%d failed: return code: %d, errno: %d, fw error: %d",
+		    cmd_id, ret, errno, cmd.error);
+}
+
+static struct kvm_vm *sev_vm_create(bool es)
+{
+	struct kvm_vm *vm;
+	struct kvm_sev_launch_start start = { 0 };
+	int i;
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	sev_ioctl(vm->fd, es ?
KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL); + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i) + vm_vcpu_add(vm, i); + if (es) + start.policy |= SEV_POLICY_ES; + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start); + if (es) + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL); + return vm; +} + +static struct kvm_vm *__vm_create(void) +{ + struct kvm_vm *vm; + int i; + + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i) + vm_vcpu_add(vm, i); + + return vm; +} + +static int __sev_migrate_from(int dst_fd, int src_fd) +{ + struct kvm_enable_cap cap = { + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM, + .args = { src_fd } + }; + + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap); +} + + +static void sev_migrate_from(int dst_fd, int src_fd) +{ + int ret; + + ret = __sev_migrate_from(dst_fd, src_fd); + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno); +} + +static void test_sev_migrate_from(bool es) +{ + struct kvm_vm *src_vm; + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS]; + int i; + + src_vm = sev_vm_create(es); + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i) + dst_vms[i] = __vm_create(); + + /* Initial migration from the src to the first dst. */ + sev_migrate_from(dst_vms[0]->fd, src_vm->fd); + + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++) + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd); + + /* Migrate the guest back to the original VM. */ + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd); + + kvm_vm_free(src_vm); + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i) + kvm_vm_free(dst_vms[i]); +} + +struct locking_thread_input { + struct kvm_vm *vm; + int source_fds[NR_LOCK_TESTING_THREADS]; +}; + +static void *locking_test_thread(void *arg) +{ + int i, j; + struct locking_thread_input *input = (struct locking_test_thread *)arg; + + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) { + j = i % NR_LOCK_TESTING_THREADS; + __sev_migrate_from(input->vm->fd, input->source_fds[j]); + } + + return NULL; +} + +static void test_sev_migrate_locking(void) +{ + struct locking_thread_input input[NR_LOCK_TESTING_THREADS]; + pthread_t pt[NR_LOCK_TESTING_THREADS]; + int i; + + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) { + input[i].vm = sev_vm_create(/* es= */ false); + input[0].source_fds[i] = input[i].vm->fd; + } + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i) + memcpy(input[i].source_fds, input[0].source_fds, + sizeof(input[i].source_fds)); + + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]); + + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) + pthread_join(pt[i], NULL); +} + +static void test_sev_migrate_parameters(void) +{ + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev, + *sev_es_vm_no_vmsa; + int ret; + + sev_vm = sev_vm_create(/* es= */ false); + sev_es_vm = sev_vm_create(/* es= */ true); + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + vm_no_sev = __vm_create(); + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL); + vm_vcpu_add(sev_es_vm_no_vmsa, 1); + + + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "Should not be able migrate to SEV-ES enabled VM. 
ret: %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd); + TEST_ASSERT( + ret == -1 && errno == EINVAL, + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n", + ret, errno); + + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd); + TEST_ASSERT(ret == -1 && errno == EINVAL, + "Migrations require SEV enabled. ret %d, errno: %d\n", ret, + errno); +} + +int main(int argc, char *argv[]) +{ + test_sev_migrate_from(/* es= */ false); + test_sev_migrate_from(/* es= */ true); + test_sev_migrate_locking(); + test_sev_migrate_parameters(); + return 0; +}
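One practical note on the locking exercised above: because of the migration_in_progress flag added in patch 1/4, a migration attempt can fail with EBUSY instead of blocking when another migration involving the same VM is in flight, which is what the locking test tolerates. A VMM driving a real intra-host upgrade could simply retry, as in this sketch (not part of the series; assumes <errno.h>, <sys/ioctl.h> and <linux/kvm.h>, with dst_vm_fd/src_vm_fd as placeholder VM fds):

/* Sketch only: retry the migration ioctl while another migration holds the lock. */
static int migrate_sev_context_retry(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
		.args = { src_vm_fd },
	};
	int ret;

	do {
		ret = ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
	} while (ret == -1 && errno == EBUSY);

	return ret;
}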