From patchwork Mon Aug 30 20:57:15 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12465895
Date: Mon, 30 Aug 2021 13:57:15 -0700
In-Reply-To: <20210830205717.3530483-1-pgonda@google.com>
Message-Id: <20210830205717.3530483-2-pgonda@google.com>
Mime-Version: 1.0
References: <20210830205717.3530483-1-pgonda@google.com>
X-Mailer: git-send-email 2.33.0.259.gc128427fd7-goog
Subject: [PATCH 1/3 V6] KVM, SEV: Add support for SEV intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Sean Christopherson, Marc Orr, Paolo Bonzini,
 David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
 linux-kernel@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

For SEV to work with intra host migration, the contents of the SEV info
struct, such as the ASID (used to index the encryption key in the AMD SP)
and the list of memory regions, need to be transferred to the target VM.
This change adds a command for a target VMM to get a source SEV VM's SEV
info. The target is expected to be initialized (sev_guest_init) but not
yet launched (sev_launch_start) when performing the receive. Once the
target has received, it is in a launched state and does not need to
perform the typical SEV launch commands.

Signed-off-by: Peter Gonda
Suggested-by: Sean Christopherson
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Marc Orr
---
 Documentation/virt/kvm/api.rst  | 15 +++++
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/sev.c          | 99 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/svm/svm.h          |  2 +
 arch/x86/kvm/x86.c              |  5 ++
 include/uapi/linux/kvm.h        |  1 +
 7 files changed, 124 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 86d7ad3a126c..9dc56778b421 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6701,6 +6701,21 @@ MAP_SHARED mmap will result in an -EINVAL return.
 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.
 
+7.29 KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM
+----------------------------------------
+
+Architectures: x86 SEV enabled
+Type: vm
+Parameters: args[0] is the fd of the source vm
+Returns: 0 on success
+
+This capability enables userspace to migrate the encryption context from the vm
+indicated by the fd to the vm this is called on.
+
+This is intended to support intra-host migration of VMs between userspace
+VMMs, allowing the VMM process to be upgraded without interrupting the
+guest.
+
 8. Other capabilities.
 ======================
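
To make the ABI described in the api.rst hunk above concrete, here is a
minimal userspace sketch. It is not part of this patch: the two fds are
assumed to be already-created KVM VM fds, with the destination VM
SEV-initialized but not yet launched.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hedged sketch: ask the destination VM to take over the SEV encryption
 * context of the VM referred to by source_vm_fd. Returns the raw ioctl
 * result: 0 on success, -1 with errno set (e.g. EBUSY, EINVAL) on failure.
 */
static int sev_migrate_enc_context(int dest_vm_fd, int source_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM,
		.args = { source_vm_fd },
	};

	return ioctl(dest_vm_fd, KVM_ENABLE_CAP, &cap);
}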
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 20daaf67a5bf..fd3a118c9e40 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1448,6 +1448,7 @@ struct kvm_x86_ops {
 	int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
+	int (*vm_migrate_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
 
 	int (*get_msr_feature)(struct kvm_msr_entry *entry);
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 46eb1ba62d3d..063cf26528bc 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1501,6 +1501,105 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
 }
 
+static int svm_sev_lock_for_migration(struct kvm *kvm)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+	/*
+	 * Bail if this VM is already involved in a migration to avoid deadlock
+	 * between two VMs trying to migrate to/from each other.
+	 */
+	if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
+		return -EBUSY;
+
+	mutex_lock(&kvm->lock);
+
+	return 0;
+}
+
+static void svm_unlock_after_migration(struct kvm *kvm)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+	mutex_unlock(&kvm->lock);
+	atomic_set_release(&sev->migration_in_progress, 0);
+}
+
+static void migrate_info_from(struct kvm_sev_info *dst,
+			      struct kvm_sev_info *src)
+{
+	sev_asid_free(dst);
+
+	dst->asid = src->asid;
+	dst->misc_cg = src->misc_cg;
+	dst->handle = src->handle;
+	dst->pages_locked = src->pages_locked;
+
+	src->asid = 0;
+	src->active = false;
+	src->handle = 0;
+	src->pages_locked = 0;
+	src->misc_cg = NULL;
+
+	INIT_LIST_HEAD(&dst->regions_list);
+	list_replace_init(&src->regions_list, &dst->regions_list);
+}
+
+int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
+{
+	struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
+	struct file *source_kvm_file;
+	struct kvm *source_kvm;
+	int ret;
+
+	ret = svm_sev_lock_for_migration(kvm);
+	if (ret)
+		return ret;
+
+	if (!sev_guest(kvm) || sev_es_guest(kvm)) {
+		ret = -EINVAL;
+		pr_warn_ratelimited("VM must be SEV enabled to migrate to.\n");
+		goto out_unlock;
+	}
+
+	if (!list_empty(&dst_sev->regions_list)) {
+		ret = -EINVAL;
+		pr_warn_ratelimited(
+			"VM must not have encrypted regions to migrate to.\n");
+		goto out_unlock;
+	}
+
+	source_kvm_file = fget(source_fd);
+	if (!file_is_kvm(source_kvm_file)) {
+		ret = -EBADF;
+		goto out_fput;
+	}
+
+	source_kvm = source_kvm_file->private_data;
+	ret = svm_sev_lock_for_migration(source_kvm);
+	if (ret)
+		goto out_fput;
+
+	if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) {
+		ret = -EINVAL;
+		pr_warn_ratelimited(
+			"Source VM must be SEV enabled to migrate from.\n");
+		goto out_source;
+	}
+
+	migrate_info_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info);
+	ret = 0;
+
+out_source:
+	svm_unlock_after_migration(source_kvm);
+out_fput:
+	if (source_kvm_file)
+		fput(source_kvm_file);
+out_unlock:
+	svm_unlock_after_migration(kvm);
+	return ret;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
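
A note on the locking above: kvm->lock alone would not be safe, because
two tasks concurrently migrating two VMs to/from each other could each
take one kvm->lock and then block forever on the other (an ABBA
deadlock). The migration_in_progress flag is taken before the mutex, so
the second task fails fast with -EBUSY instead of blocking. A standalone
sketch of the same pattern, using C11 atomics and pthreads in place of
the kernel primitives (illustrative only, not part of this patch):

#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>

struct vm {
	atomic_int migration_in_progress;
	pthread_mutex_t lock;
};

/* Mirrors svm_sev_lock_for_migration(): fail fast instead of blocking. */
static int lock_for_migration(struct vm *vm)
{
	int expected = 0;

	if (!atomic_compare_exchange_strong(&vm->migration_in_progress,
					    &expected, 1))
		return -EBUSY;

	pthread_mutex_lock(&vm->lock);
	return 0;
}

/* Mirrors svm_unlock_after_migration(): release in the reverse order. */
static void unlock_after_migration(struct vm *vm)
{
	pthread_mutex_unlock(&vm->lock);
	atomic_store(&vm->migration_in_progress, 0);
}

Because the flag is claimed before the mutex, no task ever sleeps on one
VM's lock while already holding another VM's lock; a pair of crossing
migrations resolves with one side returning -EBUSY.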
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7b58e445a967..8b5bcab48937 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4627,6 +4627,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.mem_enc_unreg_region = svm_unregister_enc_region,
 
 	.vm_copy_enc_context_from = svm_vm_copy_asid_from,
+	.vm_migrate_enc_context_from = svm_vm_migrate_from,
 
 	.can_emulate_instruction = svm_can_emulate_instruction,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 524d943f3efc..67bfb43301e1 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -80,6 +80,7 @@ struct kvm_sev_info {
 	u64 ap_jump_table;	/* SEV-ES AP Jump Table address */
 	struct kvm *enc_context_owner; /* Owner of copied encryption context */
 	struct misc_cg *misc_cg; /* For misc cgroup accounting */
+	atomic_t migration_in_progress;
 };
 
 struct kvm_svm {
@@ -552,6 +553,7 @@ int svm_register_enc_region(struct kvm *kvm,
 int svm_unregister_enc_region(struct kvm *kvm,
 			      struct kvm_enc_region *range);
 int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd);
+int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd);
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_set_cpu_caps(void);
 void __init sev_hardware_setup(void);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fdc0c18339fb..ea3100134e35 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5655,6 +5655,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (kvm_x86_ops.vm_copy_enc_context_from)
 			r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
 		return r;
+	case KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM:
+		r = -EINVAL;
+		if (kvm_x86_ops.vm_migrate_enc_context_from)
+			r = kvm_x86_ops.vm_migrate_enc_context_from(kvm, cap->args[0]);
+		return r;
 	case KVM_CAP_EXIT_HYPERCALL:
 		if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
 			r = -EINVAL;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index a067410ebea5..49660204cdb9 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1112,6 +1112,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_BINARY_STATS_FD 203
 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204
 #define KVM_CAP_ARM_MTE 205
+#define KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM 206
 
 #ifdef KVM_CAP_IRQ_ROUTING

From patchwork Mon Aug 30 20:57:16 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12465899
Date: Mon, 30 Aug 2021 13:57:16 -0700
In-Reply-To: <20210830205717.3530483-1-pgonda@google.com>
Message-Id: <20210830205717.3530483-3-pgonda@google.com>
Mime-Version: 1.0
References: <20210830205717.3530483-1-pgonda@google.com>
X-Mailer: git-send-email 2.33.0.259.gc128427fd7-goog
Subject: [PATCH 2/3 V6] KVM, SEV: Add support for SEV-ES intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, Marc Orr, Paolo Bonzini, Sean Christopherson,
 David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
 linux-kernel@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

For SEV-ES to work with intra host migration, the VMSAs, GHCB metadata,
and other SEV-ES info need to be preserved along with the guest's memory.

Signed-off-by: Peter Gonda
Cc: Marc Orr
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: David Rientjes
Cc: Dr. David Alan Gilbert
Cc: Brijesh Singh
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Peter Anvin" Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Marc Orr --- arch/x86/kvm/svm/sev.c | 62 ++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 60 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 063cf26528bc..3324eed1a39e 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1545,6 +1545,59 @@ static void migrate_info_from(struct kvm_sev_info *dst, list_replace_init(&src->regions_list, &dst->regions_list); } +static int migrate_vmsa_from(struct kvm *dst, struct kvm *src) +{ + int i, num_vcpus; + struct kvm_vcpu *dst_vcpu, *src_vcpu; + struct vcpu_svm *dst_svm, *src_svm; + + num_vcpus = atomic_read(&dst->online_vcpus); + if (num_vcpus != atomic_read(&src->online_vcpus)) { + pr_warn_ratelimited( + "Source and target VMs must have same number of vCPUs.\n"); + return -EINVAL; + } + + for (i = 0; i < num_vcpus; ++i) { + src_vcpu = src->vcpus[i]; + if (!src_vcpu->arch.guest_state_protected) { + pr_warn_ratelimited( + "Source ES VM vCPUs must have protected state.\n"); + return -EINVAL; + } + } + + for (i = 0; i < num_vcpus; ++i) { + src_vcpu = src->vcpus[i]; + src_svm = to_svm(src_vcpu); + dst_vcpu = dst->vcpus[i]; + dst_svm = to_svm(dst_vcpu); + + /* + * Copy VMSA and GHCB fields from the source to the destination. + * Clear them on the source to prevent the VM running and + * changing the state of the VMSA/GHCB unexpectedly. + */ + dst_vcpu->vcpu_id = src_vcpu->vcpu_id; + dst_svm->vmsa = src_svm->vmsa; + src_svm->vmsa = NULL; + dst_svm->ghcb = src_svm->ghcb; + src_svm->ghcb = NULL; + dst_svm->vmcb->control.ghcb_gpa = + src_svm->vmcb->control.ghcb_gpa; + src_svm->vmcb->control.ghcb_gpa = 0; + dst_svm->ghcb_sa = src_svm->ghcb_sa; + src_svm->ghcb_sa = NULL; + dst_svm->ghcb_sa_len = src_svm->ghcb_sa_len; + src_svm->ghcb_sa_len = 0; + dst_svm->ghcb_sa_sync = src_svm->ghcb_sa_sync; + src_svm->ghcb_sa_sync = false; + dst_svm->ghcb_sa_free = src_svm->ghcb_sa_free; + src_svm->ghcb_sa_free = false; + } + return 0; +} + int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) { struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info; @@ -1556,7 +1609,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) if (ret) return ret; - if (!sev_guest(kvm) || sev_es_guest(kvm)) { + if (!sev_guest(kvm)) { ret = -EINVAL; pr_warn_ratelimited("VM must be SEV enabled to migrate to.\n"); goto out_unlock; @@ -1580,13 +1633,18 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd) if (ret) goto out_fput; - if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) { + if (!sev_guest(source_kvm)) { ret = -EINVAL; pr_warn_ratelimited( "Source VM must be SEV enabled to migrate from.\n"); goto out_source; } + if (sev_es_guest(kvm)) { + ret = migrate_vmsa_from(kvm, source_kvm); + if (ret) + goto out_source; + } migrate_info_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info); ret = 0; From patchwork Mon Aug 30 20:57:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Gonda X-Patchwork-Id: 12465897 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.5 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham 
From patchwork Mon Aug 30 20:57:17 2021
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 12465897
Date: Mon, 30 Aug 2021 13:57:17 -0700
In-Reply-To: <20210830205717.3530483-1-pgonda@google.com>
Message-Id: <20210830205717.3530483-4-pgonda@google.com>
Mime-Version: 1.0
References: <20210830205717.3530483-1-pgonda@google.com>
X-Mailer: git-send-email 2.33.0.259.gc128427fd7-goog
Subject: [PATCH 3/3 V6] selftest: KVM: Add intra host migration
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Add test cases for intra host migration for SEV and SEV-ES, plus a
locking test to confirm no deadlock exists.

Reported-by: kernel test robot
---
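
One behavior the locking test below leans on, and which a real VMM
should handle: when two migrations race, the loser of the
migration_in_progress cmpxchg gets -EBUSY rather than blocking. A hedged
sketch of how a caller might cope; sev_migrate_enc_context() is the
illustrative wrapper from the first patch's sketch, not part of this
series:

#include <errno.h>

/*
 * Hedged sketch: retry while the source or destination VM is tied up in
 * another migration attempt. A production VMM would likely bound the
 * retries or surface the error instead of spinning indefinitely.
 */
static int sev_migrate_enc_context_retry(int dest_vm_fd, int source_vm_fd)
{
	int ret;

	do {
		ret = sev_migrate_enc_context(dest_vm_fd, source_vm_fd);
	} while (ret < 0 && errno == EBUSY);

	return ret;
}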
 tools/testing/selftests/kvm/Makefile            |   1 +
 .../selftests/kvm/x86_64/sev_vm_tests.c         | 152 ++++++++++++++++++
 2 files changed, 153 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 5832f510a16c..de6e64d5c9c4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -71,6 +71,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
new file mode 100644
index 000000000000..50a770316628
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
@@ -0,0 +1,150 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+#include "../lib/kvm_util_internal.h"
+
+#define SEV_DEV_PATH "/dev/sev"
+
+/*
+ * Open SEV_DEV_PATH if available, otherwise exit the entire program.
+ *
+ * Input Args:
+ *   flags - The flags to pass when opening SEV_DEV_PATH.
+ *
+ * Return:
+ *   The opened file descriptor of /dev/sev.
+ */
+static int open_sev_dev_path_or_exit(int flags)
+{
+	static int fd;
+
+	if (fd != 0)
+		return fd;
+
+	fd = open(SEV_DEV_PATH, flags);
+	if (fd < 0) {
+		print_skip("%s not available, is SEV not enabled? (errno: %d)",
+			   SEV_DEV_PATH, errno);
+		exit(KSFT_SKIP);
+	}
+
+	return fd;
+}
+
+static void sev_ioctl(int fd, int cmd_id, void *data)
+{
+	struct kvm_sev_cmd cmd = { 0 };
+	int ret;
+
+	TEST_ASSERT(cmd_id < KVM_SEV_NR_MAX, "Unknown SEV CMD : %d\n", cmd_id);
+
+	cmd.id = cmd_id;
+	cmd.sev_fd = open_sev_dev_path_or_exit(0);
+	cmd.data = (uint64_t)data;
+	ret = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+	TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
+		    "%d failed: return code: %d, errno: %d, fw error: %d",
+		    cmd_id, ret, errno, cmd.error);
+}
+
+static struct kvm_vm *sev_vm_create(bool es)
+{
+	struct kvm_vm *vm;
+	struct kvm_sev_launch_start start = { 0 };
+	int i;
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	sev_ioctl(vm->fd, es ?
+		  KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
+	for (i = 0; i < 3; ++i)
+		vm_vcpu_add(vm, i);
+	start.policy |= (es) << 2;
+	sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
+	if (es)
+		sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
+	return vm;
+}
+
+static void test_sev_migrate_from(bool es)
+{
+	struct kvm_vm *vms[3];
+	struct kvm_enable_cap cap = { 0 };
+	int i;
+
+	for (i = 0; i < sizeof(vms) / sizeof(struct kvm_vm *); ++i)
+		vms[i] = sev_vm_create(es);
+
+	cap.cap = KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM;
+	for (i = 0; i < sizeof(vms) / sizeof(struct kvm_vm *) - 1; ++i) {
+		cap.args[0] = vms[i]->fd;
+		vm_enable_cap(vms[i + 1], &cap);
+	}
+}
+
+#define LOCK_TESTING_THREADS 3
+
+struct locking_thread_input {
+	struct kvm_vm *vm;
+	int source_fds[LOCK_TESTING_THREADS];
+};
+
+static void *locking_test_thread(void *arg)
+{
+	struct kvm_enable_cap cap = { 0 };
+	int i, j;
+	struct locking_thread_input *input = (struct locking_thread_input *)arg;
+
+	cap.cap = KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM;
+
+	for (i = 0; i < 1000; ++i) {
+		j = i % LOCK_TESTING_THREADS;
+		cap.args[0] = input->source_fds[j];
+		/*
+		 * Call the ioctl directly without checking the return code. We
+		 * are simply trying to confirm there is no deadlock from
+		 * userspace, not checking the correctness of migration here.
+		 */
+		ioctl(input->vm->fd, KVM_ENABLE_CAP, &cap);
+	}
+
+	return NULL;
+}
+
+static void test_sev_migrate_locking(void)
+{
+	struct locking_thread_input input[LOCK_TESTING_THREADS];
+	pthread_t pt[LOCK_TESTING_THREADS];
+	int i;
+
+	for (i = 0; i < LOCK_TESTING_THREADS; ++i) {
+		input[i].vm = sev_vm_create(/* es= */ false);
+		input[0].source_fds[i] = input[i].vm->fd;
+	}
+	memcpy(input[1].source_fds, input[0].source_fds,
+	       sizeof(input[1].source_fds));
+	memcpy(input[2].source_fds, input[0].source_fds,
+	       sizeof(input[2].source_fds));
+
+	for (i = 0; i < LOCK_TESTING_THREADS; ++i)
+		pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
+
+	for (i = 0; i < LOCK_TESTING_THREADS; ++i)
+		pthread_join(pt[i], NULL);
+}
+
+int main(int argc, char *argv[])
+{
+	test_sev_migrate_from(/* es= */ false);
+	test_sev_migrate_from(/* es= */ true);
+	test_sev_migrate_locking();
+	return 0;
+}
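
A natural follow-on test, sketched here as a suggestion rather than as
part of this patch: the vCPU-count check added in patch 2 should surface
to userspace as EINVAL. sev_es_vm_create_with_vcpus() is a hypothetical
helper; sev_vm_create() above always creates three vCPUs.

/*
 * Hedged sketch of an additional negative test: SEV-ES migration
 * between VMs with different vCPU counts should fail with EINVAL.
 */
static void test_sev_es_migrate_vcpu_mismatch(void)
{
	struct kvm_enable_cap cap = { 0 };
	struct kvm_vm *src, *dst;
	int ret;

	src = sev_vm_create(/* es= */ true);
	dst = sev_es_vm_create_with_vcpus(2); /* hypothetical helper */

	cap.cap = KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM;
	cap.args[0] = src->fd;
	ret = ioctl(dst->fd, KVM_ENABLE_CAP, &cap);
	TEST_ASSERT(ret == -1 && errno == EINVAL,
		    "Mismatched vCPU counts should fail with EINVAL");
}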