From patchwork Wed Jan 13 14:37:15 2021
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 12016919
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson
Subject: [PATCH 1/7] selftests: kvm: Move kvm_get_supported_hv_cpuid() to common code
Date: Wed, 13 Jan 2021 15:37:15 +0100
Message-Id: <20210113143721.328594-2-vkuznets@redhat.com>
In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com>
References: <20210113143721.328594-1-vkuznets@redhat.com>

kvm_get_supported_hv_cpuid() may come in handy in all Hyper-V related
tests. Split it out of the hyperv_cpuid test and create system-wide and
per-vCPU versions.
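For illustration, a minimal sketch of how another Hyper-V selftest could
consume the two new helpers once they live in common code; VCPU_ID, the
empty guest_code() and the printf output are placeholders for this example
only, not part of the patch:

#include <stdio.h>
#include <stdlib.h>

#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"

#define VCPU_ID 0

static void guest_code(void)
{
}

int main(void)
{
	struct kvm_cpuid2 *sys_cpuid, *vcpu_cpuid;
	struct kvm_vm *vm;

	/* System-wide variant: queries /dev/kvm directly, no VM required. */
	sys_cpuid = kvm_get_supported_hv_cpuid();
	printf("KVM supports %u Hyper-V CPUID entries\n", sys_cpuid->nent);

	/* Per-vCPU variant: issued against a specific vCPU fd. */
	vm = vm_create_default(VCPU_ID, 0, guest_code);
	vcpu_cpuid = vcpu_get_supported_hv_cpuid(vm, VCPU_ID);
	printf("vCPU %u reports %u Hyper-V CPUID entries\n", VCPU_ID,
	       vcpu_cpuid->nent);

	free(vcpu_cpuid);
	kvm_vm_free(vm);
	return 0;
}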
Signed-off-by: Vitaly Kuznetsov --- .../selftests/kvm/include/x86_64/processor.h | 3 ++ .../selftests/kvm/lib/x86_64/processor.c | 33 +++++++++++++++++++ .../selftests/kvm/x86_64/hyperv_cpuid.c | 31 ++--------------- 3 files changed, 39 insertions(+), 28 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 90cd5984751b..53c634a462f2 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -391,6 +391,9 @@ bool set_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 *ent); uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2, uint64_t a3); +struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(void); +struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); + /* * Basic CPU control in CR0 */ diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 95e1a757c629..efb540c90732 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -1224,3 +1224,36 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2, : "b"(a0), "c"(a1), "d"(a2), "S"(a3)); return r; } + +struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(void) +{ + static struct kvm_cpuid2 *cpuid; + int ret; + int kvm_fd; + + if (cpuid) + return cpuid; + + cpuid = allocate_kvm_cpuid2(); + kvm_fd = open(KVM_DEV_PATH, O_RDONLY); + if (kvm_fd < 0) + exit(KSFT_SKIP); + + ret = ioctl(kvm_fd, KVM_GET_SUPPORTED_HV_CPUID, cpuid); + TEST_ASSERT(ret == 0, "KVM_GET_SUPPORTED_HV_CPUID failed %d %d\n", + ret, errno); + + close(kvm_fd); + return cpuid; +} + +struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid) +{ + static struct kvm_cpuid2 *cpuid; + + cpuid = allocate_kvm_cpuid2(); + + vcpu_ioctl(vm, vcpuid, KVM_GET_SUPPORTED_HV_CPUID, cpuid); + + return cpuid; +} diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c b/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c index 88a595b7fbdd..7e2d2d17d2ed 100644 --- a/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c +++ b/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c @@ -125,30 +125,6 @@ void test_hv_cpuid_e2big(struct kvm_vm *vm, bool system) " it should have: %d %d", system ? 
"KVM" : "vCPU", ret, errno); } - -struct kvm_cpuid2 *kvm_get_supported_hv_cpuid(struct kvm_vm *vm, bool system) -{ - int nent = 20; /* should be enough */ - static struct kvm_cpuid2 *cpuid; - - cpuid = malloc(sizeof(*cpuid) + nent * sizeof(struct kvm_cpuid_entry2)); - - if (!cpuid) { - perror("malloc"); - abort(); - } - - cpuid->nent = nent; - - if (!system) - vcpu_ioctl(vm, VCPU_ID, KVM_GET_SUPPORTED_HV_CPUID, cpuid); - else - kvm_ioctl(vm, KVM_GET_SUPPORTED_HV_CPUID, cpuid); - - return cpuid; -} - - int main(int argc, char *argv[]) { struct kvm_vm *vm; @@ -167,7 +143,7 @@ int main(int argc, char *argv[]) /* Test vCPU ioctl version */ test_hv_cpuid_e2big(vm, false); - hv_cpuid_entries = kvm_get_supported_hv_cpuid(vm, false); + hv_cpuid_entries = vcpu_get_supported_hv_cpuid(vm, VCPU_ID); test_hv_cpuid(hv_cpuid_entries, false); free(hv_cpuid_entries); @@ -177,7 +153,7 @@ int main(int argc, char *argv[]) goto do_sys; } vcpu_enable_evmcs(vm, VCPU_ID); - hv_cpuid_entries = kvm_get_supported_hv_cpuid(vm, false); + hv_cpuid_entries = vcpu_get_supported_hv_cpuid(vm, VCPU_ID); test_hv_cpuid(hv_cpuid_entries, true); free(hv_cpuid_entries); @@ -190,9 +166,8 @@ int main(int argc, char *argv[]) test_hv_cpuid_e2big(vm, true); - hv_cpuid_entries = kvm_get_supported_hv_cpuid(vm, true); + hv_cpuid_entries = kvm_get_supported_hv_cpuid(); test_hv_cpuid(hv_cpuid_entries, nested_vmx_supported()); - free(hv_cpuid_entries); out: kvm_vm_free(vm); From patchwork Wed Jan 13 14:37:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vitaly Kuznetsov X-Patchwork-Id: 12016921 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 022F3C433E0 for ; Wed, 13 Jan 2021 14:39:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AE1402343B for ; Wed, 13 Jan 2021 14:39:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726594AbhAMOjA (ORCPT ); Wed, 13 Jan 2021 09:39:00 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:53454 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726505AbhAMOjA (ORCPT ); Wed, 13 Jan 2021 09:39:00 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1610548654; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=nyo6Ug6CBUUsr9Np8NmoohbF5XlqoIrdKIOLBF57t1Y=; b=Q1uxZ9XW8jWdAJgOyE4iA265fGyyjOZEA/OoQzk1EXWnz6BzaVVLzz5jm3eeS1Hjr8lfz8 PEJ1Q7VGuskdvMpgCd0yFRiFovdwswcih50c2T/qwNiZ935TWMJNaEHKdP77h2x/S+/Lgf SzQ23tCWE+YZpZqPHUoFogsZuePyllY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-109-OgXFREiRNWmKagnxf2Hafw-1; Wed, 13 Jan 2021 09:37:30 -0500 X-MC-Unique: OgXFREiRNWmKagnxf2Hafw-1 Received: from smtp.corp.redhat.com 
(int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0D6F2802B4C; Wed, 13 Jan 2021 14:37:29 +0000 (UTC) Received: from vitty.brq.redhat.com (unknown [10.40.193.20]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3ACBC5C3E0; Wed, 13 Jan 2021 14:37:27 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson Subject: [PATCH 2/7] selftests: kvm: Properly set Hyper-V CPUIDs in evmcs_test Date: Wed, 13 Jan 2021 15:37:16 +0100 Message-Id: <20210113143721.328594-3-vkuznets@redhat.com> In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com> References: <20210113143721.328594-1-vkuznets@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Generally, when Hyper-V emulation is enabled, VMM is supposed to set Hyper-V CPUID identifications so the guest knows that Hyper-V features are available. evmcs_test doesn't currently do that but so far Hyper-V emulation in KVM was enabled unconditionally. As we are about to change that, proper Hyper-V CPUID identification should be set in selftests as well. Signed-off-by: Vitaly Kuznetsov --- .../testing/selftests/kvm/x86_64/evmcs_test.c | 39 ++++++++++++++++++- 1 file changed, 38 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/x86_64/evmcs_test.c b/tools/testing/selftests/kvm/x86_64/evmcs_test.c index 37b8a78f6b74..39a3cb2bd103 100644 --- a/tools/testing/selftests/kvm/x86_64/evmcs_test.c +++ b/tools/testing/selftests/kvm/x86_64/evmcs_test.c @@ -78,6 +78,42 @@ void guest_code(struct vmx_pages *vmx_pages) GUEST_ASSERT(vmlaunch()); } +struct kvm_cpuid2 *guest_get_cpuid(void) +{ + static struct kvm_cpuid2 *cpuid_full; + struct kvm_cpuid2 *cpuid_sys, *cpuid_hv; + int i, nent = 0; + + if (cpuid_full) + return cpuid_full; + + cpuid_sys = kvm_get_supported_cpuid(); + cpuid_hv = kvm_get_supported_hv_cpuid(); + + cpuid_full = malloc(sizeof(*cpuid_full) + + (cpuid_sys->nent + cpuid_hv->nent) * + sizeof(struct kvm_cpuid_entry2)); + if (!cpuid_full) { + perror("malloc"); + abort(); + } + + /* Need to skip KVM CPUID leaves 0x400000xx */ + for (i = 0; i < cpuid_sys->nent; i++) { + if (cpuid_sys->entries[i].function >= 0x40000000 && + cpuid_sys->entries[i].function < 0x40000100) + continue; + cpuid_full->entries[nent] = cpuid_sys->entries[i]; + nent++; + } + + memcpy(&cpuid_full->entries[nent], cpuid_hv->entries, + cpuid_hv->nent * sizeof(struct kvm_cpuid_entry2)); + cpuid_full->nent = nent + cpuid_hv->nent; + + return cpuid_full; +} + int main(int argc, char *argv[]) { vm_vaddr_t vmx_pages_gva = 0; @@ -99,6 +135,7 @@ int main(int argc, char *argv[]) exit(KSFT_SKIP); } + vcpu_set_cpuid(vm, VCPU_ID, guest_get_cpuid()); vcpu_enable_evmcs(vm, VCPU_ID); run = vcpu_state(vm, VCPU_ID); @@ -142,7 +179,7 @@ int main(int argc, char *argv[]) /* Restore state in a new VM. 
 */
 	kvm_vm_restart(vm, O_RDWR);
 	vm_vcpu_add(vm, VCPU_ID);
-	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+	vcpu_set_cpuid(vm, VCPU_ID, guest_get_cpuid());
 	vcpu_enable_evmcs(vm, VCPU_ID);
 	vcpu_load_state(vm, VCPU_ID, state);
 	run = vcpu_state(vm, VCPU_ID);

From patchwork Wed Jan 13 14:37:17 2021
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 12016923
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson
Subject: [PATCH 3/7] KVM: x86: hyper-v: Always use vcpu_to_hv_vcpu() accessor to get to 'struct kvm_vcpu_hv'
Date: Wed, 13 Jan 2021 15:37:17 +0100
Message-Id: <20210113143721.328594-4-vkuznets@redhat.com>
In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com>
References: <20210113143721.328594-1-vkuznets@redhat.com>

As preparation for allocating the Hyper-V context dynamically, make it
clear who the user of said context is. No functional change intended.
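The change is purely mechanical; as an illustration of the before/after
shape only (not an additional change), the pattern being standardized is:

/* Before: call sites reach into the arch struct directly. */
struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv;

/* After: call sites go through the accessor, so a later patch can turn
 * 'arch.hyperv' into a dynamically allocated pointer without touching
 * every user again.
 */
struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu);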
Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/hyperv.c | 14 ++++++++------ arch/x86/kvm/hyperv.h | 4 +++- arch/x86/kvm/lapic.h | 6 +++++- arch/x86/kvm/vmx/vmx.c | 9 ++++++--- arch/x86/kvm/x86.c | 4 +++- 5 files changed, 25 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 922c69dcca4d..82f51346118f 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -190,7 +190,7 @@ static void kvm_hv_notify_acked_sint(struct kvm_vcpu *vcpu, u32 sint) static void synic_exit(struct kvm_vcpu_hv_synic *synic, u32 msr) { struct kvm_vcpu *vcpu = synic_to_vcpu(synic); - struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv; + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); hv_vcpu->exit.type = KVM_EXIT_HYPERV_SYNIC; hv_vcpu->exit.u.synic.msr = msr; @@ -294,7 +294,7 @@ static int kvm_hv_syndbg_complete_userspace(struct kvm_vcpu *vcpu) static void syndbg_exit(struct kvm_vcpu *vcpu, u32 msr) { struct kvm_hv_syndbg *syndbg = vcpu_to_hv_syndbg(vcpu); - struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv; + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); hv_vcpu->exit.type = KVM_EXIT_HYPERV_SYNDBG; hv_vcpu->exit.u.syndbg.msr = msr; @@ -840,7 +840,9 @@ void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu) bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu) { - if (!(vcpu->arch.hyperv.hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE)) + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + + if (!(hv_vcpu->hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE)) return false; return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED; } @@ -1216,7 +1218,7 @@ static u64 current_task_runtime_100ns(void) static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host) { - struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv; + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); switch (msr) { case HV_X64_MSR_VP_INDEX: { @@ -1379,7 +1381,7 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host) { u64 data = 0; - struct kvm_vcpu_hv *hv_vcpu = &vcpu->arch.hyperv; + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); switch (msr) { case HV_X64_MSR_VP_INDEX: @@ -1494,7 +1496,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa, u16 rep_cnt, bool ex) { struct kvm *kvm = current_vcpu->kvm; - struct kvm_vcpu_hv *hv_vcpu = ¤t_vcpu->arch.hyperv; + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(current_vcpu); struct hv_tlb_flush_ex flush_ex; struct hv_tlb_flush flush; u64 vp_bitmap[KVM_HV_MAX_SPARSE_VCPU_SET_BITS]; diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index 6d7def2b0aad..6300038e7a52 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -114,7 +114,9 @@ static inline struct kvm_vcpu *stimer_to_vcpu(struct kvm_vcpu_hv_stimer *stimer) static inline bool kvm_hv_has_stimer_pending(struct kvm_vcpu *vcpu) { - return !bitmap_empty(vcpu->arch.hyperv.stimer_pending_bitmap, + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + + return !bitmap_empty(hv_vcpu->stimer_pending_bitmap, HV_SYNIC_STIMER_COUNT); } diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h index 4fb86e3a9dd3..dec7356f2fcd 100644 --- a/arch/x86/kvm/lapic.h +++ b/arch/x86/kvm/lapic.h @@ -6,6 +6,8 @@ #include +#include "hyperv.h" + #define KVM_APIC_INIT 0 #define KVM_APIC_SIPI 1 #define KVM_APIC_LVT_NUM 6 @@ -127,7 +129,9 @@ int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data); static inline bool kvm_hv_vapic_assist_page_enabled(struct kvm_vcpu *vcpu) { - return vcpu->arch.hyperv.hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE; + 
struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + + return hv_vcpu->hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE; } int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 75c9c6a0a3a4..7fe09b69a465 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -50,6 +50,7 @@ #include "capabilities.h" #include "cpuid.h" #include "evmcs.h" +#include "hyperv.h" #include "irq.h" #include "kvm_cache_regs.h" #include "lapic.h" @@ -6732,12 +6733,14 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu) x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0); /* All fields are clean at this point */ - if (static_branch_unlikely(&enable_evmcs)) + if (static_branch_unlikely(&enable_evmcs)) { + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + current_evmcs->hv_clean_fields |= HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL; - if (static_branch_unlikely(&enable_evmcs)) - current_evmcs->hv_vp_id = vcpu->arch.hyperv.vp_index; + current_evmcs->hv_vp_id = hv_vcpu->vp_index; + } /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */ if (vmx->host_debugctlmsr) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 3f7c1fc7a3ce..30fbbf53ff1e 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8879,8 +8879,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) goto out; } if (kvm_check_request(KVM_REQ_HV_EXIT, vcpu)) { + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + vcpu->run->exit_reason = KVM_EXIT_HYPERV; - vcpu->run->hyperv = vcpu->arch.hyperv.exit; + vcpu->run->hyperv = hv_vcpu->exit; r = 0; goto out; } From patchwork Wed Jan 13 14:37:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vitaly Kuznetsov X-Patchwork-Id: 12016925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31840C433E0 for ; Wed, 13 Jan 2021 14:39:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E06682343B for ; Wed, 13 Jan 2021 14:39:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726902AbhAMOjD (ORCPT ); Wed, 13 Jan 2021 09:39:03 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:29344 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726492AbhAMOjC (ORCPT ); Wed, 13 Jan 2021 09:39:02 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1610548655; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=zFegbZ8VIySHLMToNbdDNoKbdi76V3o4oYA5DypWYlw=; b=Da35mXV/PUUXWF6+EUdfwXqePBgP2JREqPrk4g73hWJ4+lAZFFjYlo+KfwG2Qotkv+Ca5+ cR6jzLRWsPDIG0Yq/5VGqFDq9ApNuHzw93Ta+Ntwm6LSfSkLg8OnIuUxfrVAOKYSq9SUwf EJEBCA6j7EIDnM5pf4KD1AV5HgUzS7U= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) 
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson
Subject: [PATCH 4/7] KVM: x86: hyper-v: Prepare to meet unallocated Hyper-V context
Date: Wed, 13 Jan 2021 15:37:18 +0100
Message-Id: <20210113143721.328594-5-vkuznets@redhat.com>
In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com>
References: <20210113143721.328594-1-vkuznets@redhat.com>

Currently, the Hyper-V context is part of 'struct kvm_vcpu_arch' and is
always available. As preparation for allocating it dynamically, check
that it is not NULL at call sites which can normally proceed without it,
i.e. where the behavior is identical to the situation when Hyper-V
emulation is not being used by the guest.

When the Hyper-V context for a particular vCPU is not allocated, we may
still need to get 'vp_index' from there: e.g. in a hypothetical situation
where Hyper-V emulation was enabled on one vCPU but not on another, a
Hyper-V style send-IPI hypercall may still be used. Luckily, vp_index is
always initialized to kvm_vcpu_get_idx() and can only change while the
Hyper-V context is present. Introduce a vcpu_to_hv_vpindex() helper for
simplification.

No functional change intended.
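For readers skimming the diff, the pattern this introduces boils down to
the sketch below; it merely condenses the hunks that follow and adds
nothing new:

/* Call sites that can proceed without a Hyper-V context bail out early. */
void kvm_hv_process_stimers(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu);

	if (!hv_vcpu)
		return;		/* no context, so no stimers can be pending */

	/* ... existing stimer processing ... */
}

/* vp_index readers fall back to the vCPU index, which is what vp_index
 * is initialized to until the guest changes it.
 */
static inline u32 vcpu_to_hv_vpindex(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu);

	return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
}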
Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/hyperv.c | 18 +++++++++++------- arch/x86/kvm/hyperv.h | 10 ++++++++++ arch/x86/kvm/lapic.c | 6 ++++-- arch/x86/kvm/vmx/vmx.c | 4 +--- arch/x86/kvm/x86.c | 7 +++++-- 5 files changed, 31 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 82f51346118f..77deaadb8575 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -141,10 +141,10 @@ static struct kvm_vcpu *get_vcpu_by_vpidx(struct kvm *kvm, u32 vpidx) return NULL; vcpu = kvm_get_vcpu(kvm, vpidx); - if (vcpu && vcpu_to_hv_vcpu(vcpu)->vp_index == vpidx) + if (vcpu && vcpu_to_hv_vpindex(vcpu) == vpidx) return vcpu; kvm_for_each_vcpu(i, vcpu, kvm) - if (vcpu_to_hv_vcpu(vcpu)->vp_index == vpidx) + if (vcpu_to_hv_vpindex(vcpu) == vpidx) return vcpu; return NULL; } @@ -377,9 +377,8 @@ static int syndbg_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host) break; } - trace_kvm_hv_syndbg_get_msr(vcpu->vcpu_id, - vcpu_to_hv_vcpu(vcpu)->vp_index, msr, - *pdata); + trace_kvm_hv_syndbg_get_msr(vcpu->vcpu_id, vcpu_to_hv_vpindex(vcpu), + msr, *pdata); return 0; } @@ -806,6 +805,9 @@ void kvm_hv_process_stimers(struct kvm_vcpu *vcpu) u64 time_now, exp_time; int i; + if (!hv_vcpu) + return; + for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++) if (test_and_clear_bit(i, hv_vcpu->stimer_pending_bitmap)) { stimer = &hv_vcpu->stimer[i]; @@ -842,6 +844,9 @@ bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu) { struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + if (!hv_vcpu) + return false; + if (!(hv_vcpu->hv_vapic & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE)) return false; return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED; @@ -1485,8 +1490,7 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask( bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS); kvm_for_each_vcpu(i, vcpu, kvm) { - if (test_bit(vcpu_to_hv_vcpu(vcpu)->vp_index, - (unsigned long *)vp_bitmap)) + if (test_bit(vcpu_to_hv_vpindex(vcpu), (unsigned long *)vp_bitmap)) __set_bit(i, vcpu_bitmap); } return vcpu_bitmap; diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index 6300038e7a52..9ec7d686145a 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -78,6 +78,13 @@ static inline struct kvm_hv_syndbg *vcpu_to_hv_syndbg(struct kvm_vcpu *vcpu) return &vcpu->kvm->arch.hyperv.hv_syndbg; } +static inline u32 vcpu_to_hv_vpindex(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + + return hv_vcpu ? 
hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu); +} + int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host); int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host); @@ -116,6 +123,9 @@ static inline bool kvm_hv_has_stimer_pending(struct kvm_vcpu *vcpu) { struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + if (!hv_vcpu) + return false; + return !bitmap_empty(hv_vcpu->stimer_pending_bitmap, HV_SYNIC_STIMER_COUNT); } diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 3136e05831cf..473c187263ca 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -1245,7 +1245,8 @@ static int apic_set_eoi(struct kvm_lapic *apic) apic_clear_isr(vector, apic); apic_update_ppr(apic); - if (test_bit(vector, vcpu_to_synic(apic->vcpu)->vec_bitmap)) + if (vcpu_to_hv_vcpu(apic->vcpu) && + test_bit(vector, vcpu_to_synic(apic->vcpu)->vec_bitmap)) kvm_hv_synic_send_eoi(apic->vcpu, vector); kvm_ioapic_send_eoi(apic, vector); @@ -2512,7 +2513,8 @@ int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu) */ apic_clear_irr(vector, apic); - if (test_bit(vector, vcpu_to_synic(vcpu)->auto_eoi_bitmap)) { + if (vcpu_to_hv_vcpu(vcpu) && + test_bit(vector, vcpu_to_synic(vcpu)->auto_eoi_bitmap)) { /* * For auto-EOI interrupts, there might be another pending * interrupt above PPR, so check whether to raise another diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 7fe09b69a465..c19673a5b1bd 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6734,12 +6734,10 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu) /* All fields are clean at this point */ if (static_branch_unlikely(&enable_evmcs)) { - struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); - current_evmcs->hv_clean_fields |= HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL; - current_evmcs->hv_vp_id = hv_vcpu->vp_index; + current_evmcs->hv_vp_id = vcpu_to_hv_vpindex(vcpu); } /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. 
Restore it if needed */ diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 30fbbf53ff1e..fa077b47c0ed 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8727,8 +8727,11 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu) if (!kvm_apic_hw_enabled(vcpu->arch.apic)) return; - bitmap_or((ulong *)eoi_exit_bitmap, vcpu->arch.ioapic_handled_vectors, - vcpu_to_synic(vcpu)->vec_bitmap, 256); + if (vcpu_to_hv_vcpu(vcpu)) + bitmap_or((ulong *)eoi_exit_bitmap, + vcpu->arch.ioapic_handled_vectors, + vcpu_to_synic(vcpu)->vec_bitmap, 256); + kvm_x86_ops.load_eoi_exitmap(vcpu, eoi_exit_bitmap); } From patchwork Wed Jan 13 14:37:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vitaly Kuznetsov X-Patchwork-Id: 12016927 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44CD3C433E9 for ; Wed, 13 Jan 2021 14:39:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 049652343B for ; Wed, 13 Jan 2021 14:39:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726931AbhAMOjG (ORCPT ); Wed, 13 Jan 2021 09:39:06 -0500 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:29437 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726868AbhAMOjF (ORCPT ); Wed, 13 Jan 2021 09:39:05 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1610548658; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=r0W8OW81Y0YZvDnYJ8uNeryDJKzNsrknOXDICtVsMuw=; b=fZDYeksoJWqxItkYstB0ED/F4r4Li5eldYuOIsKumaLwiozV3aP0thGqSBgMB+IvXLb/Ma tLVtyRoqn+Bz/0Seq8xtN0MLPo2kox6d71WW6tNBRWhtl6K2JEceMqAyxHmj1huh/QfD/y lR4Rz/n2NkGcNNv5vn4D/E7fuT9Fqoo= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-279-IUUfH3PaNe-Vkv3klOdY7Q-1; Wed, 13 Jan 2021 09:37:36 -0500 X-MC-Unique: IUUfH3PaNe-Vkv3klOdY7Q-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 25742107ACFE; Wed, 13 Jan 2021 14:37:35 +0000 (UTC) Received: from vitty.brq.redhat.com (unknown [10.40.193.20]) by smtp.corp.redhat.com (Postfix) with ESMTP id 24A345F708; Wed, 13 Jan 2021 14:37:32 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson Subject: [PATCH 5/7] KVM: x86: hyper-v: Allocate 'struct kvm_vcpu_hv' dynamically Date: Wed, 13 Jan 2021 15:37:19 +0100 Message-Id: <20210113143721.328594-6-vkuznets@redhat.com> In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com> References: <20210113143721.328594-1-vkuznets@redhat.com> MIME-Version: 
1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Hyper-V context is only needed for guests which use Hyper-V emulation in KVM (e.g. Windows/Hyper-V guests). 'struct kvm_vcpu_hv' is, however, quite big, it accounts for more than 1/4 of the total 'struct kvm_vcpu_arch' which is also quite big already. This all looks like a waste. Allocate 'struct kvm_vcpu_hv' dynamically. This patch does not bring any (intentional) functional change as we still allocate the context unconditionally but it paves the way to doing that only when needed. Signed-off-by: Vitaly Kuznetsov --- arch/x86/include/asm/kvm_host.h | 3 ++- arch/x86/kvm/hyperv.c | 16 ++++++++++++++-- arch/x86/kvm/hyperv.h | 13 ++++++------- arch/x86/kvm/x86.c | 7 +++++-- 4 files changed, 27 insertions(+), 12 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 3ab7b46087b7..94d00926b7ad 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -510,6 +510,7 @@ struct kvm_vcpu_hv_synic { /* Hyper-V per vcpu emulation context */ struct kvm_vcpu_hv { + struct kvm_vcpu *vcpu; u32 vp_index; u64 hv_vapic; s64 runtime_offset; @@ -717,7 +718,7 @@ struct kvm_vcpu_arch { /* used for guest single stepping over the given code position */ unsigned long singlestep_rip; - struct kvm_vcpu_hv hyperv; + struct kvm_vcpu_hv *hyperv; cpumask_var_t wbinvd_dirty_mask; diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 77deaadb8575..df7101b721e7 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -838,6 +838,9 @@ void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu) for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++) stimer_cleanup(&hv_vcpu->stimer[i]); + + kfree(hv_vcpu); + vcpu->arch.hyperv = NULL; } bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu) @@ -887,16 +890,25 @@ static void stimer_init(struct kvm_vcpu_hv_stimer *stimer, int timer_index) stimer_prepare_msg(stimer); } -void kvm_hv_vcpu_init(struct kvm_vcpu *vcpu) +int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu) { - struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + struct kvm_vcpu_hv *hv_vcpu; int i; + hv_vcpu = kzalloc(sizeof(struct kvm_vcpu_hv), GFP_KERNEL_ACCOUNT); + if (!hv_vcpu) + return -ENOMEM; + + vcpu->arch.hyperv = hv_vcpu; + hv_vcpu->vcpu = vcpu; + synic_init(&hv_vcpu->synic); bitmap_zero(hv_vcpu->stimer_pending_bitmap, HV_SYNIC_STIMER_COUNT); for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++) stimer_init(&hv_vcpu->stimer[i], i); + + return 0; } void kvm_hv_vcpu_postcreate(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index 9ec7d686145a..a19e298463d0 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -52,20 +52,19 @@ static inline struct kvm_vcpu_hv *vcpu_to_hv_vcpu(struct kvm_vcpu *vcpu) { - return &vcpu->arch.hyperv; + return vcpu->arch.hyperv; } static inline struct kvm_vcpu *hv_vcpu_to_vcpu(struct kvm_vcpu_hv *hv_vcpu) { - struct kvm_vcpu_arch *arch; - - arch = container_of(hv_vcpu, struct kvm_vcpu_arch, hyperv); - return container_of(arch, struct kvm_vcpu, arch); + return hv_vcpu->vcpu; } static inline struct kvm_vcpu_hv_synic *vcpu_to_synic(struct kvm_vcpu *vcpu) { - return &vcpu->arch.hyperv.synic; + struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + + return &hv_vcpu->synic; } static inline struct kvm_vcpu *synic_to_vcpu(struct kvm_vcpu_hv_synic *synic) @@ -96,7 +95,7 @@ int kvm_hv_synic_set_irq(struct kvm *kvm, u32 vcpu_id, u32 sint); void kvm_hv_synic_send_eoi(struct kvm_vcpu 
*vcpu, int vector); int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages); -void kvm_hv_vcpu_init(struct kvm_vcpu *vcpu); +int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu); void kvm_hv_vcpu_postcreate(struct kvm_vcpu *vcpu); void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index fa077b47c0ed..e08209f570f0 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9990,11 +9990,12 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.pending_external_vector = -1; vcpu->arch.preempted_in_kernel = false; - kvm_hv_vcpu_init(vcpu); + if (kvm_hv_vcpu_init(vcpu)) + goto free_guest_fpu; r = kvm_x86_ops.vcpu_create(vcpu); if (r) - goto free_guest_fpu; + goto free_hv_vcpu; vcpu->arch.arch_capabilities = kvm_get_arch_capabilities(); vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT; @@ -10005,6 +10006,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu_put(vcpu); return 0; +free_hv_vcpu: + kvm_hv_vcpu_uninit(vcpu); free_guest_fpu: kvm_free_guest_fpu(vcpu); free_user_fpu: From patchwork Wed Jan 13 14:37:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vitaly Kuznetsov X-Patchwork-Id: 12016929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0CC2AC433E0 for ; Wed, 13 Jan 2021 14:39:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8219F2343B for ; Wed, 13 Jan 2021 14:39:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726985AbhAMOjI (ORCPT ); Wed, 13 Jan 2021 09:39:08 -0500 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:60924 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726868AbhAMOjI (ORCPT ); Wed, 13 Jan 2021 09:39:08 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1610548661; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=NfNrpkKcMuKeMITHCt1E+wAp1t3V4G3JUZuVhHSG9Cw=; b=SSQde7VPY3P0zjld7dz3hmdoZGxheP6O5zgsWebYyPdfaL+5U2ccvN8eQoFrUPMkPiAs5+ mjm+Sr2NZIbpRBT8vbU1B7gDS6IqF7AJ+RgQfnhJQu6jcRPjTldBwpE5FNSsBjrAIu3eAg dTmIxx7CrV0EBtEzY9wTZC3/ge1agOY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-266-DPFeR_zFPN6s4ddo8Un6XA-1; Wed, 13 Jan 2021 09:37:38 -0500 X-MC-Unique: DPFeR_zFPN6s4ddo8Un6XA-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2A7B1107ACFC; Wed, 13 Jan 2021 14:37:37 +0000 (UTC) Received: from vitty.brq.redhat.com (unknown [10.40.193.20]) by smtp.corp.redhat.com (Postfix) with ESMTP id 843AD5C3E4; Wed, 
13 Jan 2021 14:37:35 +0000 (UTC) From: Vitaly Kuznetsov To: kvm@vger.kernel.org, Paolo Bonzini Cc: Sean Christopherson , Wanpeng Li , Jim Mattson Subject: [PATCH 6/7] KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional Date: Wed, 13 Jan 2021 15:37:20 +0100 Message-Id: <20210113143721.328594-7-vkuznets@redhat.com> In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com> References: <20210113143721.328594-1-vkuznets@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Hyper-V emulation is enabled in KVM unconditionally. This is bad at least from security standpoint as it is an extra attack surface. Ideally, there should be a per-VM capability explicitly enabled by VMM but currently it is not the case and we can't mandate one without breaking backwards compatibility. We can, however, check guest visible CPUIDs and only enable Hyper-V emulation when "Hv#1" interface was exposed in HYPERV_CPUID_INTERFACE. Note, VMMs are free to act in any sequence they like, e.g. they can try to set MSRs first and CPUIDs later so we still need to allow the host to read/write Hyper-V specific MSRs unconditionally. Signed-off-by: Vitaly Kuznetsov --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/cpuid.c | 2 ++ arch/x86/kvm/hyperv.c | 27 +++++++++++++++++++++++---- arch/x86/kvm/hyperv.h | 3 ++- arch/x86/kvm/x86.c | 2 +- 5 files changed, 29 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 94d00926b7ad..c27cbe3baccc 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -718,6 +718,7 @@ struct kvm_vcpu_arch { /* used for guest single stepping over the given code position */ unsigned long singlestep_rip; + bool hyperv_enabled; struct kvm_vcpu_hv *hyperv; cpumask_var_t wbinvd_dirty_mask; diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 13036cf0b912..3768491ee67d 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -181,6 +181,8 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu) vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63); + kvm_hv_set_cpuid(vcpu); + /* Invoke the vendor callback only after the above state is updated. 
*/ kvm_x86_ops.vcpu_after_set_cpuid(vcpu); } diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index df7101b721e7..81166401c353 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -36,6 +36,9 @@ #include "trace.h" #include "irq.h" +/* "Hv#1" signature */ +#define HYPERV_CPUID_SIGNATURE_EAX 0x31237648 + #define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, 64) static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer, @@ -1457,6 +1460,9 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host) { + if (!host && !vcpu->arch.hyperv_enabled) + return 1; + if (kvm_hv_msr_partition_wide(msr)) { int r; @@ -1470,6 +1476,9 @@ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host) int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host) { + if (!host && !vcpu->arch.hyperv_enabled) + return 1; + if (kvm_hv_msr_partition_wide(msr)) { int r; @@ -1684,9 +1693,20 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *current_vcpu, u64 ingpa, u64 outgpa, return HV_STATUS_SUCCESS; } -bool kvm_hv_hypercall_enabled(struct kvm *kvm) +void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu) +{ + struct kvm_cpuid_entry2 *entry; + + entry = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_INTERFACE, 0); + if (entry && entry->eax == HYPERV_CPUID_SIGNATURE_EAX) + vcpu->arch.hyperv_enabled = true; + else + vcpu->arch.hyperv_enabled = false; +} + +bool kvm_hv_hypercall_enabled(struct kvm_vcpu *vcpu) { - return READ_ONCE(kvm->arch.hyperv.hv_guest_os_id) != 0; + return vcpu->arch.hyperv_enabled && vcpu->kvm->arch.hyperv.hv_guest_os_id; } static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result) @@ -2015,8 +2035,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid, break; case HYPERV_CPUID_INTERFACE: - memcpy(signature, "Hv#1\0\0\0\0\0\0\0\0", 12); - ent->eax = signature[0]; + ent->eax = HYPERV_CPUID_SIGNATURE_EAX; break; case HYPERV_CPUID_VERSION: diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index a19e298463d0..070a301738ec 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -87,7 +87,7 @@ static inline u32 vcpu_to_hv_vpindex(struct kvm_vcpu *vcpu) int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host); int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host); -bool kvm_hv_hypercall_enabled(struct kvm *kvm); +bool kvm_hv_hypercall_enabled(struct kvm_vcpu *vcpu); int kvm_hv_hypercall(struct kvm_vcpu *vcpu); void kvm_hv_irq_routing_update(struct kvm *kvm); @@ -136,6 +136,7 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm, void kvm_hv_init_vm(struct kvm *kvm); void kvm_hv_destroy_vm(struct kvm *kvm); +void kvm_hv_set_cpuid(struct kvm_vcpu *vcpu); int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args); int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 __user *entries); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index e08209f570f0..58dd98b3c95c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8088,7 +8088,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu) unsigned long nr, a0, a1, a2, a3, ret; int op_64_bit; - if (kvm_hv_hypercall_enabled(vcpu->kvm)) + if (kvm_hv_hypercall_enabled(vcpu)) return kvm_hv_hypercall(vcpu); nr = kvm_rax_read(vcpu); From patchwork Wed Jan 13 14:37:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 12016931
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson
Subject: [PATCH 7/7] KVM: x86: hyper-v: Allocate Hyper-V context lazily
Date: Wed, 13 Jan 2021 15:37:21 +0100
Message-Id: <20210113143721.328594-8-vkuznets@redhat.com>
In-Reply-To: <20210113143721.328594-1-vkuznets@redhat.com>
References: <20210113143721.328594-1-vkuznets@redhat.com>

The Hyper-V context is only needed for guests which use Hyper-V emulation
in KVM (e.g. Windows/Hyper-V guests), so we don't actually need to
allocate it in kvm_arch_vcpu_create(); we can postpone that until Hyper-V
specific MSRs are accessed or SynIC is enabled.

Once allocated, keep the context alive for the lifetime of the vCPU, as
freeing it would require additional synchronization with other vCPUs and
is not normally supposed to happen anyway.

Note, Hyper-V style hypercall enablement is done by writing to
HV_X64_MSR_GUEST_OS_ID, so we don't need to worry about allocating the
Hyper-V context from kvm_hv_hypercall().
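For orientation, the resulting lazy-allocation flow can be summarized
roughly as in the sketch below; it condenses the hunks that follow
(SynIC/stimer setup and the partition-wide MSR path are omitted) rather
than adding anything new:

static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu_hv *hv_vcpu;

	hv_vcpu = kzalloc(sizeof(*hv_vcpu), GFP_KERNEL_ACCOUNT);
	if (!hv_vcpu)
		return -ENOMEM;

	/* vp_index defaults to the vCPU index, exactly what the removed
	 * kvm_hv_vcpu_postcreate() used to set.
	 */
	hv_vcpu->vp_index = kvm_vcpu_get_idx(vcpu);
	hv_vcpu->vcpu = vcpu;
	vcpu->arch.hyperv = hv_vcpu;
	return 0;
}

int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
{
	if (!host && !vcpu->arch.hyperv_enabled)
		return 1;

	/* First Hyper-V MSR access on this vCPU: allocate the context. */
	if (!vcpu_to_hv_vcpu(vcpu) && kvm_hv_vcpu_init(vcpu))
		return 1;

	/* ... regular per-vCPU MSR handling follows ... */
	return kvm_hv_set_msr(vcpu, msr, data, host);
}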
Signed-off-by: Vitaly Kuznetsov --- arch/x86/kvm/hyperv.c | 33 +++++++++++++++++++++++++-------- arch/x86/kvm/hyperv.h | 2 -- arch/x86/kvm/x86.c | 9 +-------- 3 files changed, 26 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c index 81166401c353..9d52669409c5 100644 --- a/arch/x86/kvm/hyperv.c +++ b/arch/x86/kvm/hyperv.c @@ -839,6 +839,9 @@ void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu) struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); int i; + if (!hv_vcpu) + return; + for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++) stimer_cleanup(&hv_vcpu->stimer[i]); @@ -893,7 +896,7 @@ static void stimer_init(struct kvm_vcpu_hv_stimer *stimer, int timer_index) stimer_prepare_msg(stimer); } -int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu) +static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu) { struct kvm_vcpu_hv *hv_vcpu; int i; @@ -911,19 +914,23 @@ int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu) for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++) stimer_init(&hv_vcpu->stimer[i], i); + hv_vcpu->vp_index = kvm_vcpu_get_idx(vcpu); + return 0; } -void kvm_hv_vcpu_postcreate(struct kvm_vcpu *vcpu) +int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages) { - struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu); + struct kvm_vcpu_hv_synic *synic; + int r; - hv_vcpu->vp_index = kvm_vcpu_get_idx(vcpu); -} + if (!vcpu_to_hv_vcpu(vcpu)) { + r = kvm_hv_vcpu_init(vcpu); + if (r) + return r; + } -int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages) -{ - struct kvm_vcpu_hv_synic *synic = vcpu_to_synic(vcpu); + synic = vcpu_to_synic(vcpu); /* * Hyper-V SynIC auto EOI SINT's are @@ -1463,6 +1470,11 @@ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host) if (!host && !vcpu->arch.hyperv_enabled) return 1; + if (!vcpu_to_hv_vcpu(vcpu)) { + if (kvm_hv_vcpu_init(vcpu)) + return 1; + } + if (kvm_hv_msr_partition_wide(msr)) { int r; @@ -1479,6 +1491,11 @@ int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host) if (!host && !vcpu->arch.hyperv_enabled) return 1; + if (!vcpu_to_hv_vcpu(vcpu)) { + if (kvm_hv_vcpu_init(vcpu)) + return 1; + } + if (kvm_hv_msr_partition_wide(msr)) { int r; diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h index 070a301738ec..4352d3164636 100644 --- a/arch/x86/kvm/hyperv.h +++ b/arch/x86/kvm/hyperv.h @@ -95,8 +95,6 @@ int kvm_hv_synic_set_irq(struct kvm *kvm, u32 vcpu_id, u32 sint); void kvm_hv_synic_send_eoi(struct kvm_vcpu *vcpu, int vector); int kvm_hv_activate_synic(struct kvm_vcpu *vcpu, bool dont_zero_synic_pages); -int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu); -void kvm_hv_vcpu_postcreate(struct kvm_vcpu *vcpu); void kvm_hv_vcpu_uninit(struct kvm_vcpu *vcpu); bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 58dd98b3c95c..73243cc7d029 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9990,12 +9990,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.pending_external_vector = -1; vcpu->arch.preempted_in_kernel = false; - if (kvm_hv_vcpu_init(vcpu)) - goto free_guest_fpu; - r = kvm_x86_ops.vcpu_create(vcpu); if (r) - goto free_hv_vcpu; + goto free_guest_fpu; vcpu->arch.arch_capabilities = kvm_get_arch_capabilities(); vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT; @@ -10006,8 +10003,6 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu_put(vcpu); return 0; -free_hv_vcpu: - kvm_hv_vcpu_uninit(vcpu); free_guest_fpu: kvm_free_guest_fpu(vcpu); free_user_fpu: 
@@ -10031,8 +10026,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
 
-	kvm_hv_vcpu_postcreate(vcpu);
-
 	if (mutex_lock_killable(&vcpu->mutex))
 		return;
 	vcpu_load(vcpu)