From patchwork Thu Apr 7 15:56:14 2022
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 12805441
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Michael Kelley,
    Siddharth Chandrasekaran, linux-kernel@vger.kernel.org
Subject: [PATCH v2 00/31] KVM: x86: hyper-v: Fine-grained TLB flush + Direct
    TLB flush feature
Date: Thu, 7 Apr 2022 17:56:14 +0200
Message-Id: <20220407155645.940890-1-vkuznets@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Changes since v1:
- Move '#include "svm.h"' to PATCH10 to avoid interim build breakage
  through the series.
- Fix a crash in nested_vmx_free_vcpu() when nested_release_evmcs() is
  called while 'to_hv_vcpu() == NULL'.

Original description:

Currently, KVM handles HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} requests
by flushing the whole VPID, which is sub-optimal. This series introduces
the mechanism required to handle these requests at a finer granularity,
flushing only the individual GVAs that were actually requested. On this
foundation, the "Direct Virtual Flush" Hyper-V feature is implemented.
The feature allows L0 to handle Hyper-V TLB flush hypercalls directly,
without the need to reflect the exit to L1. This has at least two
benefits: the reflected vmexit and the consequent vmenter are avoided,
and L0 has precise information on whether the target vCPU is actually
running (and thus requires a kick).
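For background on the "fine-grained" part: per the Hyper-V TLFS, each
64-bit entry in the hypercall's GVA list packs a page-aligned base
address into bits 63:12 and a count of *additional* 4K pages into bits
11:0. A minimal decoding sketch (helper names are illustrative, not the
series' actual API):

```c
#include <stdint.h>

/* Hypothetical helpers for the TLFS "GVA range" encoding used by
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX}:
 *   bits 63:12 - page-aligned base GVA
 *   bits 11:0  - number of additional 4K pages to flush
 */
#define HV_GVA_EXTRA_PAGES_MASK 0xfffULL

static inline uint64_t hv_gva_range_base(uint64_t entry)
{
	/* Mask off the page count to recover the page-aligned start GVA. */
	return entry & ~HV_GVA_EXTRA_PAGES_MASK;
}

static inline uint64_t hv_gva_range_pages(uint64_t entry)
{
	/* "Additional pages" is zero-based, so total pages = count + 1. */
	return (entry & HV_GVA_EXTRA_PAGES_MASK) + 1;
}
```

An entry of `0x5000 | 3` thus describes a flush of four pages starting
at GVA 0x5000, which is what lets the hypervisor avoid a full-VPID
flush when the guest only asked for a handful of pages.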
Vitaly Kuznetsov (31):
  KVM: x86: hyper-v: Resurrect dedicated KVM_REQ_HV_TLB_FLUSH flag
  KVM: x86: hyper-v: Introduce TLB flush ring
  KVM: x86: hyper-v: Handle HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls
    gently
  KVM: x86: hyper-v: Expose support for extended gva ranges for flush
    hypercalls
  KVM: x86: Prepare kvm_hv_flush_tlb() to handle L2's GPAs
  KVM: x86: hyper-v: Don't use sparse_set_to_vcpu_mask() in
    kvm_hv_send_ipi()
  KVM: x86: hyper-v: Create a separate ring for Direct TLB flush
  KVM: x86: hyper-v: Use preallocated buffer in 'struct kvm_vcpu_hv'
    instead of on-stack 'sparse_banks'
  KVM: nVMX: Keep track of hv_vm_id/hv_vp_id when eVMCS is in use
  KVM: nSVM: Keep track of Hyper-V hv_vm_id/hv_vp_id
  KVM: x86: Introduce .post_hv_direct_flush() nested hook
  KVM: x86: hyper-v: Introduce kvm_hv_is_tlb_flush_hcall()
  KVM: x86: hyper-v: Direct TLB flush
  KVM: x86: hyper-v: Introduce fast kvm_hv_direct_tlb_flush_exposed()
    check
  x86/hyperv: Fix 'struct hv_enlightened_vmcs' definition
  KVM: nVMX: hyper-v: Direct TLB flush
  KVM: x86: KVM_REQ_TLB_FLUSH_CURRENT is a superset of
    KVM_REQ_HV_TLB_FLUSH too
  KVM: nSVM: hyper-v: Direct TLB flush
  KVM: x86: Expose Hyper-V Direct TLB flush feature
  KVM: selftests: add hyperv_svm_test to .gitignore
  KVM: selftests: Better XMM read/write helpers
  KVM: selftests: Hyper-V PV IPI selftest
  KVM: selftests: Make it possible to replace PTEs with __virt_pg_map()
  KVM: selftests: Hyper-V PV TLB flush selftest
  KVM: selftests: Sync 'struct hv_enlightened_vmcs' definition with
    hyperv-tlfs.h
  KVM: selftests: nVMX: Allocate Hyper-V partition assist page
  KVM: selftests: nSVM: Allocate Hyper-V partition assist and VP assist
    pages
  KVM: selftests: Sync 'struct hv_vp_assist_page' definition with
    hyperv-tlfs.h
  KVM: selftests: evmcs_test: Direct TLB flush test
  KVM: selftests: Move Hyper-V VP assist page enablement out of evmcs.h
  KVM: selftests: hyperv_svm_test: Add Direct TLB flush test

 arch/x86/include/asm/hyperv-tlfs.h            |   6 +-
 arch/x86/include/asm/kvm_host.h               |  30 +
 arch/x86/kvm/Makefile                         |   3 +-
 arch/x86/kvm/hyperv.c                         | 305 ++++++++-
 arch/x86/kvm/hyperv.h                         |  55 ++
 arch/x86/kvm/svm/hyperv.c                     |  18 +
 arch/x86/kvm/svm/hyperv.h                     |  37 +
 arch/x86/kvm/svm/nested.c                     |  25 +-
 arch/x86/kvm/trace.h                          |  21 +-
 arch/x86/kvm/vmx/evmcs.c                      |  24 +
 arch/x86/kvm/vmx/evmcs.h                      |   4 +
 arch/x86/kvm/vmx/nested.c                     |  32 +
 arch/x86/kvm/x86.c                            |  15 +-
 arch/x86/kvm/x86.h                            |   1 +
 tools/testing/selftests/kvm/.gitignore        |   3 +
 tools/testing/selftests/kvm/Makefile          |   4 +-
 .../selftests/kvm/include/x86_64/evmcs.h      |  40 +-
 .../selftests/kvm/include/x86_64/hyperv.h     |  35 +
 .../selftests/kvm/include/x86_64/processor.h  |  72 +-
 .../selftests/kvm/include/x86_64/svm_util.h   |  10 +
 .../selftests/kvm/include/x86_64/vmx.h        |   4 +
 .../testing/selftests/kvm/lib/x86_64/hyperv.c |  21 +
 .../selftests/kvm/lib/x86_64/processor.c      |   6 +-
 tools/testing/selftests/kvm/lib/x86_64/svm.c  |  10 +
 tools/testing/selftests/kvm/lib/x86_64/vmx.c  |   7 +
 .../selftests/kvm/max_guest_memory_test.c     |   2 +-
 .../testing/selftests/kvm/x86_64/evmcs_test.c |  53 +-
 .../selftests/kvm/x86_64/hyperv_features.c    |   5 +-
 .../testing/selftests/kvm/x86_64/hyperv_ipi.c | 362 ++++++++++
 .../selftests/kvm/x86_64/hyperv_svm_test.c    |  60 +-
 .../selftests/kvm/x86_64/hyperv_tlb_flush.c   | 647 ++++++++++++++++++
 .../selftests/kvm/x86_64/mmu_role_test.c      |   2 +-
 32 files changed, 1797 insertions(+), 122 deletions(-)
 create mode 100644 arch/x86/kvm/svm/hyperv.c
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/hyperv.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c
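The "TLB flush ring" named in the series can be pictured as a small
per-vCPU queue of pending GVA-range entries, with a fall-back to a full
flush when the ring overflows. A simplified, self-contained sketch (all
names hypothetical, purely illustrating the idea; the real
implementation lives in arch/x86/kvm/hyperv.c):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative fixed-size flush ring: producers queue GVA-range
 * entries; if the ring fills up, the producer degrades the request to
 * a full-VPID flush instead of dropping entries. */
#define FLUSH_RING_SIZE 16

struct flush_ring {
	uint64_t entries[FLUSH_RING_SIZE];
	unsigned int head;	/* consumer index */
	unsigned int tail;	/* producer index */
	bool flush_all;		/* set on overflow: flush the whole VPID */
};

/* Returns true if the entry was queued, false if the ring was full and
 * the request was converted into a full flush. */
static bool flush_ring_add(struct flush_ring *r, uint64_t entry)
{
	unsigned int next = (r->tail + 1) % FLUSH_RING_SIZE;

	if (next == r->head) {
		r->flush_all = true;
		return false;
	}
	r->entries[r->tail] = entry;
	r->tail = next;
	return true;
}

static unsigned int flush_ring_count(const struct flush_ring *r)
{
	return (r->tail + FLUSH_RING_SIZE - r->head) % FLUSH_RING_SIZE;
}
```

This degrade-on-overflow design keeps the common case (a handful of
GVAs) cheap while never losing correctness: a full flush is always a
safe superset of any queued fine-grained request.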