From patchwork Tue Mar 1 18:26:29 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765021
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 01/11] KVM: x86: SVM: move nested_npt_enabled to svm.h
Date: Tue, 1 Mar 2022 20:26:29 +0200
Message-Id: <20220301182639.559568-2-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

It will be used in other places.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 5 -----
 arch/x86/kvm/svm/svm.h    | 9 +++++++++
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 96bab464967f2..62cda8ae71bbc 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -454,11 +454,6 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
     vmcb12->control.exit_int_info = exit_int_info;
 }
 
-static inline bool nested_npt_enabled(struct vcpu_svm *svm)
-{
-    return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
-}
-
 static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
 {
     /*
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 70850cbe5bcb5..c8dedc4a068d2 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -509,6 +509,11 @@ void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
 #define NESTED_EXIT_DONE      1 /* Exit caused nested vmexit  */
 #define NESTED_EXIT_CONTINUE  2 /* Further checks needed      */
 
+static inline bool nested_npt_enabled(struct vcpu_svm *svm)
+{
+    return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
+}
+
 static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu)
 {
     struct vcpu_svm *svm = to_svm(vcpu);
@@ -626,4 +631,8 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 void __svm_sev_es_vcpu_run(unsigned long vmcb_pa);
 void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);
 
+/* svm.c */
+#define MSR_INVALID 0xffffffffU
+
 #endif
From patchwork Tue Mar 1 18:26:30 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 02/11] KVM: x86: SVM: allow AVIC to co-exist with a nested guest running
Date: Tue, 1 Mar 2022 20:26:30 +0200
Message-Id: <20220301182639.559568-3-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

Inhibit the AVIC of a vCPU that is running nested, for the duration of the
nested run, so that all interrupts arriving for it, whether from its vCPU
siblings or from KVM itself, are delivered as normal IPIs and cause that
vCPU to VM exit.

Note that unlike normal AVIC inhibition, there is no need to update the
AVIC mmio memslot, because the nested guest uses its own set of paging
tables. That also means that AVIC doesn't need to be inhibited VM wide.

Note that, in theory, when a nested guest doesn't intercept physical
interrupts, we could keep using AVIC to deliver them to it, but we don't
bother doing so for now. Moreover, once nested AVIC is implemented, the
nested guest will likely use it, which would rule this optimization out
anyway (the real AVIC can't back both L1 and L2 at the same time).

Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  7 ++++++-
 arch/x86/kvm/svm/avic.c            |  6 +++++-
 arch/x86/kvm/svm/nested.c          | 15 ++++++++++-----
 arch/x86/kvm/svm/svm.c             | 31 +++++++++++++++++++-----------
 arch/x86/kvm/svm/svm.h             |  1 +
 arch/x86/kvm/x86.c                 | 15 +++++++++++++--
 7 files changed, 56 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 29affccb353cd..eb16e32117610 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -126,6 +126,7 @@ KVM_X86_OP_OPTIONAL(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
+KVM_X86_OP_OPTIONAL_RET0(vcpu_has_apicv_inhibit_condition);
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ccec837e520d8..efe7414361de8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1039,7 +1039,6 @@ struct kvm_x86_msr_filter {
 #define APICV_INHIBIT_REASON_DISABLE    0
 #define APICV_INHIBIT_REASON_HYPERV     1
-#define APICV_INHIBIT_REASON_NESTED     2
 #define APICV_INHIBIT_REASON_IRQWIN     3
 #define APICV_INHIBIT_REASON_PIT_REINJ  4
 #define APICV_INHIBIT_REASON_X2APIC     5
@@ -1490,6 +1489,12 @@ struct kvm_x86_ops {
     int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
 
     void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
+
+    /*
+     * Returns true if for some reason APICv (e.g guest mode)
+     * must be inhibited on this vCPU
+     */
+    bool (*vcpu_has_apicv_inhibit_condition)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index aea0b13773fd3..d5ce0868c5a74 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -357,6 +357,11 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
     return 1;
 }
 
+bool avic_has_vcpu_inhibit_condition(struct kvm_vcpu *vcpu)
+{
+    return is_guest_mode(vcpu);
+}
+
 static u32 *avic_get_logical_id_entry(struct kvm_vcpu *vcpu, u32 ldr, bool flat)
 {
     struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
@@ -859,7 +864,6 @@ bool avic_check_apicv_inhibit_reasons(ulong bit)
     ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
               BIT(APICV_INHIBIT_REASON_ABSENT) |
               BIT(APICV_INHIBIT_REASON_HYPERV) |
-             BIT(APICV_INHIBIT_REASON_NESTED) |
              BIT(APICV_INHIBIT_REASON_IRQWIN) |
              BIT(APICV_INHIBIT_REASON_PIT_REINJ) |
              BIT(APICV_INHIBIT_REASON_X2APIC) |
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 62cda8ae71bbc..6dffa6c661493 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -575,11 +575,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
      * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes.
      */
 
-    /*
-     * Also covers avic_vapic_bar, avic_backing_page, avic_logical_id,
-     * avic_physical_id.
-     */
-    WARN_ON(kvm_apicv_activated(svm->vcpu.kvm));
 
     /* Copied from vmcb01.  msrpm_base can be overwritten later.  */
     svm->vmcb->control.nested_ctl = svm->vmcb01.ptr->control.nested_ctl;
@@ -683,6 +678,9 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 
     svm_set_gif(svm, true);
 
+    if (kvm_vcpu_apicv_active(vcpu))
+        kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
+
     return 0;
 }
 
@@ -947,6 +945,13 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
     if (unlikely(svm->vmcb->save.rflags & X86_EFLAGS_TF))
         kvm_queue_exception(&(svm->vcpu), DB_VECTOR);
 
+    /*
+     * Un-inhibit the AVIC right away, so that other vCPUs can start
+     * to benefit from VM-exit less IPI right away
+     */
+    if (kvm_apicv_activated(vcpu->kvm))
+        kvm_vcpu_update_apicv(vcpu);
+
     return 0;
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7038c76fa8410..08ccf0db91f72 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1393,7 +1393,8 @@ static void svm_set_vintr(struct vcpu_svm *svm)
     /*
      * The following fields are ignored when AVIC is enabled
      */
-    WARN_ON(kvm_apicv_activated(svm->vcpu.kvm));
+    if (!is_guest_mode(&svm->vcpu))
+        WARN_ON(kvm_apicv_activated(svm->vcpu.kvm));
 
     svm_set_intercept(svm, INTERCEPT_VINTR);
 
@@ -2899,10 +2900,16 @@ static int interrupt_window_interception(struct kvm_vcpu *vcpu)
     svm_clear_vintr(to_svm(vcpu));
 
     /*
-     * For AVIC, the only reason to end up here is ExtINTs.
+     * If not running nested, for AVIC, the only reason to end up here is ExtINTs.
      * In this case AVIC was temporarily disabled for
      * requesting the IRQ window and we have to re-enable it.
+     *
+     * If running nested, still uninhibit the AVIC in case irq window
+     * was requested when it was not running nested.
+     * All vCPUs which run nested will have their AVIC still
+     * inhibited due to AVIC inhibition override for that.
      */
+
     kvm_request_apicv_update(vcpu->kvm, true, APICV_INHIBIT_REASON_IRQWIN);
 
     ++vcpu->stat.irq_window_exits;
@@ -3500,8 +3507,16 @@ static void svm_enable_irq_window(struct kvm_vcpu *vcpu)
          * unless we have pending ExtINT since it cannot be injected
          * via AVIC. In such case, we need to temporarily disable AVIC,
          * and fallback to injecting IRQ via V_IRQ.
+         *
+         * If running nested, this vCPU will use separate page tables
+         * which don't have L1's AVIC mapped, and the AVIC is
+         * already inhibited thus there is no need for global
+         * AVIC inhibition.
          */
-        kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_IRQWIN);
+
+        if (!is_guest_mode(vcpu))
+            kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_IRQWIN);
+
         svm_set_vintr(svm);
     }
 }
@@ -3956,14 +3971,6 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
         if (guest_cpuid_has(vcpu, X86_FEATURE_X2APIC))
             kvm_request_apicv_update(vcpu->kvm, false,
                          APICV_INHIBIT_REASON_X2APIC);
-
-        /*
-         * Currently, AVIC does not work with nested virtualization.
-         * So, we disable AVIC when cpuid for SVM is set in the L1 guest.
-         */
-        if (nested && guest_cpuid_has(vcpu, X86_FEATURE_SVM))
-            kvm_request_apicv_update(vcpu->kvm, false,
-                         APICV_INHIBIT_REASON_NESTED);
     }
     init_vmcb_after_set_cpuid(vcpu);
 }
@@ -4625,6 +4632,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
     .complete_emulated_msr = svm_complete_emulated_msr,
 
     .vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
+    .vcpu_has_apicv_inhibit_condition = avic_has_vcpu_inhibit_condition,
 };
 
 /*
@@ -4808,6 +4816,7 @@ static __init int svm_hardware_setup(void)
     } else {
         svm_x86_ops.vcpu_blocking = NULL;
         svm_x86_ops.vcpu_unblocking = NULL;
+        svm_x86_ops.vcpu_has_apicv_inhibit_condition = NULL;
     }
 
     if (vls) {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c8dedc4a068d2..3ef2681244e84 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -595,6 +595,7 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 void avic_vcpu_blocking(struct kvm_vcpu *vcpu);
 void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
 void avic_ring_doorbell(struct kvm_vcpu *vcpu);
+bool avic_has_vcpu_inhibit_condition(struct kvm_vcpu *vcpu);
 
 /* sev.c */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c712c33c1521f..14b964eb079e7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9681,6 +9681,11 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm)
     kvm_make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
 }
 
+static bool vcpu_has_apicv_inhibit_condition(struct kvm_vcpu *vcpu)
+{
+    return static_call(kvm_x86_vcpu_has_apicv_inhibit_condition)(vcpu);
+}
+
 void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 {
     bool activate;
@@ -9690,7 +9695,9 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 
     down_read(&vcpu->kvm->arch.apicv_update_lock);
 
-    activate = kvm_apicv_activated(vcpu->kvm);
+    activate = kvm_apicv_activated(vcpu->kvm) &&
+           !vcpu_has_apicv_inhibit_condition(vcpu);
+
     if (vcpu->arch.apicv_active == activate)
         goto out;
 
@@ -10091,7 +10098,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
      * per-VM state, and responding vCPUs must wait for the update
      * to complete before servicing KVM_REQ_APICV_UPDATE.
      */
-    WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
+    if (vcpu_has_apicv_inhibit_condition(vcpu))
+        WARN_ON_ONCE(kvm_vcpu_apicv_active(vcpu));
+    else
+        WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
+
     exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
     if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
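[The per-vCPU inhibit combines with the VM-wide state roughly as in the following standalone model: activation requires both that no VM-wide inhibit reason is set and that the vCPU-local condition (here, guest mode) is false. This is a simplified sketch with stand-in types, not the kernel code itself.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the VM-wide inhibit reason bitmask. */
struct vm_state { uint64_t apicv_inhibit_reasons; };
struct vcpu_state { struct vm_state *vm; bool in_guest_mode; bool apicv_active; };

static bool vm_apicv_activated(struct vm_state *vm)
{
    return vm->apicv_inhibit_reasons == 0;
}

/* Models kvm_vcpu_update_apicv() after the patch: AND in the local condition. */
static void vcpu_update_apicv(struct vcpu_state *vcpu)
{
    vcpu->apicv_active = vm_apicv_activated(vcpu->vm) &&
                 !vcpu->in_guest_mode;
}

int main(void)
{
    struct vm_state vm = { 0 };
    struct vcpu_state vcpu = { .vm = &vm };

    vcpu_update_apicv(&vcpu);
    printf("L1 mode: active=%d\n", vcpu.apicv_active);  /* 1 */

    vcpu.in_guest_mode = true;  /* vCPU enters the nested guest */
    vcpu_update_apicv(&vcpu);
    printf("nested:  active=%d\n", vcpu.apicv_active);  /* 0 */
    return 0;
}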
Peter Anvin" , Joerg Roedel , x86@kernel.org, Maxim Levitsky , Ingo Molnar , Zhi Wang , Dave Hansen , intel-gfx@lists.freedesktop.org, Borislav Petkov , Rodrigo Vivi , Thomas Gleixner , intel-gvt-dev@lists.freedesktop.org, Jim Mattson , Tvrtko Ursulin , Sean Christopherson , linux-kernel@vger.kernel.org, Paolo Bonzini , Vitaly Kuznetsov Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This will be used to enable write tracking from nested AVIC code and can also be used to enable write tracking in GVT-g module when it actually uses it as opposed to always enabling it, when the module is compiled in the kernel. No functional change intended. Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/include/asm/kvm_page_track.h | 1 + arch/x86/kvm/mmu.h | 8 +++++--- arch/x86/kvm/mmu/mmu.c | 16 +++++++++------- arch/x86/kvm/mmu/page_track.c | 10 ++++++++-- 5 files changed, 24 insertions(+), 13 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index efe7414361de8..83f734e201e24 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1222,7 +1222,7 @@ struct kvm_arch { * is used as one input when determining whether certain memslot * related allocations are necessary. */ - bool shadow_root_allocated; + bool mmu_page_tracking_enabled; #if IS_ENABLED(CONFIG_HYPERV) hpa_t hv_root_tdp; diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h index eb186bc57f6a9..955a5ae07b10e 100644 --- a/arch/x86/include/asm/kvm_page_track.h +++ b/arch/x86/include/asm/kvm_page_track.h @@ -50,6 +50,7 @@ int kvm_page_track_init(struct kvm *kvm); void kvm_page_track_cleanup(struct kvm *kvm); bool kvm_page_track_write_tracking_enabled(struct kvm *kvm); +int kvm_page_track_write_tracking_enable(struct kvm *kvm); int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot); void kvm_page_track_free_memslot(struct kvm_memory_slot *slot); diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 1d0c1904d69a3..023b192637078 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -268,7 +268,7 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu); int kvm_mmu_post_init_vm(struct kvm *kvm); void kvm_mmu_pre_destroy_vm(struct kvm *kvm); -static inline bool kvm_shadow_root_allocated(struct kvm *kvm) +static inline bool mmu_page_tracking_enabled(struct kvm *kvm) { /* * Read shadow_root_allocated before related pointers. Hence, threads @@ -276,9 +276,11 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm) * see the pointers. Pairs with smp_store_release in * mmu_first_shadow_root_alloc. 
*/ - return smp_load_acquire(&kvm->arch.shadow_root_allocated); + return smp_load_acquire(&kvm->arch.mmu_page_tracking_enabled); } +int mmu_enable_write_tracking(struct kvm *kvm); + #ifdef CONFIG_X86_64 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; } #else @@ -287,7 +289,7 @@ static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; } static inline bool kvm_memslots_have_rmaps(struct kvm *kvm) { - return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm); + return !is_tdp_mmu_enabled(kvm) || mmu_page_tracking_enabled(kvm); } static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b2c1c4eb60070..0368ef3fe582e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3365,7 +3365,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) return r; } -static int mmu_first_shadow_root_alloc(struct kvm *kvm) +int mmu_enable_write_tracking(struct kvm *kvm) { struct kvm_memslots *slots; struct kvm_memory_slot *slot; @@ -3375,21 +3375,20 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm) * Check if this is the first shadow root being allocated before * taking the lock. */ - if (kvm_shadow_root_allocated(kvm)) + if (mmu_page_tracking_enabled(kvm)) return 0; mutex_lock(&kvm->slots_arch_lock); /* Recheck, under the lock, whether this is the first shadow root. */ - if (kvm_shadow_root_allocated(kvm)) + if (mmu_page_tracking_enabled(kvm)) goto out_unlock; /* * Check if anything actually needs to be allocated, e.g. all metadata * will be allocated upfront if TDP is disabled. */ - if (kvm_memslots_have_rmaps(kvm) && - kvm_page_track_write_tracking_enabled(kvm)) + if (kvm_memslots_have_rmaps(kvm) && mmu_page_tracking_enabled(kvm)) goto out_success; for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { @@ -3419,7 +3418,7 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm) * all the related pointers are set. 
*/ out_success: - smp_store_release(&kvm->arch.shadow_root_allocated, true); + smp_store_release(&kvm->arch.mmu_page_tracking_enabled, true); out_unlock: mutex_unlock(&kvm->slots_arch_lock); @@ -3456,7 +3455,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) } } - r = mmu_first_shadow_root_alloc(vcpu->kvm); + r = mmu_enable_write_tracking(vcpu->kvm); if (r) return r; @@ -5692,6 +5691,9 @@ void kvm_mmu_init_vm(struct kvm *kvm) node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; kvm_page_track_register_notifier(kvm, node); + + if (IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING) || !tdp_enabled) + mmu_enable_write_tracking(kvm); } void kvm_mmu_uninit_vm(struct kvm *kvm) diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c index 68eb1fb548b61..ce5735909e74c 100644 --- a/arch/x86/kvm/mmu/page_track.c +++ b/arch/x86/kvm/mmu/page_track.c @@ -21,10 +21,16 @@ bool kvm_page_track_write_tracking_enabled(struct kvm *kvm) { - return IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING) || - !tdp_enabled || kvm_shadow_root_allocated(kvm); + return mmu_page_tracking_enabled(kvm); } +int kvm_page_track_write_tracking_enable(struct kvm *kvm) +{ + return mmu_enable_write_tracking(kvm); +} +EXPORT_SYMBOL_GPL(kvm_page_track_write_tracking_enable); + + void kvm_page_track_free_memslot(struct kvm_memory_slot *slot) { int i; From patchwork Tue Mar 1 18:26:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 12765024 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 946B5C433F5 for ; Tue, 1 Mar 2022 18:27:49 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 887FF10E83E; Tue, 1 Mar 2022 18:27:48 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id EF09710E79C for ; Tue, 1 Mar 2022 18:27:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1646159266; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ewdicsLWPzXfBMC9hKfnWlF0kfyJRGxBWt4X1H48+7Q=; b=FKR6ljtUa2K3A9c3JcXyb18wR1TcLcbnbH29NGk2wPCh08/HcauK5OsJPW9mNEYrkz6Y/4 g27Iw02WXP1qDrsnTFYhPibwvZmI2rU5yxJUav7inGaAxtwerj9d7ViiqqjTvWi+1dNeW0 3jBg742kTHlGrXZi9gOgJiCHTwMmKCU= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-345-MH400OFRPKuW55nJtQOaVg-1; Tue, 01 Mar 2022 13:27:41 -0500 X-MC-Unique: MH400OFRPKuW55nJtQOaVg-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0B0561006AA5; Tue, 1 Mar 2022 18:27:38 +0000 (UTC) Received: from localhost.localdomain (unknown 
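[The enable path keeps the existing double-checked pattern: a lock-free acquire-load fast path, a recheck under the mutex, then a release-store to publish. A minimal userspace model of that pattern, using C11 atomics and pthreads as stand-ins for the kernel primitives (build with -pthread); the allocation step is elided.]

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool tracking_enabled;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Models mmu_enable_write_tracking(): idempotent, safe to call concurrently. */
static int enable_write_tracking(void)
{
    /* Fast path: acquire-load, pairs with the release-store below. */
    if (atomic_load_explicit(&tracking_enabled, memory_order_acquire))
        return 0;

    pthread_mutex_lock(&lock);
    /* Recheck under the lock: another thread may have won the race. */
    if (atomic_load_explicit(&tracking_enabled, memory_order_acquire))
        goto out_unlock;

    /* ... allocate per-memslot tracking metadata here ... */

    /* Publish only after the metadata is fully visible. */
    atomic_store_explicit(&tracking_enabled, true, memory_order_release);
out_unlock:
    pthread_mutex_unlock(&lock);
    return 0;
}

int main(void)
{
    enable_write_tracking();
    enable_write_tracking();  /* second call hits the fast path */
    printf("enabled=%d\n", atomic_load(&tracking_enabled));
    return 0;
}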
From patchwork Tue Mar 1 18:26:32 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765024
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 04/11] x86: KVMGT: use kvm_page_track_write_tracking_enable
Date: Tue, 1 Mar 2022 20:26:32 +0200
Message-Id: <20220301182639.559568-5-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

This allows enabling write tracking only when KVMGT is actually used, so
that it carries no penalty otherwise.

Tested by booting a VM with a kvmgt mdev device.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/Kconfig             | 3 ---
 arch/x86/kvm/mmu/mmu.c           | 2 +-
 drivers/gpu/drm/i915/Kconfig     | 1 -
 drivers/gpu/drm/i915/gvt/kvmgt.c | 5 +++++
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index e3cbd77061364..41341905d3734 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -126,7 +126,4 @@ config KVM_XEN
 
 	  If in doubt, say "N".
 
-config KVM_EXTERNAL_WRITE_TRACKING
-	bool
-
 endif # VIRTUALIZATION
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0368ef3fe582e..ba98551f0026d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5692,7 +5692,7 @@ void kvm_mmu_init_vm(struct kvm *kvm)
     node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
     kvm_page_track_register_notifier(kvm, node);
 
-    if (IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING) || !tdp_enabled)
+    if (!tdp_enabled)
         mmu_enable_write_tracking(kvm);
 }
 
diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index a4c94dc2e2164..8bea99622dd58 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -126,7 +126,6 @@ config DRM_I915_GVT_KVMGT
 	depends on DRM_I915_GVT
 	depends on KVM
 	depends on VFIO_MDEV
-	select KVM_EXTERNAL_WRITE_TRACKING
 	default n
 	help
 	  Choose this option if you want to enable KVMGT support for
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 20b82fb036f8c..64ced3c2bc550 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1916,6 +1916,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
     struct intel_vgpu *vgpu;
     struct kvmgt_vdev *vdev;
     struct kvm *kvm;
+    int ret;
 
     vgpu = mdev_get_drvdata(mdev);
     if (handle_valid(vgpu->handle))
@@ -1931,6 +1932,10 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
     if (__kvmgt_vgpu_exist(vgpu, kvm))
         return -EEXIST;
 
+    ret = kvm_page_track_write_tracking_enable(kvm);
+    if (ret)
+        return ret;
+
     info = vzalloc(sizeof(struct kvmgt_guest_info));
     if (!info)
         return -ENOMEM;
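[On the consumer side, the change follows the common "opt in at first use, propagate failure" shape. A standalone model of that pattern, with stand-in functions rather than the real i915/KVM APIs:]

#include <stdbool.h>
#include <stdio.h>

static bool tracking_enabled;

/* Stand-in for kvm_page_track_write_tracking_enable(). */
static int write_tracking_enable(void)
{
    /* In the real code this may allocate and can fail with -ENOMEM. */
    tracking_enabled = true;
    return 0;
}

/* Models kvmgt_guest_init(): enable tracking before allocating guest state. */
static int guest_init(void)
{
    int ret = write_tracking_enable();
    if (ret)
        return ret;  /* fail the init early, before any per-guest allocation */

    /* ... allocate and register per-guest info here ... */
    return 0;
}

int main(void)
{
    printf("guest_init: %d, tracking=%d\n", guest_init(), tracking_enabled);
    return 0;
}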
From patchwork Tue Mar 1 18:26:33 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765025
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 05/11] KVM: x86: mmu: add gfn_in_memslot helper
Date: Tue, 1 Mar 2022 20:26:33 +0200
Message-Id: <20220301182639.559568-6-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

This is a tiny refactoring that makes it a bit cleaner to check whether a
GPA/GFN is within a memslot.

Signed-off-by: Maxim Levitsky
---
 include/linux/kvm_host.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f11039944c08f..c32bfe0e22b80 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1574,6 +1574,13 @@ int kvm_request_irq_source_id(struct kvm *kvm);
 void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id);
 bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args);
 
+
+static inline bool gfn_in_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+    return (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages);
+}
+
+
 /*
  * Returns a pointer to the memslot if it contains gfn.
  * Otherwise returns NULL.
@@ -1584,12 +1591,13 @@ try_get_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
     if (!slot)
         return NULL;
 
-    if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
+    if (gfn_in_memslot(slot, gfn))
         return slot;
     else
         return NULL;
 }
 
+
 /*
  * Returns a pointer to the memslot that contains gfn. Otherwise returns NULL.
  *
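[As a quick illustration of what the helper checks: a memslot covers npages guest frame numbers starting at base_gfn, and a GPA maps to a GFN by dropping the low page-offset bits. A standalone, compilable model with stand-in types; the PAGE_SHIFT of 12 (4 KiB pages, as on x86) is an assumption of this sketch:]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12  /* 4 KiB pages, as on x86 */
typedef uint64_t gfn_t;

struct memory_slot { gfn_t base_gfn; uint64_t npages; };

/* Same half-open range check as the new helper. */
static inline bool gfn_in_memslot(const struct memory_slot *slot, gfn_t gfn)
{
    return gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages;
}

int main(void)
{
    /* Slot mapping guest physical 1 MiB..2 MiB: GFNs 0x100..0x1ff. */
    struct memory_slot slot = { .base_gfn = 0x100, .npages = 0x100 };
    uint64_t gpa = 0x180123;  /* falls inside the slot */

    printf("gpa 0x%llx in slot: %d\n",
           (unsigned long long)gpa,
           gfn_in_memslot(&slot, gpa >> PAGE_SHIFT));  /* prints 1 */
    return 0;
}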
From patchwork Tue Mar 1 18:26:34 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 06/11] KVM: x86: lapic: don't allow to change APIC ID when apic acceleration is enabled
Date: Tue, 1 Mar 2022 20:26:34 +0200
Message-Id: <20220301182639.559568-7-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

No normal guest has any reason to change physical APIC IDs, and allowing
this introduces bugs into the APIC acceleration code.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/lapic.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 80a2020c4db40..ffb5fc6449bc5 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2042,10 +2042,20 @@ static int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
 
     switch (reg) {
     case APIC_ID:        /* Local APIC ID */
-        if (!apic_x2apic_mode(apic))
-            kvm_apic_set_xapic_id(apic, val >> 24);
-        else
+        if (apic_x2apic_mode(apic)) {
             ret = 1;
+            break;
+        }
+        /*
+         * Don't allow setting APIC ID with any APIC acceleration
+         * enabled to avoid unexpected issues
+         */
+        if (enable_apicv && ((val >> 24) != apic->vcpu->vcpu_id)) {
+            kvm_vm_bugged(apic->vcpu->kvm);
+            break;
+        }
+
+        kvm_apic_set_xapic_id(apic, val >> 24);
         break;
 
     case APIC_TASKPRI:
@@ -2613,8 +2623,16 @@ int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
 static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
                 struct kvm_lapic_state *s, bool set)
 {
-    if (apic_x2apic_mode(vcpu->arch.apic)) {
-        u32 *id = (u32 *)(s->regs + APIC_ID);
+    u32 *id = (u32 *)(s->regs + APIC_ID);
+
+    if (!apic_x2apic_mode(vcpu->arch.apic)) {
+        /*
+         * Don't allow setting APIC ID with any APIC acceleration
+         * enabled to avoid unexpected issues
+         */
+        if (enable_apicv && (*id >> 24) != vcpu->vcpu_id)
+            return -EINVAL;
+    } else {
         u32 *ldr = (u32 *)(s->regs + APIC_LDR);
         u64 icr;
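[For context on the `val >> 24` checks: in xAPIC mode the 8-bit APIC ID sits in bits 31:24 of the APIC_ID register, so the guard simply compares that field against the vCPU id. A standalone illustration; the function names are hypothetical, not KVM's:]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* In xAPIC mode the APIC ID occupies bits 31:24 of the APIC_ID register. */
static uint8_t xapic_id_from_reg(uint32_t reg_val)
{
    return (uint8_t)(reg_val >> 24);
}

/* Models the new guard: with acceleration on, reject writes that change the ID. */
static bool apic_id_write_allowed(uint32_t reg_val, uint32_t vcpu_id,
                  bool apicv_enabled)
{
    if (!apicv_enabled)
        return true;  /* legacy behaviour: the write is accepted */
    return xapic_id_from_reg(reg_val) == vcpu_id;
}

int main(void)
{
    uint32_t write_val = 5u << 24;  /* guest tries to set APIC ID 5 */

    printf("vcpu 5: %d, vcpu 3: %d\n",
           apic_id_write_allowed(write_val, 5, true),   /* 1: ID unchanged */
           apic_id_write_allowed(write_val, 3, true));  /* 0: rejected     */
    return 0;
}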
From patchwork Tue Mar 1 18:26:35 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765027
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 07/11] KVM: x86: SVM: remove avic's broken code that updated APIC ID
Date: Tue, 1 Mar 2022 20:26:35 +0200
Message-Id: <20220301182639.559568-8-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

Now that KVM no longer allows the guest to change its APIC ID while AVIC
is enabled, remove the buggy AVIC code that tried to handle such changes.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/avic.c | 35 -----------------------------------
 1 file changed, 35 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index d5ce0868c5a74..90f106d4af45e 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -441,35 +441,6 @@ static int avic_handle_ldr_update(struct kvm_vcpu *vcpu)
     return ret;
 }
 
-static int avic_handle_apic_id_update(struct kvm_vcpu *vcpu)
-{
-    u64 *old, *new;
-    struct vcpu_svm *svm = to_svm(vcpu);
-    u32 id = kvm_xapic_id(vcpu->arch.apic);
-
-    if (vcpu->vcpu_id == id)
-        return 0;
-
-    old = avic_get_physical_id_entry(vcpu, vcpu->vcpu_id);
-    new = avic_get_physical_id_entry(vcpu, id);
-    if (!new || !old)
-        return 1;
-
-    /* We need to move physical_id_entry to new offset */
-    *new = *old;
-    *old = 0ULL;
-    to_svm(vcpu)->avic_physical_id_cache = new;
-
-    /*
-     * Also update the guest physical APIC ID in the logical
-     * APIC ID table entry if already setup the LDR.
-     */
-    if (svm->ldr_reg)
-        avic_handle_ldr_update(vcpu);
-
-    return 0;
-}
-
 static void avic_handle_dfr_update(struct kvm_vcpu *vcpu)
 {
     struct vcpu_svm *svm = to_svm(vcpu);
@@ -488,10 +459,6 @@ static int avic_unaccel_trap_write(struct kvm_vcpu *vcpu)
                  AVIC_UNACCEL_ACCESS_OFFSET_MASK;
 
     switch (offset) {
-    case APIC_ID:
-        if (avic_handle_apic_id_update(vcpu))
-            return 0;
-        break;
     case APIC_LDR:
         if (avic_handle_ldr_update(vcpu))
             return 0;
@@ -583,8 +550,6 @@ int avic_init_vcpu(struct vcpu_svm *svm)
 
 void avic_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
-    if (avic_handle_apic_id_update(vcpu) != 0)
-        return;
     avic_handle_dfr_update(vcpu);
     avic_handle_ldr_update(vcpu);
 }
From patchwork Tue Mar 1 18:26:36 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765028
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 08/11] KVM: x86: SVM: move avic state to separate struct
Date: Tue, 1 Mar 2022 20:26:36 +0200
Message-Id: <20220301182639.559568-9-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

This will make the code a bit easier to read when nested AVIC support is
added.

No functional change intended.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/avic.c | 49 +++++++++++++++++++++++------------------
 arch/x86/kvm/svm/svm.h  | 14 +++++++-----
 2 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 90f106d4af45e..406cdb63646e0 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -69,6 +69,8 @@ int avic_ga_log_notifier(u32 ga_tag)
     unsigned long flags;
     struct kvm_svm *kvm_svm;
     struct kvm_vcpu *vcpu = NULL;
+    struct kvm_svm_avic *avic;
+
     u32 vm_id = AVIC_GATAG_TO_VMID(ga_tag);
     u32 vcpu_id = AVIC_GATAG_TO_VCPUID(ga_tag);
 
@@ -76,9 +78,13 @@ int avic_ga_log_notifier(u32 ga_tag)
     trace_kvm_avic_ga_log(vm_id, vcpu_id);
 
     spin_lock_irqsave(&svm_vm_data_hash_lock, flags);
-    hash_for_each_possible(svm_vm_data_hash, kvm_svm, hnode, vm_id) {
-        if (kvm_svm->avic_vm_id != vm_id)
+    hash_for_each_possible(svm_vm_data_hash, avic, hnode, vm_id) {
+
+
+        if (avic->vm_id != vm_id)
             continue;
+
+        kvm_svm = container_of(avic, struct kvm_svm, avic);
         vcpu = kvm_get_vcpu_by_id(&kvm_svm->kvm, vcpu_id);
         break;
     }
@@ -98,18 +104,18 @@ int avic_ga_log_notifier(u32 ga_tag)
 void avic_vm_destroy(struct kvm *kvm)
 {
     unsigned long flags;
-    struct kvm_svm *kvm_svm = to_kvm_svm(kvm);
+    struct kvm_svm_avic *avic = &to_kvm_svm(kvm)->avic;
 
     if (!enable_apicv)
         return;
 
-    if (kvm_svm->avic_logical_id_table_page)
-        __free_page(kvm_svm->avic_logical_id_table_page);
-    if (kvm_svm->avic_physical_id_table_page)
-        __free_page(kvm_svm->avic_physical_id_table_page);
+    if (avic->logical_id_table_page)
+        __free_page(avic->logical_id_table_page);
+    if (avic->physical_id_table_page)
+        __free_page(avic->physical_id_table_page);
 
     spin_lock_irqsave(&svm_vm_data_hash_lock, flags);
-    hash_del(&kvm_svm->hnode);
+    hash_del(&avic->hnode);
     spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 }
 
@@ -117,10 +123,9 @@ int avic_vm_init(struct kvm *kvm)
 {
     unsigned long flags;
     int err = -ENOMEM;
-    struct kvm_svm *kvm_svm = to_kvm_svm(kvm);
-    struct kvm_svm *k2;
     struct page *p_page;
     struct page *l_page;
+    struct kvm_svm_avic *avic = &to_kvm_svm(kvm)->avic;
     u32 vm_id;
 
     if (!enable_apicv)
@@ -131,14 +136,14 @@ int avic_vm_init(struct kvm *kvm)
     if (!p_page)
         goto free_avic;
 
-    kvm_svm->avic_physical_id_table_page = p_page;
+    avic->physical_id_table_page = p_page;
 
     /* Allocating logical APIC ID table (4KB) */
     l_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
     if (!l_page)
         goto free_avic;
 
-    kvm_svm->avic_logical_id_table_page = l_page;
+    avic->logical_id_table_page = l_page;
 
     spin_lock_irqsave(&svm_vm_data_hash_lock, flags);
 again:
@@ -149,13 +154,15 @@ int avic_vm_init(struct kvm *kvm)
     }
     /* Is it still in use? Only possible if wrapped at least once */
     if (next_vm_id_wrapped) {
-        hash_for_each_possible(svm_vm_data_hash, k2, hnode, vm_id) {
-            if (k2->avic_vm_id == vm_id)
+        struct kvm_svm_avic *avic2;
+
+        hash_for_each_possible(svm_vm_data_hash, avic2, hnode, vm_id) {
+            if (avic2->vm_id == vm_id)
                 goto again;
         }
     }
-    kvm_svm->avic_vm_id = vm_id;
-    hash_add(svm_vm_data_hash, &kvm_svm->hnode, kvm_svm->avic_vm_id);
+    avic->vm_id = vm_id;
+    hash_add(svm_vm_data_hash, &avic->hnode, avic->vm_id);
     spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 
     return 0;
@@ -170,8 +177,8 @@ void avic_init_vmcb(struct vcpu_svm *svm)
     struct vmcb *vmcb = svm->vmcb;
     struct kvm_svm *kvm_svm = to_kvm_svm(svm->vcpu.kvm);
     phys_addr_t bpa = __sme_set(page_to_phys(svm->avic_backing_page));
-    phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic_logical_id_table_page));
-    phys_addr_t ppa = __sme_set(page_to_phys(kvm_svm->avic_physical_id_table_page));
+    phys_addr_t lpa = __sme_set(page_to_phys(kvm_svm->avic.logical_id_table_page));
+    phys_addr_t ppa = __sme_set(page_to_phys(kvm_svm->avic.physical_id_table_page));
 
     vmcb->control.avic_backing_page = bpa & AVIC_HPA_MASK;
     vmcb->control.avic_logical_id = lpa & AVIC_HPA_MASK;
@@ -194,7 +201,7 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
     if (index >= AVIC_MAX_PHYSICAL_ID_COUNT)
         return NULL;
 
-    avic_physical_id_table = page_address(kvm_svm->avic_physical_id_table_page);
+    avic_physical_id_table = page_address(kvm_svm->avic.physical_id_table_page);
 
     return &avic_physical_id_table[index];
 }
@@ -386,7 +393,7 @@ static u32 *avic_get_logical_id_entry(struct kvm_vcpu *vcpu, u32 ldr, bool flat)
         index = (cluster << 2) + apic;
     }
 
-    logical_apic_id_table = (u32 *) page_address(kvm_svm->avic_logical_id_table_page);
+    logical_apic_id_table = (u32 *) page_address(kvm_svm->avic.logical_id_table_page);
 
     return &logical_apic_id_table[index];
 }
@@ -762,7 +769,7 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
             /* Try to enable guest_mode in IRTE */
             pi.base = __sme_set(page_to_phys(svm->avic_backing_page) &
                         AVIC_HPA_MASK);
-            pi.ga_tag = AVIC_GATAG(to_kvm_svm(kvm)->avic_vm_id,
+            pi.ga_tag = AVIC_GATAG(to_kvm_svm(kvm)->avic.vm_id,
                            svm->vcpu.vcpu_id);
             pi.is_guest_mode = true;
             pi.vcpu_data = &vcpu_info;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 3ef2681244e84..469d9fc6e5f15 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -85,15 +85,17 @@ struct kvm_sev_info {
     atomic_t migration_in_progress;
 };
 
-struct kvm_svm {
-    struct kvm kvm;
-
-    /* Struct members for AVIC */
-    u32 avic_vm_id;
-    struct page *avic_logical_id_table_page;
-    struct page *avic_physical_id_table_page;
+struct kvm_svm_avic {
+    u32 vm_id;
+    struct page *logical_id_table_page;
+    struct page *physical_id_table_page;
     struct hlist_node hnode;
+};
 
+struct kvm_svm {
+    struct kvm kvm;
+    struct kvm_svm_avic avic;
     struct kvm_sev_info sev_info;
 };
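[With hnode moved into the embedded kvm_svm_avic, hash lookups now iterate over the inner struct and recover the outer kvm_svm via container_of(). A standalone model of that idiom with simplified types (no kernel hashtable); compilable as-is:]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct kvm_svm_avic {
    uint32_t vm_id;
};

struct kvm_svm {
    int vm_fd;                 /* stand-in for the embedded struct kvm */
    struct kvm_svm_avic avic;  /* embedded, as in the patch */
};

int main(void)
{
    struct kvm_svm vm = { .vm_fd = 42, .avic = { .vm_id = 7 } };

    /* A hash lookup hands us only the inner struct... */
    struct kvm_svm_avic *avic = &vm.avic;

    /* ...and container_of() recovers the enclosing kvm_svm. */
    struct kvm_svm *outer = container_of(avic, struct kvm_svm, avic);
    printf("vm_fd=%d vm_id=%u\n", outer->vm_fd, outer->avic.vm_id);
    return 0;
}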
From patchwork Tue Mar 1 18:26:37 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12765029
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Wanpeng Li, David Airlie, dri-devel@lists.freedesktop.org, "H. Peter Anvin", Joerg Roedel, x86@kernel.org, Maxim Levitsky, Ingo Molnar, Zhi Wang, Dave Hansen, intel-gfx@lists.freedesktop.org, Borislav Petkov, Rodrigo Vivi, Thomas Gleixner, intel-gvt-dev@lists.freedesktop.org, Jim Mattson, Tvrtko Ursulin, Sean Christopherson, linux-kernel@vger.kernel.org, Paolo Bonzini, Vitaly Kuznetsov
Subject: [PATCH v3 09/11] KVM: x86: rename .set_apic_access_page_addr to reload_apic_access_page
Date: Tue, 1 Mar 2022 20:26:37 +0200
Message-Id: <20220301182639.559568-10-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>

This will be used on SVM to reload the shadow page of the AVIC physid
table.

No functional change intended.

Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm-x86-ops.h | 2 +-
 arch/x86/include/asm/kvm_host.h    | 3 +--
 arch/x86/kvm/vmx/vmx.c             | 8 ++++----
 arch/x86/kvm/x86.c                 | 6 +++---
 4 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index eb16e32117610..6473b61d241e2 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -82,7 +82,7 @@ KVM_X86_OP_OPTIONAL(hwapic_isr_update)
 KVM_X86_OP_OPTIONAL_RET0(guest_apic_has_interrupt)
 KVM_X86_OP_OPTIONAL(load_eoi_exitmap)
 KVM_X86_OP_OPTIONAL(set_virtual_apic_mode)
-KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
+KVM_X86_OP_OPTIONAL(reload_apic_pages)
 KVM_X86_OP(deliver_interrupt)
 KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
 KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 83f734e201e24..c73f8415533a6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1403,7 +1403,7 @@ struct kvm_x86_ops {
     bool (*guest_apic_has_interrupt)(struct kvm_vcpu *vcpu);
     void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
     void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
-    void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
+    void (*reload_apic_pages)(struct kvm_vcpu *vcpu);
     void (*deliver_interrupt)(struct kvm_lapic *apic, int delivery_mode,
                   int trig_mode, int vector);
     int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
@@ -1877,7 +1877,6 @@ int kvm_cpu_has_extint(struct kvm_vcpu *v);
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
 int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
-
 int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
             unsigned long ipi_bitmap_high, u32 min,
             unsigned long icr, int op_64_bit);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b325f99b21774..4a9a4785b55e4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6353,7 +6353,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
     vmx_update_msr_bitmap_x2apic(vcpu);
 }
 
-static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
+static void vmx_reload_apic_access_page(struct kvm_vcpu *vcpu)
 {
     struct page *page;
 
@@ -7778,7 +7778,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
     .enable_irq_window = vmx_enable_irq_window,
     .update_cr8_intercept = vmx_update_cr8_intercept,
     .set_virtual_apic_mode = vmx_set_virtual_apic_mode,
-    .set_apic_access_page_addr = vmx_set_apic_access_page_addr,
+    .reload_apic_pages = vmx_reload_apic_access_page,
     .refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
     .load_eoi_exitmap = vmx_load_eoi_exitmap,
     .apicv_post_state_restore = vmx_apicv_post_state_restore,
@@ -7942,12 +7942,12 @@ static __init int hardware_setup(void)
         enable_vnmi = 0;
 
     /*
-     * set_apic_access_page_addr() is used to reload apic access
+     * kvm_vcpu_reload_apic_pages() is used to reload apic access
      * page upon invalidation.  No need to do anything if not
      * using the APIC_ACCESS_ADDR VMCS field.
      */
     if (!flexpriority_enabled)
-        vmx_x86_ops.set_apic_access_page_addr = NULL;
+        vmx_x86_ops.reload_apic_pages = NULL;
 
     if (!cpu_has_vmx_tpr_shadow())
         vmx_x86_ops.update_cr8_intercept = NULL;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 14b964eb079e7..1a6cfc27c3b35 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9824,12 +9824,12 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
     kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
 }
 
-static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+static void kvm_vcpu_reload_apic_pages(struct kvm_vcpu *vcpu)
 {
     if (!lapic_in_kernel(vcpu))
         return;
 
-    static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
+    static_call_cond(kvm_x86_reload_apic_pages)(vcpu);
 }
 
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
@@ -9945,7 +9945,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
         if (kvm_check_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu))
             vcpu_load_eoi_exitmap(vcpu);
         if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
-            kvm_vcpu_reload_apic_access_page(vcpu);
+            kvm_vcpu_reload_apic_pages(vcpu);
         if (kvm_check_request(KVM_REQ_HV_CRASH, vcpu)) {
             vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
             vcpu->run->system_event.type = KVM_SYSTEM_EVENT_CRASH;
1 Mar 2022 18:28:30 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Subject: [PATCH v3 10/11] KVM: nSVM: implement support for nested AVIC Date: Tue, 1 Mar 2022 20:26:38 +0200 Message-Id: <20220301182639.559568-11-mlevitsk@redhat.com> In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com> References: <20220301182639.559568-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Wanpeng Li , David Airlie , dri-devel@lists.freedesktop.org, "H. Peter Anvin" , Joerg Roedel , x86@kernel.org, Maxim Levitsky , Ingo Molnar , Zhi Wang , Dave Hansen , intel-gfx@lists.freedesktop.org, Borislav Petkov , Rodrigo Vivi , Thomas Gleixner , intel-gvt-dev@lists.freedesktop.org, Jim Mattson , Tvrtko Ursulin , Sean Christopherson , linux-kernel@vger.kernel.org, Paolo Bonzini , Vitaly Kuznetsov Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This implements initial support of using the AVIC in a nested guest Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/svm.h | 8 +- arch/x86/kvm/svm/avic.c | 640 ++++++++++++++++++++++++++++++++++++- arch/x86/kvm/svm/nested.c | 127 +++++++- arch/x86/kvm/svm/svm.c | 25 ++ arch/x86/kvm/svm/svm.h | 133 ++++++++ arch/x86/kvm/trace.h | 164 +++++++++- arch/x86/kvm/x86.c | 10 + 7 files changed, 1096 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index bb2fb78523cee..634c0b80a9dd2 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -222,17 +222,19 @@ struct __attribute__ ((__packed__)) vmcb_control_area { /* AVIC */ -#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFF) +#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFFULL) #define AVIC_LOGICAL_ID_ENTRY_VALID_BIT 31 #define AVIC_LOGICAL_ID_ENTRY_VALID_MASK (1 << 31) +/* TODO: support > 254 L1 APIC ID */ #define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK (0xFFULL) #define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK (0xFFFFFFFFFFULL << 12) #define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK (1ULL << 62) #define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK (1ULL << 63) -#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK (0xFF) +#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK (0xFFULL) -#define AVIC_DOORBELL_PHYSICAL_ID_MASK (0xFF) +/* TODO: support > 254 L1 APIC ID */ +#define AVIC_DOORBELL_PHYSICAL_ID_MASK (0xFFULL) #define AVIC_UNACCEL_ACCESS_WRITE_MASK 1 #define AVIC_UNACCEL_ACCESS_OFFSET_MASK 0xFF0 diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c index 406cdb63646e0..dd13fd3588e2b 100644 --- a/arch/x86/kvm/svm/avic.c +++ b/arch/x86/kvm/svm/avic.c @@ -51,6 +51,423 @@ static u32 next_vm_id = 0; static bool next_vm_id_wrapped = 0; static DEFINE_SPINLOCK(svm_vm_data_hash_lock); + +static inline struct kvm_vcpu *avic_vcpu_by_l1_apicid(struct kvm *kvm, + int l1_apicid) +{ + WARN_ON(l1_apicid == -1); + return kvm_get_vcpu_by_id(kvm, l1_apicid); +} + +static void avic_physid_shadow_entry_update_cpu(struct kvm *kvm, + struct avic_physid_table *t, + int n, + int l1_apicid) +{ + struct avic_physid_entry_descr *e = &t->entries[n]; + u64 sentry = READ_ONCE(*e->sentry); + struct kvm_svm *kvm_svm = to_kvm_svm(kvm); + struct kvm_vcpu *new_vcpu = NULL; + int l0_apicid; + unsigned long flags; + + raw_spin_lock_irqsave(&kvm_svm->avic.table_entries_lock, flags); + + if (!list_empty(&e->link)) + 
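	/* unlink the entry from the vCPU it previously pointed at */
+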
list_del_init(&e->link); + + if (l1_apicid != -1) + new_vcpu = avic_vcpu_by_l1_apicid(kvm, l1_apicid); + + if (new_vcpu) + list_add_tail(&e->link, &to_svm(new_vcpu)->nested.physid_ref_entries); + + /* update the shadow entry */ + sentry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK; + if (new_vcpu && to_svm(new_vcpu)->loaded) { + l0_apicid = kvm_cpu_get_apicid(new_vcpu->cpu); + physid_entry_set_apicid(&sentry, l0_apicid); + } + WRITE_ONCE(*e->sentry, sentry); + raw_spin_unlock_irqrestore(&kvm_svm->avic.table_entries_lock, flags); +} + +static void avic_physid_shadow_entry_erase(struct kvm *kvm, + struct avic_physid_table *t, + int n) +{ + struct avic_physid_entry_descr *e = &t->entries[n]; + struct kvm_svm *kvm_svm = to_kvm_svm(kvm); + unsigned long old_hpa; + unsigned long flags; + + raw_spin_lock_irqsave(&kvm_svm->avic.table_entries_lock, flags); + + if (!test_and_clear_bit(n, t->valid_entires)) + WARN_ON(1); + + /* Release the old APIC backing page */ + old_hpa = physid_entry_get_backing_table(*e->sentry); + kvm_release_pfn_dirty(old_hpa >> PAGE_SHIFT); + + list_del_init(&e->link); + WRITE_ONCE(e->gentry, 0); + WRITE_ONCE(*e->sentry, 0); + + raw_spin_unlock_irqrestore(&kvm_svm->avic.table_entries_lock, flags); +} + +static void avic_physid_shadow_entry_create(struct kvm *kvm, + struct avic_physid_table *t, + int n, + u64 gentry) +{ + struct avic_physid_entry_descr *e = &t->entries[n]; + struct page *backing_page = NULL; + u64 sentry = 0; + + u64 backing_page_gpa = physid_entry_get_backing_table(gentry); + int l1_apic_id = physid_entry_get_apicid(gentry); + + if (backing_page_gpa == INVALID_BACKING_PAGE) + return; + + backing_page = gfn_to_page(kvm, gpa_to_gfn(backing_page_gpa)); + if (is_error_page(backing_page)) { + /* + * Invalid GPA in the guest entry - ignore the entry + * as if it was not present + */ + return; + } + + physid_entry_set_backing_table(&sentry, page_to_phys(backing_page)); + e->gentry = gentry; + WRITE_ONCE(*e->sentry, sentry); + + if (test_and_set_bit(n, t->valid_entires)) + WARN_ON(1); + + avic_physid_shadow_entry_update_cpu(kvm, t, n, l1_apic_id); +} + +void avic_physid_shadow_table_update_vcpu_location(struct kvm_vcpu *vcpu, int cpu) +{ + /* + * Update all entries in the shadow PID tables which address this + * vCPU with its new location + */ + struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm); + struct vcpu_svm *vcpu_svm = to_svm(vcpu); + struct avic_physid_entry_descr *e; + int nentries = 0; + unsigned long flags; + + raw_spin_lock_irqsave(&kvm_svm->avic.table_entries_lock, flags); + + list_for_each_entry(e, &vcpu_svm->nested.physid_ref_entries, link) { + u64 sentry = READ_ONCE(*e->sentry); + + physid_entry_set_apicid(&sentry, cpu); + WRITE_ONCE(*e->sentry, sentry); + nentries++; + } + + trace_kvm_avic_physid_update_vcpu(vcpu->vcpu_id, cpu, nentries); + raw_spin_unlock_irqrestore(&kvm_svm->avic.table_entries_lock, flags); +} + +static bool +avic_physid_shadow_table_setup_write_tracking(struct kvm *kvm, + struct avic_physid_table *t, + bool enable) +{ + struct kvm_memory_slot *slot; + + write_lock(&kvm->mmu_lock); + slot = gfn_to_memslot(kvm, t->gfn); + if (!slot) { + write_unlock(&kvm->mmu_lock); + return false; + } + + if (enable) + kvm_slot_page_track_add_page(kvm, slot, t->gfn, KVM_PAGE_TRACK_WRITE); + else + kvm_slot_page_track_remove_page(kvm, slot, t->gfn, KVM_PAGE_TRACK_WRITE); + write_unlock(&kvm->mmu_lock); + return true; +} + +static void +avic_physid_shadow_table_erase(struct kvm *kvm, struct avic_physid_table *t) +{ + int i; + + t->nentries = 0; + 
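	/* release every entry that is still valid, dropping its pinned backing page */
+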
for_each_set_bit(i, t->valid_entires, AVIC_MAX_PHYSICAL_ID_COUNT) + avic_physid_shadow_entry_erase(kvm, t, i); +} + +static struct avic_physid_table * +avic_physid_shadow_table_alloc(struct kvm *kvm, gfn_t gfn) +{ + struct avic_physid_entry_descr *e; + struct avic_physid_table *t; + struct kvm_svm *kvm_svm = to_kvm_svm(kvm); + u64 *shadow_table_address; + int i; + + if (kvm_page_track_write_tracking_enable(kvm)) + return NULL; + + lockdep_assert_held(&kvm_svm->avic.tables_lock); + + t = kzalloc(sizeof(*t), GFP_KERNEL_ACCOUNT); + if (!t) + return NULL; + + t->shadow_table = alloc_page(GFP_KERNEL_ACCOUNT|__GFP_ZERO); + if (!t->shadow_table) + goto err_free_table; + + shadow_table_address = page_address(t->shadow_table); + t->shadow_table_hpa = __sme_set(page_to_phys(t->shadow_table)); + + for (i = 0; i < ARRAY_SIZE(t->entries); i++) { + e = &t->entries[i]; + e->sentry = &shadow_table_address[i]; + e->gentry = 0; + INIT_LIST_HEAD(&e->link); + } + + t->gfn = gfn; + t->refcount = 1; + avic_physid_shadow_table_setup_write_tracking(kvm, t, true); + list_add_tail(&t->link, &kvm_svm->avic.physid_tables); + return t; + +err_free_table: + kfree(t); + return NULL; +} + +static void +avic_physid_shadow_table_free(struct kvm *kvm, struct avic_physid_table *t) +{ + struct kvm_svm *kvm_svm = to_kvm_svm(kvm); + + lockdep_assert_held(&kvm_svm->avic.tables_lock); + + WARN_ON(t->refcount); + avic_physid_shadow_table_setup_write_tracking(kvm, t, false); + + avic_physid_shadow_table_erase(kvm, t); + + hlist_del(&t->hash_link); + list_del(&t->link); + __free_page(t->shadow_table); + kfree(t); +} + +static struct avic_physid_table * +__avic_physid_shadow_table_get(struct hlist_head *head, gfn_t gfn) +{ + struct avic_physid_table *t; + + hlist_for_each_entry(t, head, hash_link) + if (t->gfn == gfn) { + t->refcount++; + return t; + } + return NULL; +} + +struct avic_physid_table * +avic_physid_shadow_table_get(struct kvm_vcpu *vcpu, gfn_t gfn) +{ + struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm); + struct hlist_head *hlist; + struct avic_physid_table *t; + + mutex_lock(&kvm_svm->avic.tables_lock); + + hlist = &kvm_svm->avic.physid_gpa_hash[avic_physid_hash(gfn)]; + t = __avic_physid_shadow_table_get(hlist, gfn); + if (!t) { + t = avic_physid_shadow_table_alloc(vcpu->kvm, gfn); + if (!t) + goto out_unlock; + hlist_add_head(&t->hash_link, hlist); + } +out_unlock: + mutex_unlock(&kvm_svm->avic.tables_lock); + return t; +} + +static void +__avic_physid_shadow_table_put(struct kvm *kvm, struct avic_physid_table *t) +{ + WARN_ON(t->refcount <= 0); + if (--t->refcount == 0) + avic_physid_shadow_table_free(kvm, t); +} + +void avic_physid_shadow_table_put(struct kvm *kvm, struct avic_physid_table *t) +{ + struct kvm_svm *kvm_svm = to_kvm_svm(kvm); + + mutex_lock(&kvm_svm->avic.tables_lock); + __avic_physid_shadow_table_put(kvm, t); + mutex_unlock(&kvm_svm->avic.tables_lock); +} + +static void avic_physid_shadow_table_reload(struct kvm *kvm, struct avic_physid_table *t) +{ + trace_kvm_avic_physid_shadow_table_reload(gfn_to_gpa(t->gfn)); + t->nentries = 0; + kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD); +} + +static void avic_physid_shadow_table_track_write(struct kvm_vcpu *vcpu, + gpa_t gpa, + const u8 *new, + int bytes, + struct kvm_page_track_notifier_node *node) +{ + struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm); + struct hlist_head *hlist; + struct avic_physid_table *t; + gfn_t gfn = gpa_to_gfn(gpa); + unsigned int page_offset = offset_in_page(gpa); + unsigned int entry_offset = page_offset & 0x7; + int 
first = page_offset / sizeof(u64);
+	int last = (page_offset + bytes - 1) / sizeof(u64);
+	u64 new_entry, old_entry;
+	int l1_apic_id;
+
+	if (WARN_ON_ONCE(bytes == 0))
+		return;
+
+	mutex_lock(&kvm_svm->avic.tables_lock);
+
+	hlist = &kvm_svm->avic.physid_gpa_hash[avic_physid_hash(gfn)];
+	t = __avic_physid_shadow_table_get(hlist, gfn);
+
+	if (!t)
+		goto out_unlock;
+
+	trace_kvm_avic_physid_shadow_table_write(gpa, bytes);
+
+	/* writes outside known entries are ignored */
+	if (first >= t->nentries)
+		goto out_table_put;
+
+	/* more than one entry written - invalidate */
+	if (first != last)
+		goto invalidate;
+
+	/* update the entry with the written bytes */
+	old_entry = t->entries[first].gentry;
+	new_entry = old_entry;
+	memcpy(((u8 *)&new_entry) + entry_offset, new, bytes);
+
+	/* if the backing page changed, invalidate the whole page */
+	if (physid_entry_get_backing_table(old_entry) !=
+	    physid_entry_get_backing_table(new_entry))
+		goto invalidate;
+
+	/* Update the backing CPU */
+	l1_apic_id = physid_entry_get_apicid(new_entry);
+	avic_physid_shadow_entry_update_cpu(vcpu->kvm, t, first, l1_apic_id);
+	t->entries[first].gentry = new_entry;
+	goto out_table_put;
+invalidate:
+	avic_physid_shadow_table_reload(vcpu->kvm, t);
+out_table_put:
+	__avic_physid_shadow_table_put(vcpu->kvm, t);
+out_unlock:
+	mutex_unlock(&kvm_svm->avic.tables_lock);
+}
+
+static void avic_physid_shadow_table_flush_memslot(struct kvm *kvm,
+						   struct kvm_memory_slot *slot,
+						   struct kvm_page_track_notifier_node *node)
+{
+	struct kvm_svm *kvm_svm = to_kvm_svm(kvm);
+	struct avic_physid_table *t, *n;
+	int i;
+
+	mutex_lock(&kvm_svm->avic.tables_lock);
+
+	list_for_each_entry_safe(t, n, &kvm_svm->avic.physid_tables, link) {
+
+		if (gfn_in_memslot(slot, t->gfn)) {
+			avic_physid_shadow_table_reload(kvm, t);
+			continue;
+		}
+
+		for_each_set_bit(i, t->valid_entires, AVIC_MAX_PHYSICAL_ID_COUNT) {
+			u64 gentry = t->entries[i].gentry;
+			gpa_t gpa = physid_entry_get_backing_table(gentry);
+
+			if (gfn_in_memslot(slot, gpa_to_gfn(gpa))) {
+				avic_physid_shadow_table_reload(kvm, t);
+				break;
+			}
+		}
+	}
+	mutex_unlock(&kvm_svm->avic.tables_lock);
+}
+
+void avic_reload_apic_pages(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *vcpu_svm = to_svm(vcpu);
+	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
+	struct avic_physid_table *t;
+	u64 *gentries;
+	struct kvm_host_map map;
+	int nentries;
+	int i;
+
+	t = vcpu_svm->nested.l2_physical_id_table;
+	if (!t || !is_guest_mode(vcpu) || !avic_nested_active(vcpu))
+		return;
+
+	nentries = vcpu_svm->nested.ctl.avic_physical_id & AVIC_PHYSICAL_ID_TABLE_SIZE_MASK;
+
+	mutex_lock(&kvm_svm->avic.tables_lock);
+
+	trace_kvm_avic_update_physid_table(gfn_to_gpa(t->gfn), t->nentries, nentries);
+
+	avic_physid_shadow_table_erase(vcpu->kvm, t);
+
+	if (kvm_vcpu_map(vcpu, t->gfn, &map))
+		goto out_unlock;
+
+	gentries = (u64 *)map.hva;
+
+	for (i = 0 ; i < nentries ; i++)
+		avic_physid_shadow_entry_create(vcpu->kvm, t, i, gentries[i]);
+
+	t->nentries = nentries;
+out_unlock:
+	kvm_vcpu_unmap(vcpu, &map, false);
+	mutex_unlock(&kvm_svm->avic.tables_lock);
+}
+
+static u32 nested_avic_get_reg(struct kvm_vcpu *vcpu, int reg_off)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	void *nested_apic_regs = svm->nested.l2_apic_access_page.hva;
+
+	if (WARN_ON_ONCE(!nested_apic_regs))
+		return 0;
+
+	return *((u32 *) (nested_apic_regs + reg_off));
+}
+
 /*
  * This is a wrapper of struct amd_iommu_ir_data.
*/ @@ -117,6 +534,8 @@ void avic_vm_destroy(struct kvm *kvm) spin_lock_irqsave(&svm_vm_data_hash_lock, flags); hash_del(&avic->hnode); spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags); + + kvm_page_track_unregister_notifier(kvm, &avic->write_tracker); } int avic_vm_init(struct kvm *kvm) @@ -165,6 +584,13 @@ int avic_vm_init(struct kvm *kvm) hash_add(svm_vm_data_hash, &avic->hnode, avic->vm_id); spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags); + raw_spin_lock_init(&avic->table_entries_lock); + mutex_init(&avic->tables_lock); + INIT_LIST_HEAD(&avic->physid_tables); + + avic->write_tracker.track_write = avic_physid_shadow_table_track_write; + avic->write_tracker.track_flush_slot = avic_physid_shadow_table_flush_memslot; + kvm_page_track_register_notifier(kvm, &avic->write_tracker); return 0; free_avic: @@ -317,6 +743,136 @@ static void avic_kick_target_vcpus(struct kvm *kvm, struct kvm_lapic *source, } } +static void +avic_kick_target_vcpu_nested_physical(struct vcpu_svm *svm, int target_l2_apic_id, int *index) +{ + u64 gentry; + int target_l1_apicid; + struct avic_physid_table *t = svm->nested.l2_physical_id_table; + + if (WARN_ON_ONCE(!t)) + return; + + /* + * This shouldn't normally happen as such condition + * should cause AVIC_IPI_FAILURE_INVALID_TARGET vmexit, + * however guest can change the page under us. + */ + if (target_l2_apic_id >= t->nentries) + return; + + gentry = t->entries[target_l2_apic_id].gentry; + + /* Same reasoning as above */ + if (!(gentry & AVIC_PHYSICAL_ID_ENTRY_VALID_MASK)) + return; + + /* + * This races against the guest updating is_running bit. + * Race itself happens on real hardware as well, and the guest + * should use correct means to avoid it. + * TODO: needs memory barriers + */ + + target_l1_apicid = physid_entry_get_apicid(gentry); + + if (target_l1_apicid == -1) { + /* is_running is false, need to vmexit to the guest */ + if (*index == -1) + *index = target_l2_apic_id; + } else { + /* Wake up the target vCPU and hide the VM exit from the guest */ + struct kvm_vcpu *target = avic_vcpu_by_l1_apicid(svm->vcpu.kvm, target_l1_apicid); + + if (target && target != &svm->vcpu) + kvm_vcpu_wake_up(target); + } + + trace_kvm_avic_nested_kick_target_vcpu(svm->vcpu.vcpu_id, + target_l2_apic_id, + target_l1_apicid); +} + +static void +avic_kick_target_vcpus_nested_logical(struct vcpu_svm *svm, unsigned long dest, + int *index) +{ + int logical_id; + u8 cluster = 0; + u64 *logical_id_table = (u64 *)svm->nested.l2_logical_id_table.hva; + + if (WARN_ON_ONCE(!logical_id_table)) + return; + + if (nested_avic_get_reg(&svm->vcpu, APIC_DFR) == APIC_DFR_CLUSTER) { + if (dest >= 0x40) + return; + cluster = dest & 0x3C; + dest &= 0x3; + } + + for_each_set_bit(logical_id, &dest, 8) { + u64 log_gentry = logical_id_table[cluster | logical_id]; + int l2_apicid = logid_get_physid(log_gentry); + + /* Should not happen as in this case AVIC should VM exit + * with 'invalid target' + + * However the guest can change the entry under us, + * thus ignore this case. + */ + if (l2_apicid != -1) + avic_kick_target_vcpu_nested_physical(svm, l2_apicid, index); + } +} + +static void +avic_kick_target_vcpus_nested_broadcast(struct vcpu_svm *svm, int *index) +{ + struct avic_physid_table *t = svm->nested.l2_physical_id_table; + int l2_apicid; + + /* + * This races against guest changing valid bit in the table and/or + * increasing nentries of the table. + * In both cases the race would happen on real hardware as well + * thus there is no need to take locks. 
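+	 * At worst a target is kicked spuriously or a kick is missed,
+	 * exactly as it could be on bare metal.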
+ */ + for_each_set_bit(l2_apicid, t->valid_entires, AVIC_MAX_PHYSICAL_ID_COUNT) + avic_kick_target_vcpu_nested_physical(svm, l2_apicid, index); +} + + +static int avic_kick_target_vcpus_nested(struct kvm_vcpu *vcpu, + struct kvm_lapic *source, + u32 icrl, u32 icrh) +{ + struct vcpu_svm *svm = to_svm(vcpu); + int dest = GET_APIC_DEST_FIELD(icrh); + int index = -1; + + trace_kvm_avic_nested_kick_target_vcpus(vcpu->vcpu_id, icrl, icrh); + + switch (icrl & APIC_SHORT_MASK) { + case APIC_DEST_NOSHORT: + if (dest == 0xFF) + avic_kick_target_vcpus_nested_broadcast(svm, &index); + else if (icrl & APIC_DEST_MASK) + avic_kick_target_vcpus_nested_logical(svm, dest, &index); + else + avic_kick_target_vcpu_nested_physical(svm, dest, &index); + break; + case APIC_DEST_ALLINC: + case APIC_DEST_ALLBUT: + avic_kick_target_vcpus_nested_broadcast(svm, &index); + break; + case APIC_DEST_SELF: + break; + } + + return index; +} + int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); @@ -324,10 +880,18 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu) u32 icrl = svm->vmcb->control.exit_info_1; u32 id = svm->vmcb->control.exit_info_2 >> 32; u32 index = svm->vmcb->control.exit_info_2 & 0xFF; + int nindex; struct kvm_lapic *apic = vcpu->arch.apic; trace_kvm_avic_incomplete_ipi(vcpu->vcpu_id, icrh, icrl, id, index); + if (is_guest_mode(&svm->vcpu)) { + if (WARN_ON_ONCE(!avic_nested_active(vcpu))) + return 1; + if (WARN_ON_ONCE(!svm->nested.l2_physical_id_table)) + return 1; + } + switch (id) { case AVIC_IPI_FAILURE_INVALID_INT_TYPE: /* @@ -339,23 +903,41 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu) * which case KVM needs to emulate the ICR write as well in * order to clear the BUSY flag. */ + if (is_guest_mode(&svm->vcpu)) { + nested_svm_vmexit(svm); + break; + } + if (icrl & APIC_ICR_BUSY) kvm_apic_write_nodecode(vcpu, APIC_ICR); else kvm_apic_send_ipi(apic, icrl, icrh); + break; case AVIC_IPI_FAILURE_TARGET_NOT_RUNNING: /* * At this point, we expect that the AVIC HW has already * set the appropriate IRR bits on the valid target * vcpus. So, we just need to kick the appropriate vcpu. + * + * If nested we might also need to reflect the VM exit to + * the guest */ - avic_kick_target_vcpus(vcpu->kvm, apic, icrl, icrh); + if (!is_guest_mode(&svm->vcpu)) { + avic_kick_target_vcpus(vcpu->kvm, apic, icrl, icrh); + break; + } + + nindex = avic_kick_target_vcpus_nested(vcpu, apic, icrl, icrh); + if (nindex != -1) { + svm->vmcb->control.exit_info_2 = ((u64)id << 32) | nindex; + nested_svm_vmexit(svm); + } break; case AVIC_IPI_FAILURE_INVALID_TARGET: - break; case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE: - WARN_ONCE(1, "Invalid backing page\n"); + if (is_guest_mode(&svm->vcpu)) + nested_svm_vmexit(svm); break; default: pr_err("Unknown IPI interception\n"); @@ -369,6 +951,48 @@ bool avic_has_vcpu_inhibit_condition(struct kvm_vcpu *vcpu) return is_guest_mode(vcpu); } +int avic_emulate_doorbell_write(struct kvm_vcpu *vcpu, u64 data) +{ + int source_l1_apicid = vcpu->vcpu_id; + int target_l1_apicid = data & AVIC_DOORBELL_PHYSICAL_ID_MASK; + bool target_running, target_nested; + struct kvm_vcpu *target; + + if (data & ~AVIC_DOORBELL_PHYSICAL_ID_MASK) + return 1; + + target = avic_vcpu_by_l1_apicid(vcpu->kvm, target_l1_apicid); + if (!target) + /* Guest bug: targeting invalid APIC ID. 
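+		 * Ignore the write in that case.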
		 */
+		return 0;
+
+	target_running = READ_ONCE(target->mode) == IN_GUEST_MODE;
+	target_nested = is_guest_mode(target);
+
+	trace_kvm_avic_nested_emulate_doorbell(source_l1_apicid, target_l1_apicid,
+					       target_nested, target_running);
+
+	/*
+	 * The target is not in nested mode, so the doorbell does not affect
+	 * it; if it just now became nested, it has already processed the
+	 * doorbell on nested entry.
+	 */
+	if (!target_nested)
+		return 0;
+
+	/*
+	 * If the target vCPU is in guest mode, kick the real doorbell.
+	 * Otherwise we need to wake it up in case it is not scheduled to run.
+	 */
+	if (target_running)
+		wrmsr(MSR_AMD64_SVM_AVIC_DOORBELL,
+		      kvm_cpu_get_apicid(READ_ONCE(target->cpu)), 0);
+	else
+		kvm_vcpu_wake_up(target);
+
+	return 0;
+}
+
 static u32 *avic_get_logical_id_entry(struct kvm_vcpu *vcpu, u32 ldr, bool flat)
 {
 	struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
@@ -462,9 +1086,13 @@ static void avic_handle_dfr_update(struct kvm_vcpu *vcpu)
 
 static int avic_unaccel_trap_write(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_svm *svm = to_svm(vcpu);
 	u32 offset = to_svm(vcpu)->vmcb->control.exit_info_1 &
 				AVIC_UNACCEL_ACCESS_OFFSET_MASK;
 
+	if (WARN_ON_ONCE(is_guest_mode(&svm->vcpu)))
+		return 0;
+
 	switch (offset) {
 	case APIC_LDR:
 		if (avic_handle_ldr_update(vcpu))
@@ -522,6 +1150,8 @@ int avic_unaccelerated_access_interception(struct kvm_vcpu *vcpu)
 			AVIC_UNACCEL_ACCESS_WRITE_MASK;
 	bool trap = is_avic_unaccelerated_access_trap(offset);
 
+	WARN_ON_ONCE(is_guest_mode(&svm->vcpu));
+
 	trace_kvm_avic_unaccelerated_access(vcpu->vcpu_id, offset, trap,
 					    write, vector);
 	if (trap) {
@@ -970,3 +1600,7 @@ void avic_vcpu_unblocking(struct kvm_vcpu *vcpu)
 
 	put_cpu();
 }
+
+/*
+ * TODO: Deal with the AVIC errata regarding flushing the TLB on vCPU change
+ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 6dffa6c661493..2bbd9b1f35cab 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -359,6 +359,14 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
 		memcpy(to->reserved_sw, from->reserved_sw,
 		       sizeof(struct hv_enlightenments));
 	}
+
+	/* copy AVIC-related settings only when AVIC is enabled */
+	if (from->int_ctl & AVIC_ENABLE_MASK) {
+		to->avic_vapic_bar = from->avic_vapic_bar;
+		to->avic_backing_page = from->avic_backing_page;
+		to->avic_logical_id = from->avic_logical_id;
+		to->avic_physical_id = from->avic_physical_id;
+	}
 }
 
 void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
@@ -507,6 +515,75 @@ void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
 	svm->nested.vmcb02.ptr->save.g_pat = svm->vmcb01.ptr->save.g_pat;
 }
 
+static bool nested_vmcb02_prepare_avic(struct vcpu_svm *svm)
+{
+	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
+	struct avic_physid_table *t = svm->nested.l2_physical_id_table;
+	gfn_t physid_gfn;
+	int physid_nentries;
+
+	if (!avic_nested_active(&svm->vcpu))
+		return true;
+
+	/*
+	 * TODO: Check here that the GPAs of all pages are valid, and
+	 * #VMEXIT with an AVIC-specific exit code if they are not.
+	 */
+
+	if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.ctl.avic_backing_page & AVIC_HPA_MASK),
+			 &svm->nested.l2_apic_access_page))
+		goto error;
+
+	if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.ctl.avic_logical_id & AVIC_HPA_MASK),
+			 &svm->nested.l2_logical_id_table))
+		goto error_unmap_backing_page;
+
+	physid_gfn = gpa_to_gfn(svm->nested.ctl.avic_physical_id &
+				AVIC_HPA_MASK);
+	physid_nentries = svm->nested.ctl.avic_physical_id &
+				AVIC_PHYSICAL_ID_TABLE_SIZE_MASK;
+
+	if (t && t->gfn != physid_gfn) {
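+		/* the guest switched to a different physical ID table; drop the shadow of the old one */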
avic_physid_shadow_table_put(svm->vcpu.kvm, t); + svm->nested.l2_physical_id_table = NULL; + } + + if (!svm->nested.l2_physical_id_table) { + t = avic_physid_shadow_table_get(&svm->vcpu, physid_gfn); + if (!t) + goto error_unmap_logical_id_table; + svm->nested.l2_physical_id_table = t; + } + + if (t->nentries < physid_nentries) + kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, &svm->vcpu); + + /* Everything is setup, we can enable AVIC */ + + vmcb02->control.avic_vapic_bar = + svm->nested.ctl.avic_vapic_bar & VMCB_AVIC_APIC_BAR_MASK; + vmcb02->control.avic_backing_page = + pfn_to_hpa(svm->nested.l2_apic_access_page.pfn); + vmcb02->control.avic_logical_id = + pfn_to_hpa(svm->nested.l2_logical_id_table.pfn); + vmcb02->control.avic_physical_id = + (svm->nested.l2_physical_id_table->shadow_table_hpa) | physid_nentries; + + vmcb02->control.int_ctl |= AVIC_ENABLE_MASK; + return true; + +error_unmap_logical_id_table: + kvm_vcpu_unmap(&svm->vcpu, &svm->nested.l2_logical_id_table, false); +error_unmap_backing_page: + kvm_vcpu_unmap(&svm->vcpu, &svm->nested.l2_apic_access_page, false); +error: + svm->vcpu.run->exit_reason = KVM_EXIT_INTERNAL_ERROR; + svm->vcpu.run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION; + svm->vcpu.run->internal.ndata = 0; + return false; +} + static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12) { bool new_vmcb12 = false; @@ -566,7 +643,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm) const u32 int_ctl_vmcb01_bits = V_INTR_MASKING_MASK | V_GIF_MASK | V_GIF_ENABLE_MASK; - const u32 int_ctl_vmcb12_bits = V_TPR_MASK | V_IRQ_INJECTION_BITS_MASK; + u32 int_ctl_vmcb12_bits = V_TPR_MASK | V_IRQ_INJECTION_BITS_MASK; struct kvm_vcpu *vcpu = &svm->vcpu; @@ -575,6 +652,8 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm) * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes. */ + if (avic_nested_active(vcpu)) + int_ctl_vmcb12_bits &= ~V_IRQ_INJECTION_BITS_MASK; /* Copied from vmcb01. msrpm_base can be overwritten later. 
*/ svm->vmcb->control.nested_ctl = svm->vmcb01.ptr->control.nested_ctl; @@ -748,7 +827,10 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu) if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true)) goto out_exit_err; - if (nested_svm_vmrun_msrpm(svm)) + if (!nested_svm_vmrun_msrpm(svm)) + goto out_exit_err; + + if (nested_vmcb02_prepare_avic(svm)) goto out; out_exit_err: @@ -763,7 +845,6 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu) out: kvm_vcpu_unmap(vcpu, &map, true); - return ret; } @@ -874,6 +955,11 @@ int nested_svm_vmexit(struct vcpu_svm *svm) nested_svm_copy_common_state(svm->nested.vmcb02.ptr, svm->vmcb01.ptr); + if (avic_nested_active(vcpu)) { + kvm_vcpu_unmap(vcpu, &svm->nested.l2_apic_access_page, true); + kvm_vcpu_unmap(vcpu, &svm->nested.l2_logical_id_table, true); + } + svm_switch_vmcb(svm, &svm->vmcb01); /* @@ -988,6 +1074,9 @@ int svm_allocate_nested(struct vcpu_svm *svm) void svm_free_nested(struct vcpu_svm *svm) { + struct kvm_vcpu *vcpu = &svm->vcpu; + struct avic_physid_table *t; + if (!svm->nested.initialized) return; @@ -1006,6 +1095,15 @@ void svm_free_nested(struct vcpu_svm *svm) */ svm->nested.last_vmcb12_gpa = INVALID_GPA; + t = svm->nested.l2_physical_id_table; + if (t) { + avic_physid_shadow_table_put(vcpu->kvm, t); + svm->nested.l2_physical_id_table = NULL; + } + + kvm_vcpu_unmap(vcpu, &svm->nested.l2_apic_access_page, true); + kvm_vcpu_unmap(vcpu, &svm->nested.l2_logical_id_table, true); + svm->nested.initialized = false; } @@ -1116,6 +1214,20 @@ static int nested_svm_intercept(struct vcpu_svm *svm) vmexit = NESTED_EXIT_DONE; break; } + case SVM_EXIT_AVIC_UNACCELERATED_ACCESS: { + /* + * Unaccelerated AVIC access is always reflected + * and there is no intercept bit for it + */ + vmexit = NESTED_EXIT_DONE; + break; + } + case SVM_EXIT_AVIC_INCOMPLETE_IPI: + /* + * Doesn't have an intercept bit, host needs to intercept + * and in some cases reflect to the guest + */ + break; default: { if (vmcb12_is_intercept(&svm->nested.ctl, exit_code)) vmexit = NESTED_EXIT_DONE; @@ -1332,6 +1444,13 @@ static void nested_copy_vmcb_cache_to_control(struct vmcb_control_area *dst, dst->pause_filter_count = from->pause_filter_count; dst->pause_filter_thresh = from->pause_filter_thresh; /* 'clean' and 'reserved_sw' are not changed by KVM */ + + if (from->int_ctl & AVIC_ENABLE_MASK) { + dst->avic_vapic_bar = from->avic_vapic_bar; + dst->avic_backing_page = from->avic_backing_page; + dst->avic_logical_id = from->avic_logical_id; + dst->avic_physical_id = from->avic_physical_id; + } } static int svm_get_nested_state(struct kvm_vcpu *vcpu, @@ -1553,7 +1672,7 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu) if (CC(!load_pdptrs(vcpu, vcpu->arch.cr3))) return false; - if (!nested_svm_vmrun_msrpm(svm)) { + if (!nested_svm_vmrun_msrpm(svm) || !nested_vmcb02_prepare_avic(svm)) { vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 08ccf0db91f72..0d6b715375a69 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1228,6 +1228,8 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu) svm->guest_state_loaded = false; + INIT_LIST_HEAD(&svm->nested.physid_ref_entries); + return 0; error_free_vmsa_page: @@ -1317,15 +1319,29 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) sd->current_vmcb = svm->vmcb; indirect_branch_prediction_barrier(); } + + svm->loaded = true; + if (kvm_vcpu_apicv_active(vcpu)) avic_vcpu_load(vcpu, cpu); + + if 
(svm->nested.initialized && svm->avic_enabled) + avic_physid_shadow_table_update_vcpu_location(vcpu, cpu); } static void svm_vcpu_put(struct kvm_vcpu *vcpu) { + struct vcpu_svm *svm = to_svm(vcpu); + if (kvm_vcpu_apicv_active(vcpu)) avic_vcpu_put(vcpu); + + svm->loaded = false; + + if (svm->nested.initialized && svm->avic_enabled) + avic_physid_shadow_table_update_vcpu_location(vcpu, -1); + svm_prepare_host_switch(vcpu); ++vcpu->stat.host_state_reload; @@ -2705,6 +2721,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) u32 ecx = msr->index; u64 data = msr->data; switch (ecx) { + case MSR_AMD64_SVM_AVIC_DOORBELL: + return avic_emulate_doorbell_write(vcpu, data); case MSR_AMD64_TSC_RATIO: if (!msr->host_initiated && !svm->tsc_scaling_enabled) return 1; @@ -3972,6 +3990,9 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu) kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_X2APIC); } + + svm->avic_enabled = enable_apicv && guest_cpuid_has(vcpu, X86_FEATURE_AVIC); + init_vmcb_after_set_cpuid(vcpu); } @@ -4581,6 +4602,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .enable_nmi_window = svm_enable_nmi_window, .enable_irq_window = svm_enable_irq_window, .update_cr8_intercept = svm_update_cr8_intercept, + .reload_apic_pages = avic_reload_apic_pages, .refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl, .check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons, .apicv_post_state_restore = avic_apicv_post_state_restore, @@ -4696,6 +4718,9 @@ static __init void svm_set_cpu_caps(void) if (tsc_scaling) kvm_cpu_cap_set(X86_FEATURE_TSCRATEMSR); + if (enable_apicv) + kvm_cpu_cap_set(X86_FEATURE_AVIC); + /* Nested VM can receive #VMEXIT instead of triggering #GP */ kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK); } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 469d9fc6e5f15..8ebda12995abe 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -18,6 +18,7 @@ #include #include #include +#include #include #include @@ -86,13 +87,34 @@ struct kvm_sev_info { }; +#define AVIC_PHYSID_HASH_SHIFT 8 +#define AVIC_PHYSID_HASH_SIZE (1 << AVIC_PHYSID_HASH_SHIFT) + struct kvm_svm_avic { u32 vm_id; struct page *logical_id_table_page; struct page *physical_id_table_page; struct hlist_node hnode; + + raw_spinlock_t table_entries_lock; + struct mutex tables_lock; + + /* List of all shadow tables */ + struct list_head physid_tables; + + /* GPA hash table to find a shadow table via its GPA */ + struct hlist_head physid_gpa_hash[AVIC_PHYSID_HASH_SIZE]; + + struct kvm_page_track_notifier_node write_tracker; }; + +static __always_inline unsigned int avic_physid_hash(gfn_t gfn) +{ + return hash_64(gfn, AVIC_PHYSID_HASH_SHIFT); +} + + struct kvm_svm { struct kvm kvm; struct kvm_svm_avic avic; @@ -142,6 +164,45 @@ struct vmcb_ctrl_area_cached { u64 virt_ext; u32 clean; u8 reserved_sw[32]; + + u64 avic_vapic_bar; + u64 avic_backing_page; + u64 avic_logical_id; + u64 avic_physical_id; +}; + +struct avic_physid_entry_descr { + struct list_head link; + + /* cached value of guest entry */ + u64 gentry; + + /* shadow table entry pointer*/ + u64 *sentry; +}; + +struct avic_physid_table { + /* List of all tables member */ + struct list_head link; + + /* GPA hash of all tables member */ + struct hlist_node hash_link; + + /* GPA of the table in guest memory*/ + gfn_t gfn; + + /* Number of entries that we shadow and which are valid*/ + int nentries; + DECLARE_BITMAP(valid_entires, AVIC_MAX_PHYSICAL_ID_COUNT); + + struct avic_physid_entry_descr 
entries[AVIC_MAX_PHYSICAL_ID_COUNT]; + + /* Guest visible shadow table */ + struct page *shadow_table; + hpa_t shadow_table_hpa; + + /* Number of vCPUs which are in nested mode and use this table */ + int refcount; }; struct svm_nested_state { @@ -177,6 +238,13 @@ struct svm_nested_state { * on its side. */ bool force_msr_bitmap_recalc; + + /* All AVIC shadow PID table entry descriptors that refernce this vCPU */ + struct list_head physid_ref_entries; + + struct kvm_host_map l2_apic_access_page; + struct kvm_host_map l2_logical_id_table; + struct avic_physid_table *l2_physical_id_table; }; struct vcpu_sev_es_state { @@ -234,11 +302,13 @@ struct vcpu_svm { /* cached guest cpuid flags for faster access */ bool nrips_enabled : 1; bool tsc_scaling_enabled : 1; + bool avic_enabled : 1; u32 ldr_reg; u32 dfr_reg; struct page *avic_backing_page; u64 *avic_physical_id_cache; + bool loaded; /* * Per-vcpu list of struct amd_svm_iommu_ir: @@ -598,6 +668,69 @@ void avic_vcpu_blocking(struct kvm_vcpu *vcpu); void avic_vcpu_unblocking(struct kvm_vcpu *vcpu); void avic_ring_doorbell(struct kvm_vcpu *vcpu); bool avic_has_vcpu_inhibit_condition(struct kvm_vcpu *vcpu); +int avic_emulate_doorbell_write(struct kvm_vcpu *vcpu, u64 data); +void avic_reload_apic_pages(struct kvm_vcpu *vcpu); + +struct avic_physid_table * +avic_physid_shadow_table_get(struct kvm_vcpu *vcpu, gfn_t gfn); +void avic_physid_shadow_table_put(struct kvm *kvm, struct avic_physid_table *t); + +void avic_physid_shadow_table_update_vcpu_location(struct kvm_vcpu *vcpu, + int cpu); + +static inline bool avic_nested_active(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *vcpu_svm = to_svm(vcpu); + + if (!vcpu_svm->avic_enabled) + return false; + + if (!nested_npt_enabled(vcpu_svm)) + return false; + + return vcpu_svm->nested.ctl.int_ctl & AVIC_ENABLE_MASK; +} + +#define INVALID_BACKING_PAGE (~(u64)0) + +static inline u64 physid_entry_get_backing_table(u64 entry) +{ + if (!(entry & AVIC_PHYSICAL_ID_ENTRY_VALID_MASK)) + return INVALID_BACKING_PAGE; + return entry & AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK; +} + +static inline int physid_entry_get_apicid(u64 entry) +{ + if (!(entry & AVIC_PHYSICAL_ID_ENTRY_VALID_MASK)) + return -1; + if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK)) + return -1; + + return entry & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK; +} + +static inline int logid_get_physid(u64 entry) +{ + if (!(entry & AVIC_LOGICAL_ID_ENTRY_VALID_BIT)) + return -1; + return entry & AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK; +} + +static inline void physid_entry_set_backing_table(u64 *entry, u64 value) +{ + *entry |= (AVIC_PHYSICAL_ID_ENTRY_VALID_MASK | value); +} + +static inline void physid_entry_set_apicid(u64 *entry, int value) +{ + WARN_ON(!(*entry & AVIC_PHYSICAL_ID_ENTRY_VALID_MASK)); + + if (value == -1) + *entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK; + else + *entry |= (AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK | value); +} /* sev.c */ diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h index 193f5ba930d12..3d1e6e948461b 100644 --- a/arch/x86/kvm/trace.h +++ b/arch/x86/kvm/trace.h @@ -1383,7 +1383,7 @@ TRACE_EVENT(kvm_apicv_accept_irq, ); /* - * Tracepoint for AMD AVIC + * Tracepoints for AMD AVIC */ TRACE_EVENT(kvm_avic_incomplete_ipi, TP_PROTO(u32 vcpu, u32 icrh, u32 icrl, u32 id, u32 index), @@ -1457,6 +1457,168 @@ TRACE_EVENT(kvm_avic_ga_log, __entry->vmid, __entry->vcpuid) ); +TRACE_EVENT(kvm_avic_update_shadow_entry, + TP_PROTO(u64 gpa, u64 hpa, u64 old_entry, u64 new_entry), + TP_ARGS(gpa, hpa, old_entry, 
new_entry), + + TP_STRUCT__entry( + __field(u64, gpa) + __field(u64, hpa) + __field(u64, old_entry) + __field(u64, new_entry) + ), + + TP_fast_assign( + __entry->gpa = gpa; + __entry->hpa = hpa; + __entry->old_entry = old_entry; + __entry->new_entry = new_entry; + ), + + TP_printk("gpa 0x%llx hpa 0x%llx entry 0x%llx -> 0x%llx", + __entry->gpa, __entry->hpa, __entry->old_entry, __entry->new_entry) +); + +TRACE_EVENT(kvm_avic_update_physid_table, + TP_PROTO(u64 gpa, int nentries, int new_nentires), + TP_ARGS(gpa, nentries, new_nentires), + + TP_STRUCT__entry( + __field(u64, gpa) + __field(int, nentries) + __field(int, new_nentires) + ), + + TP_fast_assign( + __entry->gpa = gpa; + __entry->nentries = nentries; + __entry->new_nentires = new_nentires; + ), + + TP_printk("table at gpa 0x%llx, nentires %d -> %d", + __entry->gpa, __entry->nentries, __entry->new_nentires) +); + +TRACE_EVENT(kvm_avic_physid_shadow_table_reload, + TP_PROTO(u64 gpa), + TP_ARGS(gpa), + + TP_STRUCT__entry( + __field(u64, gpa) + ), + + TP_fast_assign( + __entry->gpa = gpa; + ), + + TP_printk("gpa 0x%llx", + __entry->gpa) +); + +TRACE_EVENT(kvm_avic_physid_shadow_table_write, + TP_PROTO(u64 gpa, int bytes), + TP_ARGS(gpa, bytes), + + TP_STRUCT__entry( + __field(u64, gpa) + __field(int, bytes) + ), + + TP_fast_assign( + __entry->gpa = gpa; + __entry->bytes = bytes; + ), + + TP_printk("gpa 0x%llx, write of %d bytes", + __entry->gpa, __entry->bytes) +); + +TRACE_EVENT(kvm_avic_physid_update_vcpu, + TP_PROTO(int vcpu_id, int cpu_id, int n), + TP_ARGS(vcpu_id, cpu_id, n), + + TP_STRUCT__entry( + __field(int, vcpu_id) + __field(int, cpu_id) + __field(int, n) + ), + + TP_fast_assign( + __entry->vcpu_id = vcpu_id; + __entry->cpu_id = cpu_id; + __entry->n = n; + ), + + TP_printk("vcpu %d cpu %d (%d entries)", + __entry->vcpu_id, __entry->cpu_id, __entry->n) +); + +TRACE_EVENT(kvm_avic_nested_emulate_doorbell, + TP_PROTO(int source_l1_apicid, int target_l1_apicid, bool target_nested, + bool target_running), + TP_ARGS(source_l1_apicid, target_l1_apicid, target_nested, + target_running), + + TP_STRUCT__entry( + __field(int, source_l1_apicid) + __field(int, target_l1_apicid) + __field(bool, target_nested) + __field(bool, target_running) + ), + + TP_fast_assign( + __entry->source_l1_apicid = source_l1_apicid; + __entry->target_l1_apicid = target_l1_apicid; + __entry->target_nested = target_nested; + __entry->target_running = target_running; + ), + + TP_printk("source %d target %d (nested: %d, running %d)", + __entry->source_l1_apicid, __entry->target_l1_apicid, + __entry->target_nested, __entry->target_running) +); + +TRACE_EVENT(kvm_avic_nested_kick_target_vcpu, + TP_PROTO(int source_l1_apic_id, int target_l2_apic_id, int target_l1_apic_id), + TP_ARGS(source_l1_apic_id, target_l2_apic_id, target_l1_apic_id), + + TP_STRUCT__entry( + __field(int, source_l1_apic_id) + __field(int, target_l2_apic_id) + __field(int, target_l1_apic_id) + ), + + TP_fast_assign( + __entry->source_l1_apic_id = source_l1_apic_id; + __entry->target_l2_apic_id = target_l2_apic_id; + __entry->target_l1_apic_id = target_l1_apic_id; + ), + + TP_printk("source l1 apic id: %d target l2 apic id: %d target l1 apic_id: %d", + __entry->source_l1_apic_id, __entry->target_l2_apic_id, + __entry->target_l1_apic_id) +); + +TRACE_EVENT(kvm_avic_nested_kick_target_vcpus, + TP_PROTO(int source_l1_apic_id, u32 icrl, u32 icrh), + TP_ARGS(source_l1_apic_id, icrl, icrh), + + TP_STRUCT__entry( + __field(int, source_l1_apic_id) + __field(u32, icrl) + __field(u32, icrh) + ), + + 
TP_fast_assign( + __entry->source_l1_apic_id = source_l1_apic_id; + __entry->icrl = icrl; + __entry->icrh = icrh; + ), + + TP_printk("source %d icrl 0x%x icrh 0x%x", + __entry->source_l1_apic_id, __entry->icrl, __entry->icrh) +); + TRACE_EVENT(kvm_hv_timer_state, TP_PROTO(unsigned int vcpu_id, unsigned int hv_timer_in_use), TP_ARGS(vcpu_id, hv_timer_in_use), diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 1a6cfc27c3b35..48a1916bc71c7 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12909,6 +12909,16 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_pi_irte_update); EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_unaccelerated_access); EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_incomplete_ipi); EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_ga_log); + +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_update_shadow_entry); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_update_physid_table); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_physid_shadow_table_reload); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_physid_shadow_table_write); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_physid_update_vcpu); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_nested_emulate_doorbell); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_nested_kick_target_vcpu); +EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_nested_kick_target_vcpus); + EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_apicv_update_request); EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_apicv_accept_irq); EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_enter); From patchwork Tue Mar 1 18:26:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 12765031 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6BF98C433FE for ; Tue, 1 Mar 2022 18:28:55 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8640710E820; Tue, 1 Mar 2022 18:28:54 +0000 (UTC) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4CDBA10E829 for ; Tue, 1 Mar 2022 18:28:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1646159331; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=cLntVg1A2GtP5Mk+abzswn9NBODKG9gpF8Mfluysqbk=; b=egaBVInCYuhgUC2jyhzGPRkYEmwBTlXEXb3P+dZOBAfF/YumYXgGXUV3BfSdLncXMI3QY9 7UgS3Mz4r7mn/Ds3MnjzCKuIDCIbiF5lzlUDuqYAvacyqRdWK7hCWjaGlBMD06Q4OjQxmI GoplzgivmlQitzNL1QHPNNBnBYSLhk4= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-489-QlxQAoN2PdyIiLAMBUF16g-1; Tue, 01 Mar 2022 13:28:48 -0500 X-MC-Unique: QlxQAoN2PdyIiLAMBUF16g-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4792B1854E21; Tue, 1 Mar 2022 18:28:45 +0000 (UTC) Received: from localhost.localdomain 
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH v3 11/11] KVM: SVM: allow avoiding unneeded updates to is_running
Date: Tue, 1 Mar 2022 20:26:39 +0200
Message-Id: <20220301182639.559568-12-mlevitsk@redhat.com>
In-Reply-To: <20220301182639.559568-1-mlevitsk@redhat.com>
References: <20220301182639.559568-1-mlevitsk@redhat.com>
MIME-Version: 1.0

Optionally allow KVM to skip updating is_running unless the update is
functionally needed, which is only when a vCPU blocks or is in guest
mode.

Security-wise this means that if a vCPU is scheduled out, other vCPUs
can still send doorbell messages to the last physical CPU on which
that vCPU ran. Since this can in theory be considered less secure, the
relaxed mode is not enabled by default: the avic_doorbell_strict module
parameter defaults to true, and setting it to false enables the
relaxed, non-strict mode.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/avic.c | 39 +++++++++++++++++++++++++++------------
 arch/x86/kvm/svm/svm.c | 7 +++++--
 arch/x86/kvm/svm/svm.h | 1 +
 virt/kvm/kvm_main.c | 3 ++-
 4 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index dd13fd3588e2b..1d690a9d3952e 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -166,10 +166,13 @@ void avic_physid_shadow_table_update_vcpu_location(struct kvm_vcpu *vcpu, int cpu)
 	raw_spin_lock_irqsave(&kvm_svm->avic.table_entries_lock, flags);
 
 	list_for_each_entry(e, &vcpu_svm->nested.physid_ref_entries, link) {
-		u64 sentry = READ_ONCE(*e->sentry);
+		u64 old_sentry = READ_ONCE(*e->sentry);
+		u64 new_sentry = old_sentry;
 
-		physid_entry_set_apicid(&sentry, cpu);
-		WRITE_ONCE(*e->sentry, sentry);
+		physid_entry_set_apicid(&new_sentry, cpu);
+
+		if (new_sentry != old_sentry)
+			WRITE_ONCE(*e->sentry, new_sentry);
 		nentries++;
 	}
 
@@ -1507,7 +1510,7 @@ avic_update_iommu_vcpu_affinity(struct kvm_vcpu *vcpu, int cpu, bool r)
 
 void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	u64 entry;
+	u64 old_entry, new_entry;
 	/* ID = 0xff (broadcast), ID > 0xff (reserved) */
 	int h_physical_id = kvm_cpu_get_apicid(cpu);
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -1531,14 +1534,16 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (kvm_vcpu_is_blocking(vcpu))
 		return;
 
-	entry = READ_ONCE(*(svm->avic_physical_id_cache));
-	WARN_ON(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
+	old_entry = READ_ONCE(*(svm->avic_physical_id_cache));
+	new_entry = old_entry;
 
-	entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
-	entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
-	entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+	new_entry &=
~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
+	new_entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
+	new_entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+
+	if (old_entry != new_entry)
+		WRITE_ONCE(*(svm->avic_physical_id_cache), new_entry);
-	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
 	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
 }
@@ -1549,14 +1554,24 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
 	lockdep_assert_preemption_disabled();
 
+	avic_update_iommu_vcpu_affinity(vcpu, -1, 0);
+
+	/*
+	 * It is only meaningful to intercept IPIs from the guest when the
+	 * vCPU is either blocked or in guest mode. In all other cases
+	 * (e.g. a userspace vmexit, or preemption by another task), the
+	 * vCPU is guaranteed to return to guest mode as soon as it can.
+	 */
+	if (!avic_doorbell_strict && !kvm_vcpu_is_blocking(vcpu) && !is_guest_mode(vcpu))
+		return;
+
 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
 
 	/* Nothing to do if IsRunning == '0' due to vCPU blocking. */
 	if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
 		return;
 
-	avic_update_iommu_vcpu_affinity(vcpu, -1, 0);
-
 	entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
 	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
 }
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0d6b715375a69..463b756f665ae 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -202,6 +202,9 @@ module_param(tsc_scaling, int, 0444);
 static bool avic;
 module_param(avic, bool, 0444);
 
+bool avic_doorbell_strict = true;
+module_param(avic_doorbell_strict, bool, 0444);
+
 bool __read_mostly dump_invalid_vmcb;
 module_param(dump_invalid_vmcb, bool, 0644);
 
@@ -1340,7 +1343,8 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	svm->loaded = false;
 
 	if (svm->nested.initialized && svm->avic_enabled)
-		avic_physid_shadow_table_update_vcpu_location(vcpu, -1);
+		if (!avic_doorbell_strict || kvm_vcpu_is_blocking(vcpu))
+			avic_physid_shadow_table_update_vcpu_location(vcpu, -1);
 
 	svm_prepare_host_switch(vcpu);
 
@@ -4707,7 +4711,6 @@ static __init void svm_set_cpu_caps(void)
 	/* CPUID 0x80000001 and 0x8000000A (SVM features) */
 	if (nested) {
 		kvm_cpu_cap_set(X86_FEATURE_SVM);
-		kvm_cpu_cap_set(X86_FEATURE_VMCBCLEAN);
 		if (nrips)
 			kvm_cpu_cap_set(X86_FEATURE_NRIPS);
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 8ebda12995abe..d0108bae2cdac 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -33,6 +33,7 @@ extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
 
 extern bool npt_enabled;
 extern bool intercept_smi;
+extern bool avic_doorbell_strict;
 
 /*
  * Clean bits in VMCB.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index cdf1fa3c60ae2..67a29233e216b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3291,9 +3291,10 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
 
 	vcpu->stat.generic.blocking = 1;
 
+	prepare_to_rcuwait(wait);
+
 	kvm_arch_vcpu_blocking(vcpu);
-	prepare_to_rcuwait(wait);
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);