From patchwork Fri Sep 27 16:16:55 2024
X-Patchwork-Submitter: Ivan Orlov
X-Patchwork-Id: 13814441
From: Ivan Orlov <iorlov@amazon.com>
Subject: [PATCH 1/3] KVM: x86, vmx: Add function for event delivery error generation
Date: Fri, 27 Sep 2024 16:16:55 +0000
Message-ID: <20240927161657.68110-2-iorlov@amazon.com>
In-Reply-To: <20240927161657.68110-1-iorlov@amazon.com>

Extract the KVM_INTERNAL_ERROR_DELIVERY_EV internal error generation into
the SVM/VMX-agnostic 'kvm_prepare_ev_delivery_failure_exit' function, as
is already done for KVM_INTERNAL_ERROR_EMULATION. The order of the
internal.data array entries is preserved, so it remains the same on VMX
platforms (vectoring info, full exit reason, exit qualification, GPA if
the error happened due to MMIO, and the vcpu's last_vmentry_cpu).

Having this as a separate function helps avoid code duplication when
handling MMIO during event delivery errors on SVM.

No functional change intended.
Signed-off-by: Ivan Orlov <iorlov@amazon.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/vmx.c          | 15 +++------------
 arch/x86/kvm/x86.c              | 22 ++++++++++++++++++++++
 3 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6d9f763a7bb9..348daba424dd 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2060,6 +2060,8 @@ void __kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *data,
 			u8 ndata);
 void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu);
 
+void kvm_prepare_ev_delivery_failure_exit(struct kvm_vcpu *vcpu, gpa_t gpa, bool is_mmio);
+
 void kvm_enable_efer_bits(u64);
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
 int kvm_get_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 *data);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c67e448c6ebd..afd785e7f3a3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6550,19 +6550,10 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	    exit_reason.basic != EXIT_REASON_APIC_ACCESS &&
 	    exit_reason.basic != EXIT_REASON_TASK_SWITCH &&
 	    exit_reason.basic != EXIT_REASON_NOTIFY)) {
-		int ndata = 3;
+		gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
+		bool is_mmio = exit_reason.basic == EXIT_REASON_EPT_MISCONFIG;
 
-		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV;
-		vcpu->run->internal.data[0] = vectoring_info;
-		vcpu->run->internal.data[1] = exit_reason.full;
-		vcpu->run->internal.data[2] = vmx_get_exit_qual(vcpu);
-		if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG) {
-			vcpu->run->internal.data[ndata++] =
-				vmcs_read64(GUEST_PHYSICAL_ADDRESS);
-		}
-		vcpu->run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu;
-		vcpu->run->internal.ndata = ndata;
+		kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, is_mmio);
 		return 0;
 	}

diff --git a/arch/x86/kvm/x86.c
b/arch/x86/kvm/x86.c
index 83fe0a78146f..8ee67fc23e5d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8828,6 +8828,28 @@ void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_prepare_emulation_failure_exit);
 
+void kvm_prepare_ev_delivery_failure_exit(struct kvm_vcpu *vcpu, gpa_t gpa, bool is_mmio)
+{
+	struct kvm_run *run = vcpu->run;
+	int ndata = 0;
+	u32 reason, intr_info, error_code;
+	u64 info1, info2;
+
+	kvm_x86_call(get_exit_info)(vcpu, &reason, &info1, &info2, &intr_info, &error_code);
+
+	run->internal.data[ndata++] = info2;
+	run->internal.data[ndata++] = reason;
+	run->internal.data[ndata++] = info1;
+	if (is_mmio)
+		run->internal.data[ndata++] = (u64)gpa;
+	run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu;
+
+	run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV;
+	run->internal.ndata = ndata;
+}
+EXPORT_SYMBOL_GPL(kvm_prepare_ev_delivery_failure_exit);
+
 static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type)
 {
 	struct kvm *kvm = vcpu->kvm;

From patchwork Fri Sep 27 16:16:56 2024
X-Patchwork-Submitter: Ivan Orlov
X-Patchwork-Id: 13814440
From: Ivan Orlov <iorlov@amazon.com>
Subject: [PATCH 2/3] KVM: vmx, svm, mmu: Process MMIO during event delivery
Date: Fri, 27 Sep 2024 16:16:56 +0000
Message-ID: <20240927161657.68110-3-iorlov@amazon.com>
In-Reply-To: <20240927161657.68110-1-iorlov@amazon.com>

Currently, the situation when a guest accesses MMIO during event delivery
is handled differently on VMX and SVM: on VMX, KVM returns an internal
error with suberror = KVM_INTERNAL_ERROR_DELIVERY_EV, while SVM simply
goes into an infinite loop trying to deliver the event again and again.
Such a situation can happen when an exception occurs with the guest IDTR
(or GDTR) descriptor base pointing to an MMIO address. Even with the fixes
for infinite loops on TDP failures applied, the problem still exists on
SVM.

Eliminate the SVM/VMX difference by returning a KVM internal error with
suberror = KVM_INTERNAL_ERROR_DELIVERY_EV on both SVM and VMX. As there is
no reliable way to detect an MMIO operation on SVM before actually looking
at the GPA, move the problem detection into the common KVM x86 layer (into
the kvm_mmu_page_fault function) and add the PFERR_EVT_DELIVERY flag,
which is set in the SVM/VMX-specific vmexit handler to signal that the
fault happened in the middle of event delivery. This way we don't
introduce much overhead on VMX either, as a guest accessing MMIO during
event delivery is rare and shouldn't happen frequently.

Signed-off-by: Ivan Orlov <iorlov@amazon.com>
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/mmu/mmu.c          | 15 ++++++++++++++-
 arch/x86/kvm/svm/svm.c          |  4 ++++
 arch/x86/kvm/vmx/vmx.c          | 21 +++++++++------------
 4 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 348daba424dd..a1088239c9f5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -282,6 +282,12 @@ enum x86_intercept_stage;
 #define PFERR_PRIVATE_ACCESS   BIT_ULL(49)
 #define PFERR_SYNTHETIC_MASK   (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
 
+/*
+ * EVT_DELIVERY is a KVM-defined flag used to indicate that a fault occurred
+ * during event delivery.
+ */
+#define PFERR_EVT_DELIVERY     BIT_ULL(50)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC	0
 /*

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e081f785fb23..36e25a6ba364 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6120,8 +6120,21 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 			return -EFAULT;
 
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
-		if (r == RET_PF_EMULATE)
+		if (r == RET_PF_EMULATE) {
+			/*
+			 * Check if the guest is accessing MMIO during event delivery. For
+			 * instance, it could happen if the guest sets IDT / GDT descriptor
+			 * base to point to an MMIO address. We can't deliver such an event
+			 * without VMM intervention, so return a corresponding internal error
+			 * instead (otherwise, vCPU will fall into infinite loop trying to
+			 * deliver the event again and again).
+			 */
+			if (error_code & PFERR_EVT_DELIVERY) {
+				kvm_prepare_ev_delivery_failure_exit(vcpu, cr2_or_gpa, true);
+				return 0;
+			}
 			goto emulate;
+		}
 	}
 
 	if (r == RET_PF_INVALID) {

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9df3e1e5ae81..93ce8c3d02f3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2059,6 +2059,10 @@ static int npf_interception(struct kvm_vcpu *vcpu)
 	u64 fault_address = svm->vmcb->control.exit_info_2;
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
+	/* Check if we have events awaiting delivery */
+	if (svm->vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK)
+		error_code |= PFERR_EVT_DELIVERY;
+
 	/*
 	 * WARN if hardware generates a fault with an error code that collides
 	 * with KVM-defined sythentic flags.
Clear the flags and continue on,

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index afd785e7f3a3..bbe1126c3c7d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5828,6 +5828,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 {
 	gpa_t gpa;
+	u64 error_code = PFERR_RSVD_MASK;
+
+	/* Do we have events awaiting delivery? */
+	error_code |= (to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK)
+		      ? PFERR_EVT_DELIVERY : 0;
 
 	if (vmx_check_emulate_instruction(vcpu, EMULTYPE_PF, NULL, 0))
 		return 1;
@@ -5843,7 +5848,7 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 		return kvm_skip_emulated_instruction(vcpu);
 	}
 
-	return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
+	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
 static int handle_nmi_window(struct kvm_vcpu *vcpu)
@@ -6536,24 +6541,16 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 		return 0;
 	}
 
-	/*
-	 * Note:
-	 * Do not try to fix EXIT_REASON_EPT_MISCONFIG if it caused by
-	 * delivery event since it indicates guest is accessing MMIO.
-	 * The vm-exit can be triggered again after return to guest that
-	 * will cause infinite loop.
-	 */
 	if ((vectoring_info & VECTORING_INFO_VALID_MASK) &&
 	    (exit_reason.basic != EXIT_REASON_EXCEPTION_NMI &&
 	     exit_reason.basic != EXIT_REASON_EPT_VIOLATION &&
 	     exit_reason.basic != EXIT_REASON_PML_FULL &&
 	     exit_reason.basic != EXIT_REASON_APIC_ACCESS &&
 	     exit_reason.basic != EXIT_REASON_TASK_SWITCH &&
-	     exit_reason.basic != EXIT_REASON_NOTIFY)) {
+	     exit_reason.basic != EXIT_REASON_NOTIFY &&
+	     exit_reason.basic != EXIT_REASON_EPT_MISCONFIG)) {
 		gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
-		bool is_mmio = exit_reason.basic == EXIT_REASON_EPT_MISCONFIG;
-
-		kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, is_mmio);
+		kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, false);
 		return 0;
 	}

From patchwork Fri Sep 27 16:16:57 2024
X-Patchwork-Submitter: Ivan Orlov
X-Patchwork-Id: 13814442
From: Ivan Orlov <iorlov@amazon.com>
Subject: [PATCH 3/3] selftests: KVM: Add test case for MMIO during event delivery
Date: Fri, 27 Sep 2024 16:16:57 +0000
Message-ID: <20240927161657.68110-4-iorlov@amazon.com>
In-Reply-To: <20240927161657.68110-1-iorlov@amazon.com>

Extend the 'set_memory_region_test' with a test case which covers the
MMIO during event delivery error handling.
The test case:

1) Sets the IDT descriptor base to point to an MMIO address
2) Generates a #GP
3) Verifies that we get the correct exit reason (KVM_EXIT_INTERNAL_ERROR)
   and suberror code (KVM_INTERNAL_ERROR_DELIVERY_EV)
4) Verifies that we get the correct "faulty" GPA in internal.data[3]

Signed-off-by: Ivan Orlov <iorlov@amazon.com>
---
 .../selftests/kvm/set_memory_region_test.c    | 46 +++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index a8267628e9ed..e9e97346edf1 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -553,6 +553,51 @@ static void test_add_overlapping_private_memory_regions(void)
 	close(memfd);
 	kvm_vm_free(vm);
 }
+
+static const struct desc_ptr faulty_idt_desc = {
+	.address = MEM_REGION_GPA,
+	.size = 0xFFF,
+};
+
+static void guest_code_faulty_idt_desc(void)
+{
+	__asm__ __volatile__("lidt %0"::"m"(faulty_idt_desc));
+
+	/* Generate a #GP by dereferencing a non-canonical address */
+	*((uint8_t *)0xDEADBEEFDEADBEEFULL) = 0x1;
+
+	/* We should never reach this point */
+	GUEST_ASSERT(0);
+}
+
+/*
+ * This test tries to point the IDT descriptor base to an MMIO address. This
+ * action should cause a KVM internal error, so the VMM can handle such
+ * situations gracefully.
+ */
+static void test_faulty_idt_desc(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+
+	pr_info("Testing a faulty IDT descriptor pointing to an MMIO address\n");
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code_faulty_idt_desc);
+	virt_map(vm, MEM_REGION_GPA, MEM_REGION_GPA, 1);
+
+	vcpu_run(vcpu);
+	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_INTERNAL_ERROR);
+	TEST_ASSERT(vcpu->run->internal.suberror == KVM_INTERNAL_ERROR_DELIVERY_EV,
+		    "Unexpected suberror = %d", vcpu->run->internal.suberror);
+	TEST_ASSERT(vcpu->run->internal.ndata > 4, "Unexpected internal error data array size = %d",
+		    vcpu->run->internal.ndata);
+
+	/* The "faulty" GPA should be IDT base + offset of the #GP vector */
+	TEST_ASSERT(vcpu->run->internal.data[3] == MEM_REGION_GPA +
+		    GP_VECTOR * sizeof(struct idt_entry),
+		    "Unexpected GPA = %llx", vcpu->run->internal.data[3]);
+
+	kvm_vm_free(vm);
+}
 #endif
 
 int main(int argc, char *argv[])
@@ -568,6 +613,7 @@ int main(int argc, char *argv[])
 	 * KVM_RUN fails with ENOEXEC or EFAULT.
 	 */
 	test_zero_memory_regions();
+	test_faulty_idt_desc();
 #endif
 
 	test_invalid_memory_region_flags();