From patchwork Thu Nov 3 13:57:28 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 1/9] KVM: x86: nSVM: leave nested mode on vCPU free
Date: Thu, 3 Nov 2022 15:57:28 +0200
Message-Id: <20221103135736.42295-2-mlevitsk@redhat.com>

If the VM was terminated while nested, we free the nested state while
the vCPU still is in nested mode. Soon a warning will be added for this
condition.
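For context, a rough sketch of the teardown order this change establishes
in svm_vcpu_free() (unrelated unmapping and page freeing are elided; only
the ordering matters here):

	static void svm_vcpu_free(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		/* ... unmapping of nested state pages elided ... */
		svm_clear_current_vmcb(svm->vmcb);

		svm_leave_nested(vcpu);	/* drop out of guest mode first ...   */
		svm_free_nested(svm);	/* ... so freeing vmcb02 is safe      */
		sev_free_vcpu(vcpu);

		/* ... vmcb and MSR permission map pages freed below ... */
	}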
Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/svm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d22a809d923339..e9cec1b692051c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1440,6 +1440,7 @@ static void svm_vcpu_free(struct kvm_vcpu *vcpu)
 	 */
 	svm_clear_current_vmcb(svm->vmcb);
 
+	svm_leave_nested(vcpu);
 	svm_free_nested(svm);
 	sev_free_vcpu(vcpu);

From patchwork Thu Nov 3 13:57:29 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Peter Anvin" , Shuah Khan , Yang Zhong , Wei Wang , Colton Lewis , Sean Christopherson , Jim Mattson , Chenyi Qiang , Borislav Petkov , linux-kernel@vger.kernel.org, x86@kernel.org, Thomas Gleixner , Dave Hansen , Ingo Molnar , David Matlack , Peter Xu , Maxim Levitsky , linux-kselftest@vger.kernel.org, stable@vger.kernel.org Subject: [PATCH 2/9] KVM: x86: nSVM: harden svm_free_nested against freeing vmcb02 while still in use Date: Thu, 3 Nov 2022 15:57:29 +0200 Message-Id: <20221103135736.42295-3-mlevitsk@redhat.com> In-Reply-To: <20221103135736.42295-1-mlevitsk@redhat.com> References: <20221103135736.42295-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Make sure that KVM uses vmcb01 before freeing nested state, and warn if that is not the case. This is a minimal fix for CVE-2022-3344 making the kernel print a warning instead of a kernel panic. Cc: stable@vger.kernel.org Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index b258d6988f5dde..b74da40c1fc40c 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1126,6 +1126,9 @@ void svm_free_nested(struct vcpu_svm *svm) if (!svm->nested.initialized) return; + if (WARN_ON_ONCE(svm->vmcb != svm->vmcb01.ptr)) + svm_switch_vmcb(svm, &svm->vmcb01); + svm_vcpu_free_msrpm(svm->nested.msrpm); svm->nested.msrpm = NULL; From patchwork Thu Nov 3 13:57:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13030025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9A824C43217 for ; Thu, 3 Nov 2022 13:59:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231721AbiKCN7o (ORCPT ); Thu, 3 Nov 2022 09:59:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52258 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231804AbiKCN7O (ORCPT ); Thu, 3 Nov 2022 09:59:14 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64F47165B5 for ; Thu, 3 Nov 2022 06:58:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1667483879; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=94fXNJlMtb+F/0JTvy69iWSeE6FRHdWRb8tmVfQpuZA=; b=i9hvzahQMLZmjGFa7wSWGADb9VceZfIwDmpa+K8roTI8ZOzMXHRiKthE2J1jXg7WI1KCgk yw3FyszEfJsIfLKjf5pnzaSlxs8L/80vD4qA7KZeaXeszVYpOmf69NZv4xhBjY4TE1OAE+ 32ycO79YoG0CwGJAmNu6/r2PLPxLuz0= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-573-hIm7tAVWNzu-RCY0-B8plw-1; Thu, 03 Nov 2022 09:57:53 -0400 X-MC-Unique: hIm7tAVWNzu-RCY0-B8plw-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No 
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 3/9] KVM: x86: add kvm_leave_nested
Date: Thu, 3 Nov 2022 15:57:30 +0200
Message-Id: <20221103135736.42295-4-mlevitsk@redhat.com>

Add kvm_leave_nested(), which wraps the call to
nested_ops->leave_nested() into a function.

Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 3 ---
 arch/x86/kvm/vmx/nested.c | 3 ---
 arch/x86/kvm/x86.c        | 8 +++++++-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b74da40c1fc40c..bcc4f6620f8aec 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1147,9 +1147,6 @@ void svm_free_nested(struct vcpu_svm *svm)
 	svm->nested.initialized = false;
 }
 
-/*
- * Forcibly leave nested mode in order to be able to reset the VCPU later on.
- */
 void svm_leave_nested(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 61a2e551640a08..1ebe141a0a015f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6441,9 +6441,6 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 	return kvm_state.size;
 }
 
-/*
- * Forcibly leave nested mode in order to be able to reset the VCPU later on.
- */
 void vmx_leave_nested(struct kvm_vcpu *vcpu)
 {
 	if (is_guest_mode(vcpu)) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cd9eb13e2ed7fc..316ab1d5317f92 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -627,6 +627,12 @@ static void kvm_queue_exception_vmexit(struct kvm_vcpu *vcpu, unsigned int vecto
 	ex->payload = payload;
 }
 
+/* Forcibly leave the nested mode in cases like a vCPU reset */
+static void kvm_leave_nested(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops.nested_ops->leave_nested(vcpu);
+}
+
 static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 		unsigned nr, bool has_error, u32 error_code,
 		bool has_payload, unsigned long payload, bool reinject)
@@ -5193,7 +5199,7 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
 #ifdef CONFIG_KVM_SMM
 		if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) {
-			kvm_x86_ops.nested_ops->leave_nested(vcpu);
+			kvm_leave_nested(vcpu);
 			kvm_smm_changed(vcpu, events->smi.smm);
 		}

From patchwork Thu Nov 3 13:57:31 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Peter Anvin" , Shuah Khan , Yang Zhong , Wei Wang , Colton Lewis , Sean Christopherson , Jim Mattson , Chenyi Qiang , Borislav Petkov , linux-kernel@vger.kernel.org, x86@kernel.org, Thomas Gleixner , Dave Hansen , Ingo Molnar , David Matlack , Peter Xu , Maxim Levitsky , linux-kselftest@vger.kernel.org, stable@vger.kernel.org Subject: [PATCH 4/9] KVM: x86: forcibly leave nested mode on vCPU reset Date: Thu, 3 Nov 2022 15:57:31 +0200 Message-Id: <20221103135736.42295-5-mlevitsk@redhat.com> In-Reply-To: <20221103135736.42295-1-mlevitsk@redhat.com> References: <20221103135736.42295-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org While not obivous, kvm_vcpu_reset() leaves the nested mode by clearing 'vcpu->arch.hflags' but it does so without all the required housekeeping. On SVM, it is possible to have a vCPU reset while in guest mode because unlike VMX, on SVM, INIT's are not latched in SVM non root mode and in addition to that L1 doesn't have to intercept triple fault, which should also trigger L1's reset if happens in L2 while L1 didn't intercept it. If one of the above conditions happen, KVM will continue to use vmcb02 while not having in the guest mode. Later the IA32_EFER will be cleared which will lead to freeing of the nested guest state which will (correctly) free the vmcb02, but since KVM still uses it (incorrectly) this will lead to a use after free and kernel crash. This issue is assigned CVE-2022-3344 Cc: stable@vger.kernel.org Signed-off-by: Maxim Levitsky --- arch/x86/kvm/x86.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 316ab1d5317f92..3fd900504e683b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11694,8 +11694,18 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) WARN_ON_ONCE(!init_event && (old_cr0 || kvm_read_cr3(vcpu) || kvm_read_cr4(vcpu))); + /* + * SVM doesn't unconditionally VM-Exit on INIT and SHUTDOWN, thus it's + * possible to INIT the vCPU while L2 is active. Force the vCPU back + * into L1 as EFER.SVME is cleared on INIT (along with all other EFER + * bits), i.e. virtualization is disabled. 
+	 */
+	if (is_guest_mode(vcpu))
+		kvm_leave_nested(vcpu);
+
 	kvm_lapic_reset(vcpu, init_event);
 
+	WARN_ON_ONCE(is_guest_mode(vcpu) || is_smm(vcpu));
 	vcpu->arch.hflags = 0;
 
 	vcpu->arch.smi_pending = 0;

From patchwork Thu Nov 3 13:57:32 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 5/9] KVM: selftests: move idt_entry to header
Date: Thu, 3 Nov 2022 15:57:32 +0200
Message-Id: <20221103135736.42295-6-mlevitsk@redhat.com>

struct idt_entry will be used by a test which breaks the IDT on purpose.
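For illustration, this is roughly how the test added later in this series
uses the struct from guest code; the vector numbers are the ones that test
picks, and break_idt() is just a name invented for this sketch:

	/* Clear the Present bit of selected IDT entries so that delivering
	 * one exception escalates: #UD -> #NP -> #DF -> SHUTDOWN. */
	static void break_idt(struct idt_entry *idt)
	{
		idt[6].p  = 0;	/* #UD */
		idt[11].p = 0;	/* #NP */
		idt[8].p  = 0;	/* #DF */
	}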
Signed-off-by: Maxim Levitsky
---
 .../selftests/kvm/include/x86_64/processor.h       | 13 +++++++++++++
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 13 -------------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index e8ca0d8a6a7e0a..5da0c5e2a7afc4 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -748,6 +748,19 @@ struct ex_regs {
 	uint64_t rflags;
 };
 
+struct idt_entry {
+	uint16_t offset0;
+	uint16_t selector;
+	uint16_t ist : 3;
+	uint16_t : 5;
+	uint16_t type : 4;
+	uint16_t : 1;
+	uint16_t dpl : 2;
+	uint16_t p : 1;
+	uint16_t offset1;
+	uint32_t offset2; uint32_t reserved;
+};
+
 void vm_init_descriptor_tables(struct kvm_vm *vm);
 void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
 void vm_install_exception_handler(struct kvm_vm *vm, int vector,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 39c4409ef56a6a..41c1c73c464d48 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1074,19 +1074,6 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
 	}
 }
 
-struct idt_entry {
-	uint16_t offset0;
-	uint16_t selector;
-	uint16_t ist : 3;
-	uint16_t : 5;
-	uint16_t type : 4;
-	uint16_t : 1;
-	uint16_t dpl : 2;
-	uint16_t p : 1;
-	uint16_t offset1;
-	uint32_t offset2; uint32_t reserved;
-};
-
 static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
 			  int dpl, unsigned short selector)
 {

From patchwork Thu Nov 3 13:57:33 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 6/9] kvm: selftests: add svm nested shutdown test
Date: Thu, 3 Nov 2022 15:57:33 +0200
Message-Id: <20221103135736.42295-7-mlevitsk@redhat.com>

Add a test that checks that on SVM, if L1 doesn't intercept SHUTDOWN,
then a shutdown raised in L2 crashes L1 (the whole VM) and doesn't
crash the host.

Signed-off-by: Maxim Levitsky
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../kvm/x86_64/svm_nested_shutdown_test.c     | 71 +++++++++++++++++++
 3 files changed, 73 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 2f0d705db9dba5..05d980fb083d17 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -41,6 +41,7 @@
 /x86_64/svm_vmcall_test
 /x86_64/svm_int_ctl_test
 /x86_64/svm_nested_soft_inject_test
+/x86_64/svm_nested_shutdown_test
 /x86_64/sync_regs_test
 /x86_64/tsc_msrs_test
 /x86_64/tsc_scaling_sync
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0172eb6cb6eee2..4a2caef2c9396f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -101,6 +101,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
 TEST_GEN_PROGS_x86_64 += x86_64/svm_vmcall_test
 TEST_GEN_PROGS_x86_64 += x86_64/svm_int_ctl_test
+TEST_GEN_PROGS_x86_64 += x86_64/svm_nested_shutdown_test
 TEST_GEN_PROGS_x86_64 += x86_64/svm_nested_soft_inject_test
 TEST_GEN_PROGS_x86_64 += x86_64/tsc_scaling_sync
 TEST_GEN_PROGS_x86_64 += x86_64/sync_regs_test
diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
new file mode 100644
index 00000000000000..3155edf81f5474
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * svm_nested_shutdown_test
+ *
+ * Copyright (C) 2022, Red Hat, Inc.
+ *
+ * Nested SVM testing: test that unintercepted shutdown in L2 doesn't crash the host
+ */
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+
+
+static void l2_guest_code(struct svm_test_data *svm)
+{
+	__asm__ __volatile__("ud2");
+}
+
+static void l1_guest_code(struct svm_test_data *svm, struct idt_entry *idt)
+{
+	#define L2_GUEST_STACK_SIZE 64
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb = svm->vmcb;
+
+	generic_svm_setup(svm, l2_guest_code,
+			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	vmcb->control.intercept &= ~(BIT(INTERCEPT_SHUTDOWN));
+
+	idt[6].p = 0;	// #UD is intercepted but its injection will cause #NP
+	idt[11].p = 0;	// #NP is not intercepted and will cause another #NP, which will be converted to #DF
+	idt[8].p = 0;	// #DF will cause #NP which will cause SHUTDOWN
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	/* should not reach here */
+	GUEST_ASSERT(0);
+}
+
+void test_run(bool emulated_shutdown)
+{
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_run *run;
+	vm_vaddr_t svm_gva;
+	struct kvm_vm *vm;
+
+	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
+
+	vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vcpu);
+
+	vcpu_alloc_svm(vm, &svm_gva);
+
+	vcpu_args_set(vcpu, 2, svm_gva, vm->idt);
+	run = vcpu->run;
+
+	vcpu_run(vcpu);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN,
+		    "Got exit_reason other than KVM_EXIT_SHUTDOWN: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+
+	kvm_vm_free(vm);
+}

From patchwork Thu Nov 3 13:57:34 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 7/9] KVM: x86: allow L1 to not intercept triple fault
Date: Thu, 3 Nov 2022 15:57:34 +0200
Message-Id: <20221103135736.42295-8-mlevitsk@redhat.com>

This is an SVM correctness fix: although a sane L1 would intercept the
SHUTDOWN event, it doesn't have to, so we have to honour this.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c |  6 ++++++
 arch/x86/kvm/vmx/nested.c |  1 +
 arch/x86/kvm/x86.c        | 11 ++++++-----
 3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index bcc4f6620f8aec..3aa9184d1e4ed7 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1092,6 +1092,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 static void nested_svm_triple_fault(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (!vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_SHUTDOWN))
+		return;
+
+	kvm_clear_request(KVM_REQ_TRIPLE_FAULT, vcpu);
 	nested_svm_simple_vmexit(to_svm(vcpu), SVM_EXIT_SHUTDOWN);
 }
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1ebe141a0a015f..7924dea9367813 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4855,6 +4855,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 
 static void nested_vmx_triple_fault(struct kvm_vcpu *vcpu)
 {
+	kvm_clear_request(KVM_REQ_TRIPLE_FAULT, vcpu);
 	nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3fd900504e683b..f0a0102a78f5c3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9741,7 +9741,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
 
 int kvm_check_nested_events(struct kvm_vcpu *vcpu)
 {
-	if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
+	if (kvm_test_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
 		kvm_x86_ops.nested_ops->triple_fault(vcpu);
 		return 1;
 	}
@@ -10255,15 +10255,16 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			r = 0;
 			goto out;
 		}
-		if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
-			if (is_guest_mode(vcpu)) {
+		if (kvm_test_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
+			if (is_guest_mode(vcpu))
 				kvm_x86_ops.nested_ops->triple_fault(vcpu);
-			} else {
+
+			if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
 				vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
 				vcpu->mmio_needed = 0;
 				r = 0;
-				goto out;
 			}
+			goto out;
 		}
 		if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
 			/* Page is swapped out. Do synthetic halt */
From patchwork Thu Nov 3 13:57:35 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 8/9] KVM: selftests: add svm part to triple_fault_test
Date: Thu, 3 Nov 2022 15:57:35 +0200
Message-Id: <20221103135736.42295-9-mlevitsk@redhat.com>

Add an SVM implementation to triple_fault_test to test that
emulated/injected shutdown works. Since SVM, unlike VMX, allows the
hypervisor to avoid intercepting SHUTDOWN in the guest, don't intercept
it, to test that KVM supports this correctly.
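The SVM-specific part boils down to clearing one intercept bit in L1's
VMCB before entering L2; roughly (condensed from the hunk below, the
helper name is invented for this sketch):

	/* L1 deliberately leaves SHUTDOWN unintercepted, which SVM (unlike
	 * VMX) permits, so a triple fault in L2 must take down L1 itself. */
	static void l1_run_l2_unintercepted_shutdown(struct svm_test_data *svm)
	{
		struct vmcb *vmcb = svm->vmcb;

		generic_svm_setup(svm, l2_guest_code,
				  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
		vmcb->control.intercept &= ~(BIT(INTERCEPT_SHUTDOWN));
		run_guest(vmcb, svm->vmcb_gpa);
		GUEST_ASSERT(0);	/* L1 should never get here */
	}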
Signed-off-by: Maxim Levitsky
---
 .../selftests/kvm/x86_64/triple_fault_event_test.c | 71 ++++++++++++++-----
 1 file changed, 54 insertions(+), 17 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c b/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c
index 70b44f0b52fef2..78d2941f7c1f0d 100644
--- a/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c
+++ b/tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c
@@ -3,6 +3,7 @@
 #include "kvm_util.h"
 #include "processor.h"
 #include "vmx.h"
+#include "svm_util.h"
 
 #include <string.h>
 #include <sys/ioctl.h>
@@ -20,10 +21,11 @@ static void l2_guest_code(void)
 		      : : [port] "d" (ARBITRARY_IO_PORT) : "rax");
 }
 
-void l1_guest_code(struct vmx_pages *vmx)
-{
 #define L2_GUEST_STACK_SIZE 64
-	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+
+void l1_guest_code_vmx(struct vmx_pages *vmx)
+{
 
 	GUEST_ASSERT(vmx->vmcs_gpa);
 	GUEST_ASSERT(prepare_for_vmx_operation(vmx));
@@ -38,24 +40,51 @@ void l1_guest_code(struct vmx_pages *vmx)
 	GUEST_DONE();
 }
 
+void l1_guest_code_svm(struct svm_test_data *svm)
+{
+	struct vmcb *vmcb = svm->vmcb;
+
+	generic_svm_setup(svm, l2_guest_code,
+			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	/* don't intercept shutdown to test the case of SVM allowing to do so */
+	vmcb->control.intercept &= ~(BIT(INTERCEPT_SHUTDOWN));
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	/* should not reach here, L1 should crash */
+	GUEST_ASSERT(0);
+}
+
 int main(void)
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_run *run;
 	struct kvm_vcpu_events events;
-	vm_vaddr_t vmx_pages_gva;
 	struct ucall uc;
 
-	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
+	bool has_vmx = kvm_cpu_has(X86_FEATURE_VMX);
+	bool has_svm = kvm_cpu_has(X86_FEATURE_SVM);
+
+	TEST_REQUIRE(has_vmx || has_svm);
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_X86_TRIPLE_FAULT_EVENT));
 
-	vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
-	vm_enable_cap(vm, KVM_CAP_X86_TRIPLE_FAULT_EVENT, 1);
+	if (has_vmx) {
+		vm_vaddr_t vmx_pages_gva;
+		vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code_vmx);
+		vcpu_alloc_vmx(vm, &vmx_pages_gva);
+		vcpu_args_set(vcpu, 1, vmx_pages_gva);
+	} else {
+		vm_vaddr_t svm_gva;
+		vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code_svm);
+		vcpu_alloc_svm(vm, &svm_gva);
+		vcpu_args_set(vcpu, 1, svm_gva);
+	}
+
+	vm_enable_cap(vm, KVM_CAP_X86_TRIPLE_FAULT_EVENT, 1);
 	run = vcpu->run;
 
-	vcpu_alloc_vmx(vm, &vmx_pages_gva);
-	vcpu_args_set(vcpu, 1, vmx_pages_gva);
 	vcpu_run(vcpu);
 
 	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
@@ -78,13 +107,21 @@ int main(void)
 		    "No triple fault pending");
 	vcpu_run(vcpu);
 
-	switch (get_ucall(vcpu, &uc)) {
-	case UCALL_DONE:
-		break;
-	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT(uc);
-	default:
-		TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
-	}
+	if (has_svm) {
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN,
+			    "Got exit_reason other than KVM_EXIT_SHUTDOWN: %u (%s)\n",
+			    run->exit_reason,
+			    exit_reason_str(run->exit_reason));
+		return 0;
+	} else {
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_DONE:
+			break;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+		default:
+			TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
+		}
+	}
 }

From patchwork Thu Nov 3 13:57:36 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 9/9] KVM: x86: remove exit_int_info warning in svm_handle_exit
Date: Thu, 3 Nov 2022 15:57:36 +0200
Message-Id: <20221103135736.42295-10-mlevitsk@redhat.com>

It is valid to receive an external interrupt while having a broken IDT
entry, which will lead to a #GP with an exit_int_info that contains the
index of the IDT entry (i.e. any value). Other exceptions can happen as
well, like #NP or #SS (if the stack switch fails).

Thus this warning can be user-triggered and has very little value.
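A rough guest-side sketch of the scenario described above (the helper name
is invented for illustration, and whether the resulting exit actually
carries the interrupt in exit_int_info depends on which exceptions are
intercepted):

	/* Break the IDT gate of a hardware interrupt vector and open an
	 * interrupt window; delivery of the external interrupt then faults,
	 * so the VM exit can have a non-interrupt exit code while
	 * exit_int_info still describes the event being injected. */
	static void guest_provoke_failed_irq_delivery(struct idt_entry *idt, int vector)
	{
		idt[vector].p = 0;		/* gate not present -> #NP/#GP on delivery */
		asm volatile("sti; hlt");	/* wait for an external interrupt */
	}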
Cc: stable@vger.kernel.org
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/svm.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e9cec1b692051c..36f651ce842174 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3428,15 +3428,6 @@ static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 		return 0;
 	}
 
-	if (is_external_interrupt(svm->vmcb->control.exit_int_info) &&
-	    exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR &&
-	    exit_code != SVM_EXIT_NPF && exit_code != SVM_EXIT_TASK_SWITCH &&
-	    exit_code != SVM_EXIT_INTR && exit_code != SVM_EXIT_NMI)
-		printk(KERN_ERR "%s: unexpected exit_int_info 0x%x "
-		       "exit_code 0x%x\n",
-		       __func__, svm->vmcb->control.exit_int_info,
-		       exit_code);
-
 	if (exit_fastpath != EXIT_FASTPATH_NONE)
 		return 1;