From patchwork Wed May 20 17:21:22 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11560873
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, Joerg Roedel, stable@vger.kernel.org
Subject: [PATCH 01/24] KVM: nSVM: fix condition for filtering async PF
Date: Wed, 20 May 2020 13:21:22 -0400
Message-Id: <20200520172145.23284-2-pbonzini@redhat.com>
In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com>
References: <20200520172145.23284-1-pbonzini@redhat.com>

Async page faults have to be trapped in the host (L1 in this case),
since the APF reason was passed from L0 to L1 and stored in the L1 APF
data page. This was completely reversed: the page faults were passed
to the guest, an L2 hypervisor.
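As a standalone illustration (plain userspace C, not kernel code), the two
predicates from the one-line change in the diff below can be tabulated over
all four combinations of npt_enabled and host_apf_reason; the old test let an
async page fault reach L2 whenever NPT was enabled, while the new one always
returns async page faults to the host:

#include <stdbool.h>
#include <stdio.h>

/* Old predicate: exit to the host only when shadow paging and no async PF. */
static bool old_exit_host(bool npt_enabled, unsigned int host_apf_reason)
{
	return !npt_enabled && host_apf_reason == 0;
}

/* New predicate: exit to the host when shadow paging, or whenever it is an async PF. */
static bool new_exit_host(bool npt_enabled, unsigned int host_apf_reason)
{
	return !npt_enabled || host_apf_reason != 0;
}

int main(void)
{
	for (int npt = 0; npt <= 1; npt++)
		for (unsigned int apf = 0; apf <= 1; apf++)
			printf("npt_enabled=%d host_apf_reason=%u old=%d new=%d\n",
			       npt, apf, old_exit_host(npt, apf), new_exit_host(npt, apf));
	return 0;
}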
Cc: stable@vger.kernel.org Reviewed-by: Sean Christopherson Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index a89a166d1cb8..f4cd2d0cc360 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -880,8 +880,8 @@ int nested_svm_exit_special(struct vcpu_svm *svm) return NESTED_EXIT_HOST; break; case SVM_EXIT_EXCP_BASE + PF_VECTOR: - /* When we're shadowing, trap PFs, but not async PF */ - if (!npt_enabled && svm->vcpu.arch.apf.host_apf_reason == 0) + /* Trap async PF even if not shadowing */ + if (!npt_enabled || svm->vcpu.arch.apf.host_apf_reason) return NESTED_EXIT_HOST; break; default: From patchwork Wed May 20 17:21:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560917 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 450E2912 for ; Wed, 20 May 2020 17:24:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2DCA6207D4 for ; Wed, 20 May 2020 17:24:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="HWUL3s5G" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728334AbgETRX4 (ORCPT ); Wed, 20 May 2020 13:23:56 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:24557 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727975AbgETRWE (ORCPT ); Wed, 20 May 2020 13:22:04 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995323; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=Q+b6NQYzxi9xQtxvdZVxdcLNZ7D1Hncrtt81Kk0DSak=; b=HWUL3s5G7R0YMLbXoUyeHm2PyH8vx03z4j1CCuNxuX3ONYyFnnvnb6uWHPm74hd/Z3vyiE 00NcO1ydVYBhkQgBeNSF2ka3Rco7tKQIpVniv1cpdyEwpigHAFgzmNTBVVXdqcHWm6xUS0 Gz01hMHZwDdX8pmiCmL9SSGncMo84Gc= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-185-DG15L6McMBqg3yiYId3LCw-1; Wed, 20 May 2020 13:21:59 -0400 X-MC-Unique: DG15L6McMBqg3yiYId3LCw-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 20CD019200F7; Wed, 20 May 2020 17:21:48 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9CA6560C20; Wed, 20 May 2020 17:21:47 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel , stable@vger.kernel.org Subject: [PATCH 02/24] KVM: nSVM: leave ASID aside in copy_vmcb_control_area Date: Wed, 20 May 2020 13:21:23 -0400 Message-Id: <20200520172145.23284-3-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: 
kvm@vger.kernel.org Restoring the ASID from the hsave area on VMEXIT is wrong, because its value depends on the handling of TLB flushes. Just skipping the field in copy_vmcb_control_area will do. Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index f4cd2d0cc360..d544cce4f964 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -150,7 +150,7 @@ static void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *from_vmcb dst->iopm_base_pa = from->iopm_base_pa; dst->msrpm_base_pa = from->msrpm_base_pa; dst->tsc_offset = from->tsc_offset; - dst->asid = from->asid; + /* asid not copied, it is handled manually for svm->vmcb. */ dst->tlb_ctl = from->tlb_ctl; dst->int_ctl = from->int_ctl; dst->int_vector = from->int_vector; From patchwork Wed May 20 17:21:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560919 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AF69B912 for ; Wed, 20 May 2020 17:24:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 987F8207D3 for ; Wed, 20 May 2020 17:24:02 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="cUqrUChF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727973AbgETRWD (ORCPT ); Wed, 20 May 2020 13:22:03 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:24743 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727918AbgETRWA (ORCPT ); Wed, 20 May 2020 13:22:00 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995319; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=3b4tcQFYxfckOxLBmeC1hOIW6HRUkgSyBcFd6KivnZY=; b=cUqrUChFxBxgLGcBpS/uYgG8rn6NSUffBEMIcgjyAgKpAekxgHnugvYSdHsmTM9AR+qc5q Lb7hZVDRLAA+E8Kk1FDQmrEvyAZkhUJSKsWFl5RqXR5RG/ITcCwg7rpAQx1MuqUmOdQ6eb +n9JKvHqkc4mYsfHgnX/fBie3qKVFiw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-475-zeOqZ5aqNfeE6xDcpqY4uA-1; Wed, 20 May 2020 13:21:56 -0400 X-MC-Unique: zeOqZ5aqNfeE6xDcpqY4uA-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1A5CC81CBF5; Wed, 20 May 2020 17:21:50 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9147182A33; Wed, 20 May 2020 17:21:49 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 04/24] KVM: nSVM: remove exit_required Date: Wed, 20 May 2020 13:21:25 -0400 Message-Id: <20200520172145.23284-5-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: 
MIMEDefang 2.79 on 10.5.11.13 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org All events now inject vmexits before vmentry rather than after vmexit. Therefore, exit_required is not set anymore and we can remove it. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 3 +-- arch/x86/kvm/svm/svm.c | 14 -------------- arch/x86/kvm/svm/svm.h | 3 --- 3 files changed, 1 insertion(+), 19 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index e80349132ea1..dd2868dd6129 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -785,8 +785,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); bool block_nested_events = - kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required || - svm->nested.nested_run_pending; + kvm_event_needs_reinjection(vcpu) || svm->nested.nested_run_pending; if (vcpu->arch.exception.pending) { if (block_nested_events) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9da4e5b6d724..04332d0efa5f 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2889,13 +2889,6 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath) if (npt_enabled) vcpu->arch.cr3 = svm->vmcb->save.cr3; - if (unlikely(svm->nested.exit_required)) { - nested_svm_vmexit(svm); - svm->nested.exit_required = false; - - return 1; - } - if (is_guest_mode(vcpu)) { int vmexit; @@ -3327,13 +3320,6 @@ static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu) svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP]; svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP]; - /* - * A vmexit emulation is required before the vcpu can be executed - * again. - */ - if (unlikely(svm->nested.exit_required)) - return EXIT_FASTPATH_NONE; - /* * Disable singlestep if we're injecting an interrupt/exception. * We don't want our modified rflags to be pushed on the stack where diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 8342032291fc..89fab75dd4f5 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -95,9 +95,6 @@ struct nested_state { u64 vmcb_msrpm; u64 vmcb_iopm; - /* A VMEXIT is required but not yet emulated */ - bool exit_required; - /* A VMRUN has started but has not yet been performed, so * we cannot inject a nested vmexit yet. 
*/ bool nested_run_pending;

From patchwork Wed May 20 17:21:26 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11560913
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, Joerg Roedel
Subject: [PATCH 05/24] KVM: nSVM: correctly inject INIT vmexits
Date: Wed, 20 May 2020 13:21:26 -0400
Message-Id: <20200520172145.23284-6-pbonzini@redhat.com>
In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com>
References: <20200520172145.23284-1-pbonzini@redhat.com>

The usual drill at this point, except there is no code to remove
because this case was not handled at all.
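For context, the "usual drill" is the check ordering that
svm_check_nested_events() applies to each event class, which this patch
extends to INIT. A simplified standalone sketch of that ordering (toy types
and names, not the kernel code) looks like this:

#include <errno.h>
#include <stdbool.h>

struct toy_event_state {
	bool pending;			/* is the event (here: INIT) pending? */
	bool block_nested_events;	/* reinjection or nested VMRUN in flight */
	bool l1_intercepts_it;		/* corresponding INTERCEPT_* bit set by L1 */
};

static int toy_check_nested_event(struct toy_event_state *s, void (*inject_vmexit)(void))
{
	if (!s->pending)
		return 0;		/* nothing to do */
	if (s->block_nested_events)
		return -EBUSY;		/* try again once the block clears */
	if (!s->l1_intercepts_it)
		return 0;		/* let L2 handle the event itself */
	inject_vmexit();		/* reflect the event to L1 as a #VMEXIT */
	return 0;
}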
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index dd2868dd6129..7efefceb5f2f 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -25,6 +25,7 @@ #include "trace.h" #include "mmu.h" #include "x86.h" +#include "lapic.h" #include "svm.h" static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu, @@ -781,11 +782,37 @@ static void nested_svm_intr(struct vcpu_svm *svm) nested_svm_vmexit(svm); } +static inline bool nested_exit_on_init(struct vcpu_svm *svm) +{ + return (svm->nested.intercept & (1ULL << INTERCEPT_INIT)); +} + +static void nested_svm_init(struct vcpu_svm *svm) +{ + svm->vmcb->control.exit_code = SVM_EXIT_INIT; + svm->vmcb->control.exit_info_1 = 0; + svm->vmcb->control.exit_info_2 = 0; + + nested_svm_vmexit(svm); +} + + static int svm_check_nested_events(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); bool block_nested_events = kvm_event_needs_reinjection(vcpu) || svm->nested.nested_run_pending; + struct kvm_lapic *apic = vcpu->arch.apic; + + if (lapic_in_kernel(vcpu) && + test_bit(KVM_APIC_INIT, &apic->pending_events)) { + if (block_nested_events) + return -EBUSY; + if (!nested_exit_on_init(svm)) + return 0; + nested_svm_init(svm); + return 0; + } if (vcpu->arch.exception.pending) { if (block_nested_events) From patchwork Wed May 20 17:21:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560921 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 15526912 for ; Wed, 20 May 2020 17:24:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EEAF1207D3 for ; Wed, 20 May 2020 17:24:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="VAD9UEg/" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727944AbgETRYM (ORCPT ); Wed, 20 May 2020 13:24:12 -0400 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:26601 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727940AbgETRWB (ORCPT ); Wed, 20 May 2020 13:22:01 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995320; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=Rs658D4BryPokbUe2jlTj8urayd8s1mPqmRjV+mKTDs=; b=VAD9UEg/iM6HxL43J6c+xnSLdJ1v+q9Ravz7jo+wmh6C4uCssf2/CUL749LAg7Rx5wcqnK Ip1BxB/CQYpVbjeafZdntepqEYUn347sx/9YyUDsB57AaNhKeAjUsZPgXMFayaBCRm5u2d MbLwMP85K0lA9frk2ZwBskhJpL7aQgM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-309-QInejBBJOQ-V2zZBfarM_g-1; Wed, 20 May 2020 13:21:58 -0400 X-MC-Unique: QInejBBJOQ-V2zZBfarM_g-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A9C20107B474; Wed, 20 May 2020 17:21:52 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com 
[10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id E250682A33; Wed, 20 May 2020 17:21:51 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 06/24] KVM: nSVM: move map argument out of enter_svm_guest_mode Date: Wed, 20 May 2020 13:21:27 -0400 Message-Id: <20200520172145.23284-7-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Unmapping the nested VMCB in enter_svm_guest_mode is a bit of a wart, since the map is not used elsewhere in the function. There are just two calls, so move it there. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 14 ++++++-------- arch/x86/kvm/svm/svm.c | 3 ++- arch/x86/kvm/svm/svm.h | 2 +- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 7efefceb5f2f..083f11d5e3fa 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -229,7 +229,7 @@ static bool nested_vmcb_checks(struct vmcb *vmcb) } void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, - struct vmcb *nested_vmcb, struct kvm_host_map *map) + struct vmcb *nested_vmcb) { bool evaluate_pending_interrupts = is_intercept(svm, INTERCEPT_VINTR) || @@ -308,8 +308,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, svm->vmcb->control.pause_filter_thresh = nested_vmcb->control.pause_filter_thresh; - kvm_vcpu_unmap(&svm->vcpu, map, true); - /* Enter Guest-Mode */ enter_guest_mode(&svm->vcpu); @@ -372,10 +370,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm) nested_vmcb->control.exit_code_hi = 0; nested_vmcb->control.exit_info_1 = 0; nested_vmcb->control.exit_info_2 = 0; - - kvm_vcpu_unmap(&svm->vcpu, &map, true); - - return ret; + goto out; } trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa, @@ -418,7 +413,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm) copy_vmcb_control_area(hsave, vmcb); svm->nested.nested_run_pending = 1; - enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map); + enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb); if (!nested_svm_vmrun_msrpm(svm)) { svm->vmcb->control.exit_code = SVM_EXIT_ERR; @@ -429,6 +424,9 @@ int nested_svm_vmrun(struct vcpu_svm *svm) nested_svm_vmexit(svm); } +out: + kvm_vcpu_unmap(&svm->vcpu, &map, true); + return ret; } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 04332d0efa5f..47c565338426 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3820,7 +3820,8 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb), &map) == -EINVAL) return 1; nested_vmcb = map.hva; - enter_svm_guest_mode(svm, vmcb, nested_vmcb, &map); + enter_svm_guest_mode(svm, vmcb, nested_vmcb); + kvm_vcpu_unmap(&svm->vcpu, &map, true); } return 0; } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 89fab75dd4f5..33e3f09d7a8e 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -395,7 +395,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) } void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, - struct vmcb *nested_vmcb, struct kvm_host_map *map); + struct vmcb *nested_vmcb); int nested_svm_vmrun(struct vcpu_svm *svm); void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb); int 
nested_svm_vmexit(struct vcpu_svm *svm);

From patchwork Wed May 20 17:21:28 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11560915
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, Joerg Roedel
Subject: [PATCH 07/24] KVM: nSVM: extract load_nested_vmcb_control
Date: Wed, 20 May 2020 13:21:28 -0400
Message-Id: <20200520172145.23284-8-pbonzini@redhat.com>
In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com>
References: <20200520172145.23284-1-pbonzini@redhat.com>

When restoring SVM nested state, the control state cache in svm->nested
will have to be filled, but the save state will not have to be moved
into svm->vmcb. Therefore, pull the code that handles the control area
into a separate function.
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 38 ++++++++++++++++++++++---------------- 1 file changed, 22 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 083f11d5e3fa..b1d0d0519664 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -228,6 +228,23 @@ static bool nested_vmcb_checks(struct vmcb *vmcb) return true; } +static void load_nested_vmcb_control(struct vcpu_svm *svm, + struct vmcb_control_area *control) +{ + svm->nested.nested_cr3 = control->nested_cr3; + + svm->nested.vmcb_msrpm = control->msrpm_base_pa & ~0x0fffULL; + svm->nested.vmcb_iopm = control->iopm_base_pa & ~0x0fffULL; + + /* cache intercepts */ + svm->nested.intercept_cr = control->intercept_cr; + svm->nested.intercept_dr = control->intercept_dr; + svm->nested.intercept_exceptions = control->intercept_exceptions; + svm->nested.intercept = control->intercept; + + svm->vcpu.arch.tsc_offset += control->tsc_offset; +} + void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, struct vmcb *nested_vmcb) { @@ -235,15 +252,16 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, is_intercept(svm, INTERCEPT_VINTR) || is_intercept(svm, INTERCEPT_IRET); + svm->nested.vmcb = vmcb_gpa; if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF) svm->vcpu.arch.hflags |= HF_HIF_MASK; else svm->vcpu.arch.hflags &= ~HF_HIF_MASK; - if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) { - svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3; + load_nested_vmcb_control(svm, &nested_vmcb->control); + + if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) nested_svm_init_mmu_context(&svm->vcpu); - } /* Load the nested guest state */ svm->vmcb->save.es = nested_vmcb->save.es; @@ -278,25 +296,15 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, svm->vcpu.arch.dr6 = nested_vmcb->save.dr6; svm->vmcb->save.cpl = nested_vmcb->save.cpl; - svm->nested.vmcb_msrpm = nested_vmcb->control.msrpm_base_pa & ~0x0fffULL; - svm->nested.vmcb_iopm = nested_vmcb->control.iopm_base_pa & ~0x0fffULL; - - /* cache intercepts */ - svm->nested.intercept_cr = nested_vmcb->control.intercept_cr; - svm->nested.intercept_dr = nested_vmcb->control.intercept_dr; - svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions; - svm->nested.intercept = nested_vmcb->control.intercept; - svm_flush_tlb(&svm->vcpu); - svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK; if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK) svm->vcpu.arch.hflags |= HF_VINTR_MASK; else svm->vcpu.arch.hflags &= ~HF_VINTR_MASK; - svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset; svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset; + svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK; svm->vmcb->control.virt_ext = nested_vmcb->control.virt_ext; svm->vmcb->control.int_vector = nested_vmcb->control.int_vector; svm->vmcb->control.int_state = nested_vmcb->control.int_state; @@ -317,8 +325,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, */ recalc_intercepts(svm); - svm->nested.vmcb = vmcb_gpa; - /* * If L1 had a pending IRQ/NMI before executing VMRUN, * which wasn't delivered because it was disallowed (e.g. 
From patchwork Wed May 20 17:21:29 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11560907
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, Joerg Roedel
Subject: [PATCH 08/24] KVM: nSVM: extract preparation of VMCB for nested run
Date: Wed, 20 May 2020 13:21:29 -0400
Message-Id: <20200520172145.23284-9-pbonzini@redhat.com>
In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com>
References: <20200520172145.23284-1-pbonzini@redhat.com>

Split out filling svm->vmcb.save and svm->vmcb.control before VMRUN.
Only the latter will be useful when restoring nested SVM state.
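A toy sketch of the resulting shape (illustrative types only, not the kernel
structures): a nested VMRUN applies both halves, while a later state-restore
path would replay only the control half, because the L2 register state would
be re-established from userspace through the usual register-setting path:

struct toy_save    { unsigned long rip, rsp, rax; };
struct toy_control { unsigned long long intercept; long long tsc_offset; };
struct toy_vmcb    { struct toy_save save; struct toy_control control; };

static void toy_prepare_save(struct toy_vmcb *host, const struct toy_vmcb *nested)
{
	host->save = nested->save;		/* L2 guest register state */
}

static void toy_prepare_control(struct toy_vmcb *host, const struct toy_vmcb *nested)
{
	host->control = nested->control;	/* intercepts, TSC offset, ... */
}

static void toy_vmrun(struct toy_vmcb *host, const struct toy_vmcb *nested)
{
	toy_prepare_save(host, nested);
	toy_prepare_control(host, nested);	/* the only half a restore path needs */
}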
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 52 ++++++++++++++++++++++----------------- 1 file changed, 30 insertions(+), 22 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index b1d0d0519664..4f81c2196bf6 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -245,24 +245,8 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, svm->vcpu.arch.tsc_offset += control->tsc_offset; } -void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, - struct vmcb *nested_vmcb) +static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb) { - bool evaluate_pending_interrupts = - is_intercept(svm, INTERCEPT_VINTR) || - is_intercept(svm, INTERCEPT_IRET); - - svm->nested.vmcb = vmcb_gpa; - if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF) - svm->vcpu.arch.hflags |= HF_HIF_MASK; - else - svm->vcpu.arch.hflags &= ~HF_HIF_MASK; - - load_nested_vmcb_control(svm, &nested_vmcb->control); - - if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) - nested_svm_init_mmu_context(&svm->vcpu); - /* Load the nested guest state */ svm->vmcb->save.es = nested_vmcb->save.es; svm->vmcb->save.cs = nested_vmcb->save.cs; @@ -280,9 +264,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, } else (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3); - /* Guest paging mode is active - reset mmu */ - kvm_mmu_reset_context(&svm->vcpu); - svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2; kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax); kvm_rsp_write(&svm->vcpu, nested_vmcb->save.rsp); @@ -295,6 +276,15 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, svm->vmcb->save.dr7 = nested_vmcb->save.dr7; svm->vcpu.arch.dr6 = nested_vmcb->save.dr6; svm->vmcb->save.cpl = nested_vmcb->save.cpl; +} + +static void nested_prepare_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_vmcb) +{ + if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) + nested_svm_init_mmu_context(&svm->vcpu); + + /* Guest paging mode is active - reset mmu */ + kvm_mmu_reset_context(&svm->vcpu); svm_flush_tlb(&svm->vcpu); if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK) @@ -325,6 +315,26 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, */ recalc_intercepts(svm); + mark_all_dirty(svm->vmcb); +} + +void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, + struct vmcb *nested_vmcb) +{ + bool evaluate_pending_interrupts = + is_intercept(svm, INTERCEPT_VINTR) || + is_intercept(svm, INTERCEPT_IRET); + + svm->nested.vmcb = vmcb_gpa; + if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF) + svm->vcpu.arch.hflags |= HF_HIF_MASK; + else + svm->vcpu.arch.hflags &= ~HF_HIF_MASK; + + load_nested_vmcb_control(svm, &nested_vmcb->control); + nested_prepare_vmcb_save(svm, nested_vmcb); + nested_prepare_vmcb_control(svm, nested_vmcb); + /* * If L1 had a pending IRQ/NMI before executing VMRUN, * which wasn't delivered because it was disallowed (e.g. 
@@ -340,8 +350,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, enable_gif(svm); if (unlikely(evaluate_pending_interrupts)) kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); - - mark_all_dirty(svm->vmcb); } int nested_svm_vmrun(struct vcpu_svm *svm)

From patchwork Wed May 20 17:21:30 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11560911
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, Joerg Roedel
Subject: [PATCH 09/24] KVM: nSVM: clean up tsc_offset update
Date: Wed, 20 May 2020 13:21:30 -0400
Message-Id: <20200520172145.23284-10-pbonzini@redhat.com>
In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com>
References: <20200520172145.23284-1-pbonzini@redhat.com>

Use l1_tsc_offset to compute svm->vcpu.arch.tsc_offset and
svm->vmcb->control.tsc_offset, instead of relying on hsave.
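A small worked example (made-up numbers, plain C) of the identities this
patch switches to: while L2 runs, the effective offset is L1's own offset
plus the offset requested in the nested VMCB, and on #VMEXIT it falls back
to L1's offset, with no need to consult hsave:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t l1_tsc_offset = 1000;	/* vcpu->arch.l1_tsc_offset, offset L1 itself sees */
	uint64_t l2_tsc_offset = 250;	/* tsc_offset field of the nested VMCB */

	/* On nested VMRUN: L2's offset is relative to L1's, so they add up. */
	uint64_t while_in_l2 = l1_tsc_offset + l2_tsc_offset;

	/* On nested #VMEXIT: return to L1's own offset. */
	uint64_t after_vmexit = l1_tsc_offset;

	printf("in L2: %llu, after #VMEXIT: %llu\n",
	       (unsigned long long)while_in_l2, (unsigned long long)after_vmexit);
	return 0;
}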
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 4f81c2196bf6..2aaa539482ae 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -241,8 +241,6 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, svm->nested.intercept_dr = control->intercept_dr; svm->nested.intercept_exceptions = control->intercept_exceptions; svm->nested.intercept = control->intercept; - - svm->vcpu.arch.tsc_offset += control->tsc_offset; } static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb) @@ -292,7 +290,8 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm, struct vmcb *neste else svm->vcpu.arch.hflags &= ~HF_VINTR_MASK; - svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset; + svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = + svm->vcpu.arch.l1_tsc_offset + nested_vmcb->control.tsc_offset; svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK; svm->vmcb->control.virt_ext = nested_vmcb->control.virt_ext; @@ -557,7 +556,9 @@ int nested_svm_vmexit(struct vcpu_svm *svm) /* Restore the original control entries */ copy_vmcb_control_area(vmcb, hsave); - svm->vcpu.arch.tsc_offset = svm->vmcb->control.tsc_offset; + svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = + svm->vcpu.arch.l1_tsc_offset; + kvm_clear_exception_queue(&svm->vcpu); kvm_clear_interrupt_queue(&svm->vcpu); From patchwork Wed May 20 17:21:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560903 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 65F8A14C0 for ; Wed, 20 May 2020 17:23:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4146120873 for ; Wed, 20 May 2020 17:23:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="LhZnCDQ0" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728316AbgETRX1 (ORCPT ); Wed, 20 May 2020 13:23:27 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:41194 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728088AbgETRWP (ORCPT ); Wed, 20 May 2020 13:22:15 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995334; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=kz4tq35xt3C9g7L/W0gZ+sTXPcxeT9hQCry1awKznYk=; b=LhZnCDQ0rS+MYEubeJ7ljtVKn2fMVuACn3gbGPpw0HsmkvnYHttuEIfu/tP99exFZ5B4fb UMlxVHxYD7ODEI06966t7ZRmLib0xYDlp3JL4Pv1qvjZJZZCMyTwkWXuIwSiHnGGKWZlmA Hm7Z0XT5L3Hhty2zsBjDl1O0eY6zHUE= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-333-ruZB3uBTOGegCc9Ea_tGXQ-1; Wed, 20 May 2020 13:22:08 -0400 X-MC-Unique: ruZB3uBTOGegCc9Ea_tGXQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2DADF8B5E70; Wed, 20 May 2020 
17:21:59 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 98C9479C3F; Wed, 20 May 2020 17:21:58 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 10/24] KVM: nSVM: pass vmcb_control_area to copy_vmcb_control_area Date: Wed, 20 May 2020 13:21:31 -0400 Message-Id: <20200520172145.23284-11-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This will come in handy when we put a struct vmcb_control_area in svm->nested. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 2aaa539482ae..c759124ed6af 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -141,11 +141,9 @@ void recalc_intercepts(struct vcpu_svm *svm) c->intercept |= g->intercept; } -static void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *from_vmcb) +static void copy_vmcb_control_area(struct vmcb_control_area *dst, + struct vmcb_control_area *from) { - struct vmcb_control_area *dst = &dst_vmcb->control; - struct vmcb_control_area *from = &from_vmcb->control; - dst->intercept_cr = from->intercept_cr; dst->intercept_dr = from->intercept_dr; dst->intercept_exceptions = from->intercept_exceptions; @@ -423,7 +421,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm) else hsave->save.cr3 = kvm_read_cr3(&svm->vcpu); - copy_vmcb_control_area(hsave, vmcb); + copy_vmcb_control_area(&hsave->control, &vmcb->control); svm->nested.nested_run_pending = 1; enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb); @@ -554,7 +552,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; /* Restore the original control entries */ - copy_vmcb_control_area(vmcb, hsave); + copy_vmcb_control_area(&vmcb->control, &hsave->control); svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = svm->vcpu.arch.l1_tsc_offset; From patchwork Wed May 20 17:21:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560879 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 232F1912 for ; Wed, 20 May 2020 17:22:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0A4992075F for ; Wed, 20 May 2020 17:22:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="bPpms5zM" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728003AbgETRWF (ORCPT ); Wed, 20 May 2020 13:22:05 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:51725 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727964AbgETRWE (ORCPT ); Wed, 20 May 2020 13:22:04 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995322; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=8SK9YQtue71WyQs46tC99v9KZ5RewVoWZvvkNYRnCm4=; b=bPpms5zMvj68NgiM9D0VWHyNo8+xqJEYl0+YQWDFwSGSMkBzPqv5Uz5UV1lj7/LQsHKOU+ sFQEV8m1cVX9Off4J8IcgC1UnnDjTTq6QdGFpvSmsmWSkSEqXxi9AYGIJCXtP+3xTWM0/E G/GVNNkVqAyytpTVRqNowYBMwpn25UM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-501-VyH9usssMAayu-jCYhDgqQ-1; Wed, 20 May 2020 13:22:01 -0400 X-MC-Unique: VyH9usssMAayu-jCYhDgqQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id ED63C57093; Wed, 20 May 2020 17:21:59 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 538DA797EE; Wed, 20 May 2020 17:21:59 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 11/24] KVM: nSVM: remove trailing padding for struct vmcb_control_area Date: Wed, 20 May 2020 13:21:32 -0400 Message-Id: <20200520172145.23284-12-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Allow placing the VMCB structs on the stack or in other structs without wasting too much space. Add BUILD_BUG_ON as a quick safeguard against typos. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/svm.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index 6ece8561ba66..8a1f5382a4ea 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -96,7 +96,6 @@ struct __attribute__ ((__packed__)) vmcb_control_area { u8 reserved_6[8]; /* Offset 0xe8 */ u64 avic_logical_id; /* Offset 0xf0 */ u64 avic_physical_id; /* Offset 0xf8 */ - u8 reserved_7[768]; }; @@ -203,8 +202,16 @@ struct __attribute__ ((__packed__)) vmcb_save_area { u64 last_excp_to; }; + +static inline void __unused_size_checks(void) +{ + BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 0x298); + BUILD_BUG_ON(sizeof(struct vmcb_control_area) != 256); +} + struct __attribute__ ((__packed__)) vmcb { struct vmcb_control_area control; + u8 reserved_control[1024 - sizeof(struct vmcb_control_area)]; struct vmcb_save_area save; }; From patchwork Wed May 20 17:21:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560899 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8C46F14C0 for ; Wed, 20 May 2020 17:23:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6F94D20849 for ; Wed, 20 May 2020 17:23:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="TwitjXP0" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728321AbgETRX2 (ORCPT ); Wed, 20 May 2020 13:23:28 -0400 Received: from us-smtp-2.mimecast.com 
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: vkuznets@redhat.com, Joerg Roedel
Subject: [PATCH 12/24] KVM: nSVM: save all control fields in svm->nested
Date: Wed, 20 May 2020 13:21:33 -0400
Message-Id: <20200520172145.23284-13-pbonzini@redhat.com>
In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com>
References: <20200520172145.23284-1-pbonzini@redhat.com>

In preparation for nested SVM save/restore, store all data that matters
from the VMCB control area into svm->nested. It will then become part of
the nested SVM state that is saved by KVM_GET_NESTED_STATE and restored
by KVM_SET_NESTED_STATE, just like the cached vmcs12 for nVMX.
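A hypothetical sketch (made-up names, not code from this series) of why one
embedded control struct pays off: exporting and re-importing the nested state
can then largely be a copy of that struct to and from a userspace buffer, much
like the cached vmcs12 on the VMX side:

#include <string.h>

struct toy_ctl          { unsigned long long intercept, nested_cr3, tsc_offset; };
struct toy_nested_state { int nested_run_pending; struct toy_ctl ctl; };

static void toy_get_nested_state(void *user_buf, const struct toy_nested_state *n)
{
	memcpy(user_buf, &n->ctl, sizeof(n->ctl));	/* save direction */
}

static void toy_set_nested_state(struct toy_nested_state *n, const void *user_buf)
{
	memcpy(&n->ctl, user_buf, sizeof(n->ctl));	/* restore direction */
}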
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 73 +++++++++++++++++---------------------- arch/x86/kvm/svm/svm.c | 10 ++++-- arch/x86/kvm/svm/svm.h | 20 +++-------- 3 files changed, 45 insertions(+), 58 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index c759124ed6af..9999bce9adcf 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -60,7 +60,7 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu, static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index) { struct vcpu_svm *svm = to_svm(vcpu); - u64 cr3 = svm->nested.nested_cr3; + u64 cr3 = svm->nested.ctl.nested_cr3; u64 pdpte; int ret; @@ -75,7 +75,7 @@ static unsigned long nested_svm_get_tdp_cr3(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); - return svm->nested.nested_cr3; + return svm->nested.ctl.nested_cr3; } static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu) @@ -100,8 +100,7 @@ static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu) void recalc_intercepts(struct vcpu_svm *svm) { - struct vmcb_control_area *c, *h; - struct nested_state *g; + struct vmcb_control_area *c, *h, *g; mark_dirty(svm->vmcb, VMCB_INTERCEPTS); @@ -110,7 +109,7 @@ void recalc_intercepts(struct vcpu_svm *svm) c = &svm->vmcb->control; h = &svm->nested.hsave->control; - g = &svm->nested; + g = &svm->nested.ctl; svm->nested.host_intercept_exceptions = h->intercept_exceptions; @@ -180,7 +179,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm) */ int i; - if (!(svm->nested.intercept & (1ULL << INTERCEPT_MSR_PROT))) + if (!(svm->nested.ctl.intercept & (1ULL << INTERCEPT_MSR_PROT))) return true; for (i = 0; i < MSRPM_OFFSETS; i++) { @@ -191,7 +190,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm) break; p = msrpm_offsets[i]; - offset = svm->nested.vmcb_msrpm + (p * 4); + offset = svm->nested.ctl.msrpm_base_pa + (p * 4); if (kvm_vcpu_read_guest(&svm->vcpu, offset, &value, 4)) return false; @@ -229,16 +228,10 @@ static bool nested_vmcb_checks(struct vmcb *vmcb) static void load_nested_vmcb_control(struct vcpu_svm *svm, struct vmcb_control_area *control) { - svm->nested.nested_cr3 = control->nested_cr3; + copy_vmcb_control_area(&svm->nested.ctl, control); - svm->nested.vmcb_msrpm = control->msrpm_base_pa & ~0x0fffULL; - svm->nested.vmcb_iopm = control->iopm_base_pa & ~0x0fffULL; - - /* cache intercepts */ - svm->nested.intercept_cr = control->intercept_cr; - svm->nested.intercept_dr = control->intercept_dr; - svm->nested.intercept_exceptions = control->intercept_exceptions; - svm->nested.intercept = control->intercept; + svm->nested.ctl.msrpm_base_pa &= ~0x0fffULL; + svm->nested.ctl.iopm_base_pa &= ~0x0fffULL; } static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb) @@ -274,34 +267,32 @@ static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_v svm->vmcb->save.cpl = nested_vmcb->save.cpl; } -static void nested_prepare_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_vmcb) +static void nested_prepare_vmcb_control(struct vcpu_svm *svm) { - if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) + if (svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) nested_svm_init_mmu_context(&svm->vcpu); /* Guest paging mode is active - reset mmu */ kvm_mmu_reset_context(&svm->vcpu); svm_flush_tlb(&svm->vcpu); - if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK) + if (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) svm->vcpu.arch.hflags |= HF_VINTR_MASK; else 
svm->vcpu.arch.hflags &= ~HF_VINTR_MASK; svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = - svm->vcpu.arch.l1_tsc_offset + nested_vmcb->control.tsc_offset; + svm->vcpu.arch.l1_tsc_offset + svm->nested.ctl.tsc_offset; - svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK; - svm->vmcb->control.virt_ext = nested_vmcb->control.virt_ext; - svm->vmcb->control.int_vector = nested_vmcb->control.int_vector; - svm->vmcb->control.int_state = nested_vmcb->control.int_state; - svm->vmcb->control.event_inj = nested_vmcb->control.event_inj; - svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err; + svm->vmcb->control.int_ctl = svm->nested.ctl.int_ctl | V_INTR_MASKING_MASK; + svm->vmcb->control.virt_ext = svm->nested.ctl.virt_ext; + svm->vmcb->control.int_vector = svm->nested.ctl.int_vector; + svm->vmcb->control.int_state = svm->nested.ctl.int_state; + svm->vmcb->control.event_inj = svm->nested.ctl.event_inj; + svm->vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err; - svm->vmcb->control.pause_filter_count = - nested_vmcb->control.pause_filter_count; - svm->vmcb->control.pause_filter_thresh = - nested_vmcb->control.pause_filter_thresh; + svm->vmcb->control.pause_filter_count = svm->nested.ctl.pause_filter_count; + svm->vmcb->control.pause_filter_thresh = svm->nested.ctl.pause_filter_thresh; /* Enter Guest-Mode */ enter_guest_mode(&svm->vcpu); @@ -330,7 +321,7 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, load_nested_vmcb_control(svm, &nested_vmcb->control); nested_prepare_vmcb_save(svm, nested_vmcb); - nested_prepare_vmcb_control(svm, nested_vmcb); + nested_prepare_vmcb_control(svm); /* * If L1 had a pending IRQ/NMI before executing VMRUN, @@ -560,7 +551,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) kvm_clear_exception_queue(&svm->vcpu); kvm_clear_interrupt_queue(&svm->vcpu); - svm->nested.nested_cr3 = 0; + svm->nested.ctl.nested_cr3 = 0; /* Restore selected save entries */ svm->vmcb->save.es = hsave->save.es; @@ -610,7 +601,7 @@ static int nested_svm_exit_handled_msr(struct vcpu_svm *svm) u32 offset, msr, value; int write, mask; - if (!(svm->nested.intercept & (1ULL << INTERCEPT_MSR_PROT))) + if (!(svm->nested.ctl.intercept & (1ULL << INTERCEPT_MSR_PROT))) return NESTED_EXIT_HOST; msr = svm->vcpu.arch.regs[VCPU_REGS_RCX]; @@ -624,7 +615,7 @@ static int nested_svm_exit_handled_msr(struct vcpu_svm *svm) /* Offset is in 32 bit units but need in 8 bit units */ offset *= 4; - if (kvm_vcpu_read_guest(&svm->vcpu, svm->nested.vmcb_msrpm + offset, &value, 4)) + if (kvm_vcpu_read_guest(&svm->vcpu, svm->nested.ctl.msrpm_base_pa + offset, &value, 4)) return NESTED_EXIT_DONE; return (value & mask) ? NESTED_EXIT_DONE : NESTED_EXIT_HOST; @@ -637,13 +628,13 @@ static int nested_svm_intercept_ioio(struct vcpu_svm *svm) u8 start_bit; u64 gpa; - if (!(svm->nested.intercept & (1ULL << INTERCEPT_IOIO_PROT))) + if (!(svm->nested.ctl.intercept & (1ULL << INTERCEPT_IOIO_PROT))) return NESTED_EXIT_HOST; port = svm->vmcb->control.exit_info_1 >> 16; size = (svm->vmcb->control.exit_info_1 & SVM_IOIO_SIZE_MASK) >> SVM_IOIO_SIZE_SHIFT; - gpa = svm->nested.vmcb_iopm + (port / 8); + gpa = svm->nested.ctl.iopm_base_pa + (port / 8); start_bit = port % 8; iopm_len = (start_bit + size > 8) ? 2 : 1; mask = (0xf >> (4 - size)) << start_bit; @@ -669,13 +660,13 @@ static int nested_svm_intercept(struct vcpu_svm *svm) break; case SVM_EXIT_READ_CR0 ... 
SVM_EXIT_WRITE_CR8: { u32 bit = 1U << (exit_code - SVM_EXIT_READ_CR0); - if (svm->nested.intercept_cr & bit) + if (svm->nested.ctl.intercept_cr & bit) vmexit = NESTED_EXIT_DONE; break; } case SVM_EXIT_READ_DR0 ... SVM_EXIT_WRITE_DR7: { u32 bit = 1U << (exit_code - SVM_EXIT_READ_DR0); - if (svm->nested.intercept_dr & bit) + if (svm->nested.ctl.intercept_dr & bit) vmexit = NESTED_EXIT_DONE; break; } @@ -694,7 +685,7 @@ static int nested_svm_intercept(struct vcpu_svm *svm) } default: { u64 exit_bits = 1ULL << (exit_code - SVM_EXIT_INTR); - if (svm->nested.intercept & exit_bits) + if (svm->nested.ctl.intercept & exit_bits) vmexit = NESTED_EXIT_DONE; } } @@ -734,7 +725,7 @@ static bool nested_exit_on_exception(struct vcpu_svm *svm) { unsigned int nr = svm->vcpu.arch.exception.nr; - return (svm->nested.intercept_exceptions & (1 << nr)); + return (svm->nested.ctl.intercept_exceptions & (1 << nr)); } static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm) @@ -795,7 +786,7 @@ static void nested_svm_intr(struct vcpu_svm *svm) static inline bool nested_exit_on_init(struct vcpu_svm *svm) { - return (svm->nested.intercept & (1ULL << INTERCEPT_INIT)); + return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_INIT)); } static void nested_svm_init(struct vcpu_svm *svm) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 47c565338426..ffb349739e0c 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2173,7 +2173,7 @@ static bool check_selective_cr0_intercepted(struct vcpu_svm *svm, bool ret = false; u64 intercept; - intercept = svm->nested.intercept; + intercept = svm->nested.ctl.intercept; if (!is_guest_mode(&svm->vcpu) || (!(intercept & (1ULL << INTERCEPT_SELECTIVE_CR0)))) @@ -3320,6 +3320,12 @@ static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu) svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP]; svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP]; + if (unlikely(svm->nested.nested_run_pending)) { + /* After this vmentry, these fields will be used up. */ + svm->nested.ctl.event_inj = 0; + svm->nested.ctl.event_inj_err = 0; + } + /* * Disable singlestep if we're injecting an interrupt/exception. * We don't want our modified rflags to be pushed on the stack where @@ -3655,7 +3661,7 @@ static int svm_check_intercept(struct kvm_vcpu *vcpu, info->intercept == x86_intercept_clts) break; - intercept = svm->nested.intercept; + intercept = svm->nested.ctl.intercept; if (!(intercept & (1ULL << INTERCEPT_SELECTIVE_CR0))) break; diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 33e3f09d7a8e..dd5418f20256 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -91,22 +91,12 @@ struct nested_state { /* These are the merged vectors */ u32 *msrpm; - /* gpa pointers to the real vectors */ - u64 vmcb_msrpm; - u64 vmcb_iopm; - /* A VMRUN has started but has not yet been performed, so * we cannot inject a nested vmexit yet. 
*/ bool nested_run_pending; - /* cache for intercepts of the guest */ - u32 intercept_cr; - u32 intercept_dr; - u32 intercept_exceptions; - u64 intercept; - - /* Nested Paging related state */ - u64 nested_cr3; + /* cache for control fields of the guest */ + struct vmcb_control_area ctl; }; struct vcpu_svm { @@ -381,17 +371,17 @@ static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu) static inline bool nested_exit_on_smi(struct vcpu_svm *svm) { - return (svm->nested.intercept & (1ULL << INTERCEPT_SMI)); + return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_SMI)); } static inline bool nested_exit_on_intr(struct vcpu_svm *svm) { - return (svm->nested.intercept & (1ULL << INTERCEPT_INTR)); + return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_INTR)); } static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) { - return (svm->nested.intercept & (1ULL << INTERCEPT_NMI)); + return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_NMI)); } void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, From patchwork Wed May 20 17:21:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560901 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3C2FF912 for ; Wed, 20 May 2020 17:23:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1AF622075F for ; Wed, 20 May 2020 17:23:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="MIIm97Nw" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728138AbgETRX0 (ORCPT ); Wed, 20 May 2020 13:23:26 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:53366 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728097AbgETRWP (ORCPT ); Wed, 20 May 2020 13:22:15 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995334; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=Juz3RPZ1E1zRyR+zIZIUrmPTzlSyrk4ogMhENNPsAtc=; b=MIIm97Nw/ELgFWEMIaKZl0HdTSRUPwT8o5s+iUmSoxz2JAIZ49sFhMNmCxuHVhPuYq54yS /QqGf6RHiOBHgB2Ncz/LCn4y75Vrs7gxtPNVhMo8yItl84VNPNLpIEYPUPS4Ls7NdiQvHM 56BiEwdfWx6ThAzLnAvD8tMxX6M4Jro= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-511-GvKraeOsPRqgU6IBSguXNQ-1; Wed, 20 May 2020 13:22:10 -0400 X-MC-Unique: GvKraeOsPRqgU6IBSguXNQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6E1F28B785F; Wed, 20 May 2020 17:22:01 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id DC7E270461; Wed, 20 May 2020 17:22:00 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 13/24] KVM: nSVM: do not reload pause filter fields from VMCB Date: Wed, 20 May 2020 13:21:34 -0400 Message-Id: 
<20200520172145.23284-14-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org These fields do not change from VMRUN to VMEXIT; there is no need to reload them on nested VMEXIT. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 9999bce9adcf..b6a1e1ff271e 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -533,11 +533,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm) nested_vmcb->control.event_inj = 0; nested_vmcb->control.event_inj_err = 0; - nested_vmcb->control.pause_filter_count = - svm->vmcb->control.pause_filter_count; - nested_vmcb->control.pause_filter_thresh = - svm->vmcb->control.pause_filter_thresh; - /* We always set V_INTR_MASKING and remember the old value in hflags */ if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK)) nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; From patchwork Wed May 20 17:21:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560909 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E52BC913 for ; Wed, 20 May 2020 17:23:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CEAFB20899 for ; Wed, 20 May 2020 17:23:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="imh20zeq" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728068AbgETRWN (ORCPT ); Wed, 20 May 2020 13:22:13 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:24508 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1727819AbgETRWJ (ORCPT ); Wed, 20 May 2020 13:22:09 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995327; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=osxB5OVd+8V/5wuYjIIWK6D5QvfwU65EZsue7tkqQLQ=; b=imh20zeqakUzDU5Tpjmqs61XY0Bq8E04w4/zw69yGeU2AJ41CUI0u7Mzjwi+hQgfkDHWOO IeOBjljiX1H8MTND8HKEm8loNp/r0ZNMq5edU+AXtQLuINpaqUuuuDR+qp96ay+IbQffK4 v/pd8g/t6op0W94COxeHP2jNOecuGOY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-426-CtuufgVZNmS3Y9pTnZILuw-1; Wed, 20 May 2020 13:22:05 -0400 X-MC-Unique: CtuufgVZNmS3Y9pTnZILuw-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 965A480B72E; Wed, 20 May 2020 17:22:03 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9362270461; Wed, 20 May 2020 17:22:01 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 14/24] KVM: nSVM: 
remove HF_VINTR_MASK Date: Wed, 20 May 2020 13:21:35 -0400 Message-Id: <20200520172145.23284-15-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Now that the int_ctl field is stored in svm->nested.ctl.int_ctl, we can use it instead of vcpu->arch.hflags to check whether L2 is running in V_INTR_MASKING mode. Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/svm/nested.c | 10 +++------- arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/svm/svm.h | 4 +++- 4 files changed, 7 insertions(+), 10 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index db261da578f3..f1c4cb2d0541 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1594,7 +1594,6 @@ enum { #define HF_GIF_MASK (1 << 0) #define HF_HIF_MASK (1 << 1) -#define HF_VINTR_MASK (1 << 2) #define HF_NMI_MASK (1 << 3) #define HF_IRET_MASK (1 << 4) #define HF_GUEST_MASK (1 << 5) /* VCPU is in guest-mode */ diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index b6a1e1ff271e..9e4d24acadd7 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -118,7 +118,7 @@ void recalc_intercepts(struct vcpu_svm *svm) c->intercept_exceptions = h->intercept_exceptions; c->intercept = h->intercept; - if (svm->vcpu.arch.hflags & HF_VINTR_MASK) { + if (g->int_ctl & V_INTR_MASKING_MASK) { /* We only want the cr8 intercept bits of L1 */ c->intercept_cr &= ~(1U << INTERCEPT_CR8_READ); c->intercept_cr &= ~(1U << INTERCEPT_CR8_WRITE); @@ -276,10 +276,6 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm) kvm_mmu_reset_context(&svm->vcpu); svm_flush_tlb(&svm->vcpu); - if (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) - svm->vcpu.arch.hflags |= HF_VINTR_MASK; - else - svm->vcpu.arch.hflags &= ~HF_VINTR_MASK; svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = svm->vcpu.arch.l1_tsc_offset + svm->nested.ctl.tsc_offset; @@ -533,8 +529,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm) nested_vmcb->control.event_inj = 0; nested_vmcb->control.event_inj_err = 0; - /* We always set V_INTR_MASKING and remember the old value in hflags */ - if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK)) + /* We always set V_INTR_MASKING and remember the old value in svm->nested */ + if (!(svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)) nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; /* Restore the original control entries */ diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index ffb349739e0c..10c80c13b679 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3080,7 +3080,7 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) if (is_guest_mode(vcpu)) { /* As long as interrupts are being delivered... */ - if ((svm->vcpu.arch.hflags & HF_VINTR_MASK) + if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) ? 
!(svm->vcpu.arch.hflags & HF_HIF_MASK) : !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF)) return true; diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index dd5418f20256..73986f96ba2a 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -366,7 +366,9 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu); static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu) { - return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK); + struct vcpu_svm *svm = to_svm(vcpu); + + return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK); } static inline bool nested_exit_on_smi(struct vcpu_svm *svm) From patchwork Wed May 20 17:21:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560905 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 25747913 for ; Wed, 20 May 2020 17:23:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0C1A0207ED for ; Wed, 20 May 2020 17:23:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="GHc2S5SH" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727772AbgETRXd (ORCPT ); Wed, 20 May 2020 13:23:33 -0400 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:25502 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728085AbgETRWO (ORCPT ); Wed, 20 May 2020 13:22:14 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995333; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=O5C/wpSFG+A10gW4g2b6rirce6pktuJyF39GSqi1oN0=; b=GHc2S5SHzhuV2Amk+g9EFMhCXllgna4KcGc4mlJxB9u3DT/0NLU6sQ40rM8cq2kmNiTgN4 Ef065IMAqIzZAwMviWkTbpnmdTtQC3mgnEZwSTWrmiq/pMuBwydEzPi7UKBRdVs+r2W14H zuAy9I+wXooVH958Gz3GD12CBVwGMoY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-439-s9k8H_HhOjmzYZiuQVsb3g-1; Wed, 20 May 2020 13:22:06 -0400 X-MC-Unique: s9k8H_HhOjmzYZiuQVsb3g-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E58E5835BB8; Wed, 20 May 2020 17:22:04 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id BBF3670461; Wed, 20 May 2020 17:22:03 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 15/24] KVM: nSVM: remove HF_HIF_MASK Date: Wed, 20 May 2020 13:21:36 -0400 Message-Id: <20200520172145.23284-16-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The L1 flags can be found in the save area of svm->nested.hsave, fish it from there so that there is one fewer thing to migrate. 
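As a rough standalone illustration of the check this touches (not the kernel code itself): with V_INTR_MASKING set, physical interrupt blocking for L2 follows L1's RFLAGS.IF as saved in the hsave area, otherwise it follows the current RFLAGS.IF. The struct and field names below are invented for the sketch; only that selection mirrors the change.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X86_EFLAGS_IF		(1u << 9)
/* bit position taken from the SVM int_ctl layout; value is illustrative */
#define V_INTR_MASKING_MASK	(1u << 24)

struct model_vcpu {
	uint64_t l2_rflags;	/* RFLAGS while L2 runs */
	uint64_t hsave_rflags;	/* L1's RFLAGS, saved at VMRUN */
	uint32_t nested_int_ctl;	/* cached int_ctl from the nested VMCB */
};

static bool model_interrupt_blocked(const struct model_vcpu *v)
{
	if (v->nested_int_ctl & V_INTR_MASKING_MASK)
		/* L1 virtualizes interrupt masking: consult L1's IF. */
		return !(v->hsave_rflags & X86_EFLAGS_IF);
	/* Otherwise the guest's own IF governs physical interrupts. */
	return !(v->l2_rflags & X86_EFLAGS_IF);
}

int main(void)
{
	struct model_vcpu v = {
		.l2_rflags = 0,			/* L2 runs with IF clear */
		.hsave_rflags = X86_EFLAGS_IF,	/* L1 had IF set */
		.nested_int_ctl = V_INTR_MASKING_MASK,
	};

	/* Prints blocked=0: L1's saved IF decides, not L2's. */
	printf("blocked=%d\n", model_interrupt_blocked(&v));
	return 0;
}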
Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/svm/nested.c | 5 ----- arch/x86/kvm/svm/svm.c | 2 +- 3 files changed, 1 insertion(+), 7 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index f1c4cb2d0541..019dafaddc1f 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1593,7 +1593,6 @@ enum { }; #define HF_GIF_MASK (1 << 0) -#define HF_HIF_MASK (1 << 1) #define HF_NMI_MASK (1 << 3) #define HF_IRET_MASK (1 << 4) #define HF_GUEST_MASK (1 << 5) /* VCPU is in guest-mode */ diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 9e4d24acadd7..cbe96a08b080 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -310,11 +310,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, is_intercept(svm, INTERCEPT_IRET); svm->nested.vmcb = vmcb_gpa; - if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF) - svm->vcpu.arch.hflags |= HF_HIF_MASK; - else - svm->vcpu.arch.hflags &= ~HF_HIF_MASK; - load_nested_vmcb_control(svm, &nested_vmcb->control); nested_prepare_vmcb_save(svm, nested_vmcb); nested_prepare_vmcb_control(svm); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 10c80c13b679..6a19cb3e3b1f 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3081,7 +3081,7 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) if (is_guest_mode(vcpu)) { /* As long as interrupts are being delivered... */ if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) - ? !(svm->vcpu.arch.hflags & HF_HIF_MASK) + ? !(svm->nested.hsave->save.rflags & X86_EFLAGS_IF) : !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF)) return true; From patchwork Wed May 20 17:21:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560885 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6BC12913 for ; Wed, 20 May 2020 17:22:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4BB4E207ED for ; Wed, 20 May 2020 17:22:38 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="GvB36WyC" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728259AbgETRWa (ORCPT ); Wed, 20 May 2020 13:22:30 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:33595 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728216AbgETRW3 (ORCPT ); Wed, 20 May 2020 13:22:29 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995348; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=lZ6+W3bZGAYBVzQ44Q5AecbK6sCDrUFzmYj52X2emww=; b=GvB36WyCxERgAYuM+onyE58ukO7abeUFb/fqaNAU6ERqmDiVaihvZ8pVus6OojFEhOKldo IpRoIs9mYoY1G7KVfbPsFBmx7pI7/MYO9wribphXaTsBN6zbcV4NLGCUGXpQGnIy3Z5GcE UCJjAh9W4TB3FRru35TA72iEGPk4JGw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-239-l6M-og0uPU2_iDDsCdYagA-1; Wed, 20 May 2020 13:22:19 -0400 X-MC-Unique: l6M-og0uPU2_iDDsCdYagA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 
with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A0F7F18FF696; Wed, 20 May 2020 17:22:05 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 180B879589; Wed, 20 May 2020 17:22:05 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 16/24] KVM: nSVM: split nested_vmcb_check_controls Date: Wed, 20 May 2020 13:21:37 -0400 Message-Id: <20200520172145.23284-17-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The authoritative state does not come from the VMCB once in guest mode, but KVM_SET_NESTED_STATE can still perform checks on L1's provided SVM controls because we get them from userspace. Therefore, split out a function to do them. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index cbe96a08b080..024e27bebba3 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -203,26 +203,31 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm) return true; } -static bool nested_vmcb_checks(struct vmcb *vmcb) +static bool nested_vmcb_check_controls(struct vmcb_control_area *control) { - if ((vmcb->save.efer & EFER_SVME) == 0) + if ((control->intercept & (1ULL << INTERCEPT_VMRUN)) == 0) return false; - if (((vmcb->save.cr0 & X86_CR0_CD) == 0) && - (vmcb->save.cr0 & X86_CR0_NW)) + if (control->asid == 0) return false; - if ((vmcb->control.intercept & (1ULL << INTERCEPT_VMRUN)) == 0) + if ((control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) && + !npt_enabled) return false; - if (vmcb->control.asid == 0) + return true; +} + +static bool nested_vmcb_checks(struct vmcb *vmcb) +{ + if ((vmcb->save.efer & EFER_SVME) == 0) return false; - if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) && - !npt_enabled) + if (((vmcb->save.cr0 & X86_CR0_CD) == 0) && + (vmcb->save.cr0 & X86_CR0_NW)) return false; - return true; + return nested_vmcb_check_controls(&vmcb->control); } static void load_nested_vmcb_control(struct vcpu_svm *svm, From patchwork Wed May 20 17:21:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560897 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 21B4A912 for ; Wed, 20 May 2020 17:23:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0A4B620849 for ; Wed, 20 May 2020 17:23:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="cKMNm7Xh" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728319AbgETRX2 (ORCPT ); Wed, 20 May 2020 13:23:28 -0400 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:34636 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728099AbgETRWP 
(ORCPT ); Wed, 20 May 2020 13:22:15 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995334; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=v/fUARAthG/wKTNhu7AC326un8WX6UyBf3/Gu9fdswg=; b=cKMNm7XhCuBi+uvtBoUypNwwTYurkCffssnz0n6pQspKBKt7CGuq3QI93qGKLfAr7ax/VZ 9s194XgTY2kR11Mp9C6SBTelEFiRyvO6itIqM6Y/9EA8z/64qvB6EXiMZ4caTlq52xmpgv chC5g1V+asWEUsfQN55UplFyndsyq3c= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-475-4FNQ00w-O_u5cmoec9ypVA-1; Wed, 20 May 2020 13:22:08 -0400 X-MC-Unique: 4FNQ00w-O_u5cmoec9ypVA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 555208463A3; Wed, 20 May 2020 17:22:07 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id C6E7170461; Wed, 20 May 2020 17:22:05 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 17/24] KVM: nSVM: do all MMU switch work in init/uninit functions Date: Wed, 20 May 2020 13:21:38 -0400 Message-Id: <20200520172145.23284-18-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the kvm_mmu_reset_context calls to nested_svm_init_mmu_context and nested_svm_uninit_mmu_context, so that the state of the MMU is consistent with the vcpu->arch.mmu and vcpu->arch.walk_mmu state. Remove an unnecessary kvm_mmu_load, which can wait until the first vcpu_run. 
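The invariant being enforced, that the active MMU pointers and the context reset are switched together in one init/uninit pair so callers cannot forget one half, can be sketched with a small standalone model. All names below are invented; this is not the KVM code.

#include <stdio.h>

struct model_mmu { const char *name; int generation; };

struct model_vcpu {
	struct model_mmu root_mmu, guest_mmu, nested_mmu;
	struct model_mmu *mmu;		/* MMU used for host-side shadowing */
	struct model_mmu *walk_mmu;	/* MMU used to walk guest page tables */
};

static void model_reset_context(struct model_mmu *mmu)
{
	mmu->generation++;	/* stands in for kvm_mmu_reset_context() */
}

static void model_init_nested_mmu(struct model_vcpu *v)
{
	v->mmu = &v->guest_mmu;
	v->walk_mmu = &v->nested_mmu;
	model_reset_context(v->mmu);	/* reset always follows the switch */
}

static void model_uninit_nested_mmu(struct model_vcpu *v)
{
	v->mmu = &v->root_mmu;
	v->walk_mmu = &v->root_mmu;
	model_reset_context(v->mmu);
}

int main(void)
{
	struct model_vcpu v = {
		.root_mmu = { "root", 0 },
		.guest_mmu = { "guest", 0 },
		.nested_mmu = { "nested", 0 },
	};

	v.mmu = v.walk_mmu = &v.root_mmu;
	model_init_nested_mmu(&v);
	printf("active=%s gen=%d\n", v.mmu->name, v.mmu->generation);
	model_uninit_nested_mmu(&v);
	printf("active=%s gen=%d\n", v.mmu->name, v.mmu->generation);
	return 0;
}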
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 024e27bebba3..54a3384a60f8 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -90,12 +90,17 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu) vcpu->arch.mmu->shadow_root_level = vcpu->arch.tdp_level; reset_shadow_zero_bits_mask(vcpu, vcpu->arch.mmu); vcpu->arch.walk_mmu = &vcpu->arch.nested_mmu; + + /* Guest paging mode is active - reset mmu */ + kvm_mmu_reset_context(vcpu); } static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu) { vcpu->arch.mmu = &vcpu->arch.root_mmu; vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; + + kvm_mmu_reset_context(vcpu); } void recalc_intercepts(struct vcpu_svm *svm) @@ -277,9 +282,6 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm) if (svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) nested_svm_init_mmu_context(&svm->vcpu); - /* Guest paging mode is active - reset mmu */ - kvm_mmu_reset_context(&svm->vcpu); - svm_flush_tlb(&svm->vcpu); svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = @@ -573,8 +575,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm) kvm_vcpu_unmap(&svm->vcpu, &map, true); nested_svm_uninit_mmu_context(&svm->vcpu); - kvm_mmu_reset_context(&svm->vcpu); - kvm_mmu_load(&svm->vcpu); /* * Drop what we picked up for L2 via svm_complete_interrupts() so it From patchwork Wed May 20 17:21:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560887 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 92665912 for ; Wed, 20 May 2020 17:22:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7DA1E207D8 for ; Wed, 20 May 2020 17:22:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="UrkBlrwZ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728231AbgETRW1 (ORCPT ); Wed, 20 May 2020 13:22:27 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:32823 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728190AbgETRWX (ORCPT ); Wed, 20 May 2020 13:22:23 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995342; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=X41LHQQcKbjTpdXv8eJXDQp0DsQyaAdM+H12qlhM3vA=; b=UrkBlrwZv1Vh4bxBl2S0t1mT26OWbd9ayVwow6l7wkVqDCGciTXdQ0YmVsGkm+phOFunBX gCxdVhGnt9NLI8tGRQc+orLmmNROp5fIgupoQoOIZ+vWQ2gew1vG0S64uCqE87mCddXBKK BV7cxCqcLoRfG3d/WnemcsdvmV1kwuk= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-420-JDCW1XAFNv25UkGcl_XfGg-1; Wed, 20 May 2020 13:22:21 -0400 X-MC-Unique: JDCW1XAFNv25UkGcl_XfGg-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E9FBA1902ED3; Wed, 20 May 2020 17:22:08 +0000 (UTC) Received: from 
virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7C47470461; Wed, 20 May 2020 17:22:07 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 18/24] KVM: nSVM: leave guest mode when clearing EFER.SVME Date: Wed, 20 May 2020 13:21:39 -0400 Message-Id: <20200520172145.23284-19-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org According to the AMD manual, the effect of turning off EFER.SVME while a guest is running is undefined. We make it leave guest mode immediately, similar to the effect of clearing the VMX bit in MSR_IA32_FEAT_CTL. Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 16 ++++++++++++++++ arch/x86/kvm/svm/svm.c | 8 ++++++-- arch/x86/kvm/svm/svm.h | 1 + 3 files changed, 23 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 54a3384a60f8..3e37410d0b94 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -587,6 +587,22 @@ int nested_svm_vmexit(struct vcpu_svm *svm) return 0; } +/* + * Forcibly leave nested mode in order to be able to reset the VCPU later on. + */ +void svm_leave_nested(struct vcpu_svm *svm) +{ + if (is_guest_mode(&svm->vcpu)) { + struct vmcb *hsave = svm->nested.hsave; + struct vmcb *vmcb = svm->vmcb; + + svm->nested.nested_run_pending = 0; + leave_guest_mode(&svm->vcpu); + copy_vmcb_control_area(&vmcb->control, &hsave->control); + nested_svm_uninit_mmu_context(&svm->vcpu); + } +} + static int nested_svm_exit_handled_msr(struct vcpu_svm *svm) { u32 offset, msr, value; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 6a19cb3e3b1f..09b345892fc9 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -265,6 +265,7 @@ static int get_npt_level(struct kvm_vcpu *vcpu) void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) { + struct vcpu_svm *svm = to_svm(vcpu); vcpu->arch.efer = efer; if (!npt_enabled) { @@ -275,8 +276,11 @@ void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) efer &= ~EFER_LME; } - to_svm(vcpu)->vmcb->save.efer = efer | EFER_SVME; - mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR); + if (!(efer & EFER_SVME)) + svm_leave_nested(svm); + + svm->vmcb->save.efer = efer | EFER_SVME; + mark_dirty(svm->vmcb, VMCB_CR); } static int is_external_interrupt(u32 info) diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 73986f96ba2a..4d57270cac3f 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -388,6 +388,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, struct vmcb *nested_vmcb); +void svm_leave_nested(struct vcpu_svm *svm); int nested_svm_vmrun(struct vcpu_svm *svm); void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb); int nested_svm_vmexit(struct vcpu_svm *svm); From patchwork Wed May 20 17:21:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560895 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5CAE7913 for ; Wed, 20 
May 2020 17:23:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3FDFC20849 for ; Wed, 20 May 2020 17:23:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="TqlxQ1g2" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728305AbgETRXV (ORCPT ); Wed, 20 May 2020 13:23:21 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:52428 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728101AbgETRWQ (ORCPT ); Wed, 20 May 2020 13:22:16 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995335; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=jrWjpTjo0hJ0e6dSEiSVY2FHCSjGptHFhRGJP2ZKJJk=; b=TqlxQ1g2eoRUnDFizmsN6gaC7izqFJAI6bjxC3BnahaXrKvGX0sZjcYvmH9he9WDfkyx8+ PIbBq2N6iocMumtoQQLcHiRuBK46k4c9M9hXEe9l1VZGf/1hr4amBTeVd2zguJFSgLwBR5 BxP+InJojaODcvSyB/qmzm+0eE8kNIU= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-249-_SsBqXp-N7eYy7Xf57Uqig-1; Wed, 20 May 2020 13:22:11 -0400 X-MC-Unique: _SsBqXp-N7eYy7Xf57Uqig-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 20C578730A2; Wed, 20 May 2020 17:22:10 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 1CF5E79C31; Wed, 20 May 2020 17:22:09 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 19/24] KVM: nSVM: extract svm_set_gif Date: Wed, 20 May 2020 13:21:40 -0400 Message-Id: <20200520172145.23284-20-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Extract the code that is needed to implement CLGI and STGI, so that we can run it from VMRUN and vmexit (and in the future, KVM_SET_NESTED_STATE). Skip the request for KVM_REQ_EVENT unless needed, subsuming the evaluate_pending_interrupts optimization that is found in enter_svm_guest_mode. 
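A standalone model of the consolidated GIF toggle may help: on set, the VGIF-only STGI intercept is dropped and event evaluation is requested only if something could actually be injected; on clear, the virtual interrupt window is closed. Field and helper names are invented stand-ins, not the KVM symbols.

#include <stdbool.h>
#include <stdio.h>

struct model_vcpu {
	bool gif;
	bool vgif_supported;
	bool stgi_intercepted;
	bool vintr_window_open;
	bool smi_pending, nmi_pending, intr_injectable;
	bool event_request;	/* stands in for KVM_REQ_EVENT */
};

static void model_set_gif(struct model_vcpu *v, bool value)
{
	if (value) {
		/* The STGI intercept only spots the SMI/NMI window; drop it. */
		if (v->vgif_supported)
			v->stgi_intercepted = false;
		v->gif = true;
		/* Re-evaluate events only if something is pending and injectable. */
		if (v->smi_pending || v->nmi_pending || v->intr_injectable)
			v->event_request = true;
	} else {
		v->gif = false;
		/* After CLGI no interrupts should come: close the window. */
		v->vintr_window_open = false;
	}
}

int main(void)
{
	struct model_vcpu v = {
		.vgif_supported = true,
		.stgi_intercepted = true,
		.nmi_pending = true,
	};

	model_set_gif(&v, true);
	printf("gif=%d event_request=%d\n", v.gif, v.event_request);
	model_set_gif(&v, false);
	printf("gif=%d window_open=%d\n", v.gif, v.vintr_window_open);
	return 0;
}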
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/irq.c | 1 + arch/x86/kvm/svm/nested.c | 22 ++------------------ arch/x86/kvm/svm/svm.c | 44 +++++++++++++++++++++++---------------- arch/x86/kvm/svm/svm.h | 1 + 4 files changed, 30 insertions(+), 38 deletions(-) diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c index 54f7ea68083b..99d118ffc67d 100644 --- a/arch/x86/kvm/irq.c +++ b/arch/x86/kvm/irq.c @@ -83,6 +83,7 @@ int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v) return kvm_apic_has_interrupt(v) != -1; /* LAPIC */ } +EXPORT_SYMBOL_GPL(kvm_cpu_has_injectable_intr); /* * check if there is pending interrupt without diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 3e37410d0b94..a4a9516ff8b5 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -312,30 +312,12 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm) void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, struct vmcb *nested_vmcb) { - bool evaluate_pending_interrupts = - is_intercept(svm, INTERCEPT_VINTR) || - is_intercept(svm, INTERCEPT_IRET); - svm->nested.vmcb = vmcb_gpa; load_nested_vmcb_control(svm, &nested_vmcb->control); nested_prepare_vmcb_save(svm, nested_vmcb); nested_prepare_vmcb_control(svm); - /* - * If L1 had a pending IRQ/NMI before executing VMRUN, - * which wasn't delivered because it was disallowed (e.g. - * interrupts disabled), L0 needs to evaluate if this pending - * event should cause an exit from L2 to L1 or be delivered - * directly to L2. - * - * Usually this would be handled by the processor noticing an - * IRQ/NMI window request. However, VMRUN can unblock interrupts - * by implicitly setting GIF, so force L0 to perform pending event - * evaluation by requesting a KVM_REQ_EVENT. - */ - enable_gif(svm); - if (unlikely(evaluate_pending_interrupts)) - kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); + svm_set_gif(svm, true); } int nested_svm_vmrun(struct vcpu_svm *svm) @@ -478,7 +460,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE; /* Give the current vmcb to the guest */ - disable_gif(svm); + svm_set_gif(svm, false); nested_vmcb->save.es = vmcb->save.es; nested_vmcb->save.cs = vmcb->save.cs; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 09b345892fc9..d8187d25fe04 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1977,6 +1977,30 @@ static int vmrun_interception(struct vcpu_svm *svm) return nested_svm_vmrun(svm); } +void svm_set_gif(struct vcpu_svm *svm, bool value) +{ + if (value) { + /* + * If VGIF is enabled, the STGI intercept is only added to + * detect the opening of the SMI/NMI window; remove it now. + */ + if (vgif_enabled(svm)) + clr_intercept(svm, INTERCEPT_STGI); + + enable_gif(svm); + if (svm->vcpu.arch.smi_pending || + svm->vcpu.arch.nmi_pending || + kvm_cpu_has_injectable_intr(&svm->vcpu)) + kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); + } else { + disable_gif(svm); + + /* After a CLGI no interrupts should come */ + if (!kvm_vcpu_apicv_active(&svm->vcpu)) + svm_clear_vintr(svm); + } +} + static int stgi_interception(struct vcpu_svm *svm) { int ret; @@ -1984,18 +2008,8 @@ static int stgi_interception(struct vcpu_svm *svm) if (nested_svm_check_permissions(svm)) return 1; - /* - * If VGIF is enabled, the STGI intercept is only added to - * detect the opening of the SMI/NMI window; remove it now. 
- */ - if (vgif_enabled(svm)) - clr_intercept(svm, INTERCEPT_STGI); - ret = kvm_skip_emulated_instruction(&svm->vcpu); - kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); - - enable_gif(svm); - + svm_set_gif(svm, true); return ret; } @@ -2007,13 +2021,7 @@ static int clgi_interception(struct vcpu_svm *svm) return 1; ret = kvm_skip_emulated_instruction(&svm->vcpu); - - disable_gif(svm); - - /* After a CLGI no interrupts should come */ - if (!kvm_vcpu_apicv_active(&svm->vcpu)) - svm_clear_vintr(svm); - + svm_set_gif(svm, false); return ret; } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 4d57270cac3f..6733f9036499 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -357,6 +357,7 @@ void disable_nmi_singlestep(struct vcpu_svm *svm); bool svm_smi_blocked(struct kvm_vcpu *vcpu); bool svm_nmi_blocked(struct kvm_vcpu *vcpu); bool svm_interrupt_blocked(struct kvm_vcpu *vcpu); +void svm_set_gif(struct vcpu_svm *svm, bool value); /* nested.c */ From patchwork Wed May 20 17:21:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560889 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D87AE912 for ; Wed, 20 May 2020 17:22:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C236F207D4 for ; Wed, 20 May 2020 17:22:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="bTyigmeE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728225AbgETRW0 (ORCPT ); Wed, 20 May 2020 13:22:26 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:38000 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728188AbgETRWX (ORCPT ); Wed, 20 May 2020 13:22:23 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995342; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=lqgKVLP3SP8cg1s97CF9uJwnyFVUFcqxn/d9nKhT/9k=; b=bTyigmeEHRD3u5B4vOcBDicM2RaNpdktWI5edPRIJIhmOeAXvEk4ZzN968ZFJn6K2Fqccs lFPfkAZy8qeytLmCLiAaJhSC7ktQOOangWG/vtLAqYYk2Iuu4AWQQ2vVr7OpspvCjcujhq jfWXe5uyIT6eVYd5np/4vdclfTFGi60= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-319-jmPfhBa8NMKifqEFbtDKSw-1; Wed, 20 May 2020 13:22:20 -0400 X-MC-Unique: jmPfhBa8NMKifqEFbtDKSw-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 88E5B87131F; Wed, 20 May 2020 17:22:11 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id BA0CC79C35; Wed, 20 May 2020 17:22:10 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 20/24] KVM: MMU: pass arbitrary CR0/CR4/EFER to kvm_init_shadow_mmu Date: Wed, 20 May 2020 13:21:41 -0400 Message-Id: <20200520172145.23284-21-pbonzini@redhat.com> In-Reply-To: 
<20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This allows fetching the registers from the hsave area when setting up the NPT shadow MMU, and is needed for KVM_SET_NESTED_STATE (which runs long after the CR0, CR4 and EFER values in vcpu have been switched to hold L2 guest state). Signed-off-by: Paolo Bonzini --- arch/x86/kvm/mmu.h | 2 +- arch/x86/kvm/mmu/mmu.c | 14 +++++++++----- arch/x86/kvm/svm/nested.c | 5 ++++- 3 files changed, 14 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 8a3b1bce722a..45c1ae872a34 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -57,7 +57,7 @@ void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context); void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots); -void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu); +void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer); void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, bool accessed_dirty, gpa_t new_eptp); bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index d93cb3ad8f03..50ae99ee32df 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4954,7 +4954,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only) return role; } -void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu) +void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer) { struct kvm_mmu *context = vcpu->arch.mmu; union kvm_mmu_role new_role = @@ -4963,11 +4963,11 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu) if (new_role.as_u64 == context->mmu_role.as_u64) return; - if (!is_paging(vcpu)) + if (!(cr0 & X86_CR0_PG)) nonpaging_init_context(vcpu, context); - else if (is_long_mode(vcpu)) + else if (efer & EFER_LMA) paging64_init_context(vcpu, context); - else if (is_pae(vcpu)) + else if (cr4 & X86_CR4_PAE) paging32E_init_context(vcpu, context); else paging32_init_context(vcpu, context); @@ -5045,7 +5045,11 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu) { struct kvm_mmu *context = vcpu->arch.mmu; - kvm_init_shadow_mmu(vcpu); + kvm_init_shadow_mmu(vcpu, + kvm_read_cr0_bits(vcpu, X86_CR0_PG), + kvm_read_cr4_bits(vcpu, X86_CR4_PAE), + vcpu->arch.efer); + context->get_guest_pgd = get_cr3; context->get_pdptr = kvm_pdptr_read; context->inject_page_fault = kvm_inject_page_fault; diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index a4a9516ff8b5..19b6a7c954e8 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -80,10 +80,13 @@ static unsigned long nested_svm_get_tdp_cr3(struct kvm_vcpu *vcpu) static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu) { + struct vcpu_svm *svm = to_svm(vcpu); + struct vmcb *hsave = svm->nested.hsave; + WARN_ON(mmu_is_nested(vcpu)); vcpu->arch.mmu = &vcpu->arch.guest_mmu; - kvm_init_shadow_mmu(vcpu); + kvm_init_shadow_mmu(vcpu, X86_CR0_PG, hsave->save.cr4, hsave->save.efer); vcpu->arch.mmu->get_guest_pgd = nested_svm_get_tdp_cr3; vcpu->arch.mmu->get_pdptr = nested_svm_get_tdp_pdptr; vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit; From patchwork Wed May 20 17:21:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560893 Return-Path: Received: from 
mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7B5A1913 for ; Wed, 20 May 2020 17:22:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 62E60207D4 for ; Wed, 20 May 2020 17:22:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="hvhq1ZBF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728284AbgETRWv (ORCPT ); Wed, 20 May 2020 13:22:51 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:31373 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728201AbgETRWZ (ORCPT ); Wed, 20 May 2020 13:22:25 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995343; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=sA7KU69CrSpCKwMi1qZQq/oq6j1VkGR88tbNNKk0nb4=; b=hvhq1ZBFi1SZTsZmZcVlrX1i3lC2pmJp1wFJ4NyprIKCtB1KGfG09EFqSLO2rSQloVNQdO A67PqpQ5W2UT9aD1fJy81orgPAUjPP9yuJSbM/NXJclVDnfGOHso/m7+bWC8ZPakW7apTv Sy0fsIhkHN9IN0jlDiEObLIqQS2mcPo= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-286-fKrAhJzsNRe3nmIDSTmG8Q-1; Wed, 20 May 2020 13:22:17 -0400 X-MC-Unique: fKrAhJzsNRe3nmIDSTmG8Q-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B50AA1009639; Wed, 20 May 2020 17:22:12 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id AD66760554; Wed, 20 May 2020 17:22:11 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 21/24] KVM: x86: always update CR3 in VMCB Date: Wed, 20 May 2020 13:21:42 -0400 Message-Id: <20200520172145.23284-22-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vmx_load_mmu_pgd is delaying the write of GUEST_CR3 to prepare_vmcs02 as an optimization, but this is only correct before the nested vmentry. If userspace is modifying CR3 with KVM_SET_SREGS after the VM has already been put in guest mode, the value of CR3 will not be updated. Remove the optimization, which almost never triggers anyway. This also applies to SVM, where the code was added in commit 689f3bf21628 ("KVM: x86: unify callbacks to load paging root", 2020-03-16) just to keep the two vendor-specific modules closer. 
Fixes: 04f11ef45810 ("KVM: nVMX: Always write vmcs02.GUEST_CR3 during nested VM-Enter") Fixes: 689f3bf21628 ("KVM: x86: unify callbacks to load paging root") Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 6 +----- arch/x86/kvm/svm/svm.c | 16 +++++----------- arch/x86/kvm/vmx/vmx.c | 5 +---- 3 files changed, 7 insertions(+), 20 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 19b6a7c954e8..087a04ae74e4 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -260,11 +260,7 @@ static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_v svm_set_efer(&svm->vcpu, nested_vmcb->save.efer); svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0); svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4); - if (npt_enabled) { - svm->vmcb->save.cr3 = nested_vmcb->save.cr3; - svm->vcpu.arch.cr3 = nested_vmcb->save.cr3; - } else - (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3); + (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3); svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2; kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index d8187d25fe04..56be704ffe95 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3465,7 +3465,6 @@ static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu) static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root) { struct vcpu_svm *svm = to_svm(vcpu); - bool update_guest_cr3 = true; unsigned long cr3; cr3 = __sme_set(root); @@ -3474,18 +3473,13 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root) mark_dirty(svm->vmcb, VMCB_NPT); /* Loading L2's CR3 is handled by enter_svm_guest_mode. */ - if (is_guest_mode(vcpu)) - update_guest_cr3 = false; - else if (test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail)) - cr3 = vcpu->arch.cr3; - else /* CR3 is already up-to-date. */ - update_guest_cr3 = false; + if (!test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail)) + return; + cr3 = vcpu->arch.cr3; } - if (update_guest_cr3) { - svm->vmcb->save.cr3 = cr3; - mark_dirty(svm->vmcb, VMCB_CR); - } + svm->vmcb->save.cr3 = cr3; + mark_dirty(svm->vmcb, VMCB_CR); } static int is_disabled(void) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 55712dd86baf..7daf6a50e774 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -3085,10 +3085,7 @@ void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd) spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock); } - /* Loading vmcs02.GUEST_CR3 is handled by nested VM-Enter. 
*/ - if (is_guest_mode(vcpu)) - update_guest_cr3 = false; - else if (!enable_unrestricted_guest && !is_paging(vcpu)) + if (!enable_unrestricted_guest && !is_paging(vcpu)) guest_cr3 = to_kvm_vmx(kvm)->ept_identity_map_addr; else if (test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail)) guest_cr3 = vcpu->arch.cr3; From patchwork Wed May 20 17:21:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560891 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A3DF2913 for ; Wed, 20 May 2020 17:22:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8D4EE207D3 for ; Wed, 20 May 2020 17:22:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="GFMuHPZv" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728206AbgETRWt (ORCPT ); Wed, 20 May 2020 13:22:49 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:20122 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728213AbgETRW0 (ORCPT ); Wed, 20 May 2020 13:22:26 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995344; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=yRQKE5e2wt/obw3+N38gkuEi0HDMFjMCRqa7dn5fQ8g=; b=GFMuHPZv6PAx2oRyaC5PdAcXlR5fiFvETluInCWzA7bpfx6S+KIKzuoXwB8kaaF347muFA W8KZHwtVz1bh0YT2DbvSQ7mnYpmEYUZAoRrzGx7wdQvl6QrFArQ0mvsHELxiDAlPIbL4Ix MU7ZfFel7R2Uor/IWjTwADepBXFYNcw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-18-cUNZzNMVNPm2llGsZ97AYQ-1; Wed, 20 May 2020 13:22:22 -0400 X-MC-Unique: cUNZzNMVNPm2llGsZ97AYQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C617385B690; Wed, 20 May 2020 17:22:13 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id DA32C60554; Wed, 20 May 2020 17:22:12 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 22/24] uaccess: add memzero_user Date: Wed, 20 May 2020 13:21:43 -0400 Message-Id: <20200520172145.23284-23-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This will be used from KVM. Add it to lib/ so that everyone can use it. 
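The subtle part of the helper is the byte-mask arithmetic that preserves the bytes outside the target range while still writing whole words. The userspace model below operates on an ordinary buffer instead of a __user pointer, assumes a little-endian machine, and reorganizes the flow slightly (it always reads a word before doing a masked partial write), so it is a sketch of the technique rather than a transcription of the code above.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WORD sizeof(unsigned long)

/* Mask covering the first n bytes of a word (little-endian). */
static unsigned long byte_mask(size_t n)
{
	return n ? (~0ul >> (8 * (WORD - n))) : 0;
}

static void zero_range(unsigned char *p, size_t size)
{
	size_t align = (uintptr_t)p % WORD;
	unsigned long val, keep;

	if (!size)
		return;
	p -= align;
	size += align;

	/* Head word: keep the first "align" bytes (and the tail, if it fits here). */
	memcpy(&val, p, WORD);		/* stands in for unsafe_get_user() */
	keep = byte_mask(align);
	if (size < WORD)
		keep |= ~byte_mask(size);
	val &= keep;
	memcpy(p, &val, WORD);		/* stands in for unsafe_put_user() */
	if (size < WORD)
		return;
	p += WORD;
	size -= WORD;

	/* Whole words in the middle are simply zeroed. */
	for (val = 0; size >= WORD; p += WORD, size -= WORD)
		memcpy(p, &val, WORD);

	/* Tail word: keep the bytes after the range. */
	if (size) {
		memcpy(&val, p, WORD);
		val &= ~byte_mask(size);
		memcpy(p, &val, WORD);
	}
}

int main(void)
{
	unsigned char buf[32], ref[32];

	memset(buf, 0xaa, sizeof(buf));
	memset(ref, 0xaa, sizeof(ref));
	zero_range(buf + 3, 11);	/* unaligned start and end */
	memset(ref + 3, 0, 11);
	assert(memcmp(buf, ref, sizeof(buf)) == 0);
	printf("ok\n");
	return 0;
}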
Signed-off-by: Paolo Bonzini --- include/linux/uaccess.h | 1 + lib/usercopy.c | 63 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 64 insertions(+) diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h index 67f016010aad..bd8c85b50e67 100644 --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h @@ -232,6 +232,7 @@ __copy_from_user_inatomic_nocache(void *to, const void __user *from, #endif /* ARCH_HAS_NOCACHE_UACCESS */ extern __must_check int check_zeroed_user(const void __user *from, size_t size); +extern __must_check int memzero_user(void __user *from, size_t size); /** * copy_struct_from_user: copy a struct from userspace diff --git a/lib/usercopy.c b/lib/usercopy.c index cbb4d9ec00f2..82997862bf02 100644 --- a/lib/usercopy.c +++ b/lib/usercopy.c @@ -33,6 +33,69 @@ unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n) EXPORT_SYMBOL(_copy_to_user); #endif +/** + * memzero_user: write zero bytes to a userspace buffer + * @from: Source address, in userspace. + * @size: Size of buffer. + * + * This is effectively shorthand for "memset(from, 0, size)" for + * userspace addresses. + * + * Returns: + * * 0: zeroes have been written to the buffer + * * -EFAULT: access to userspace failed. + */ +int memzero_user(void __user *from, size_t size) +{ + unsigned long val = 0; + unsigned long mask = 0; + uintptr_t align = (uintptr_t) from % sizeof(unsigned long); + + if (unlikely(size == 0)) + return 0; + + from -= align; + size += align; + + if (!user_access_begin(from, ALIGN_UP(size, sizeof(unsigned long)))) + return -EFAULT; + + if (align) { + unsafe_get_user(val, (unsigned long __user *) from, err_fault); + /* Prepare a mask to keep the first "align" bytes. */ + mask = aligned_byte_mask(align); + } + + if (size >= sizeof(unsigned long)) { + /* The mask only applies to the first full word. */ + val &= mask; + mask = 0; + do { + unsafe_put_user(val, (unsigned long __user *) from, err_fault); + from += sizeof(unsigned long); + size -= sizeof(unsigned long); + val = 0; + } while (size >= sizeof(unsigned long)); + + if (!size) + goto done; + unsafe_get_user(val, (unsigned long __user *) from, err_fault); + } + + /* Bytes after the first "size" have to be kept too. */ + mask |= ~aligned_byte_mask(size); + val &= mask; + unsafe_put_user(val, (unsigned long __user *) from, err_fault); + +done: + user_access_end(); + return 0; +err_fault: + user_access_end(); + return -EFAULT; +} +EXPORT_SYMBOL(memzero_user); + /** * check_zeroed_user: check if a userspace buffer only contains zero bytes * @from: Source address, in userspace. 
From patchwork Wed May 20 17:21:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560881 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BE152913 for ; Wed, 20 May 2020 17:22:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A6A81208A7 for ; Wed, 20 May 2020 17:22:22 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="f7lF5HkZ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728178AbgETRWV (ORCPT ); Wed, 20 May 2020 13:22:21 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:46267 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728053AbgETRWV (ORCPT ); Wed, 20 May 2020 13:22:21 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995339; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=TsINEXVSee1HllxBF910DzUXLat9UmMiPsxx+GcKTio=; b=f7lF5HkZVUgrQDw4cNPXqu818trWYzB8x0NZGQ77re8ccQ+c/lyAPtNMjU/QhnBO9YQOYO 03tiyntjRF7HRuL2Aj83YygefK6e++0krJ3r3b5iumHPZZawAbsUl1XCrpev86zmrdTxLR S5mCjz2Kza9vUdat0D5UFrEGONen12s= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-42-jFi2HqyUNf-Oqde49Zk62w-1; Wed, 20 May 2020 13:22:15 -0400 X-MC-Unique: jFi2HqyUNf-Oqde49Zk62w-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D7080107BEF5; Wed, 20 May 2020 17:22:14 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id EAE4C60554; Wed, 20 May 2020 17:22:13 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 23/24] selftests: kvm: add a SVM version of state-test Date: Wed, 20 May 2020 13:21:44 -0400 Message-Id: <20200520172145.23284-24-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The test is similar to the existing one for VMX, but simpler because we don't have to test shadow VMCS or vmptrld/vmptrst/vmclear. 
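For context, the host side of state_test.c (which this patch does not change) drives the VMX and the new SVM guest code the same way: it runs the vCPU up to each GUEST_SYNC checkpoint, saves the complete vCPU state (including nested state when KVM_CAP_NESTED_STATE is available), destroys and recreates the VM, restores the state and continues. Roughly, and simplified from the existing test loop (helper names are taken from the kvm selftests framework of this era and may differ in detail):

	struct ucall uc;
	struct kvm_x86_state *state;
	int stage;

	for (stage = 1;; stage++) {
		_vcpu_run(vm, VCPU_ID);
		if (get_ucall(vm, VCPU_ID, &uc) == UCALL_DONE)
			break;
		/* The guest reached GUEST_SYNC(stage): checkpoint it. */
		state = vcpu_save_state(vm, VCPU_ID);
		kvm_vm_release(vm);

		/* Restore the saved state into a freshly rebuilt VM. */
		kvm_vm_restart(vm, O_RDWR);
		vm_vcpu_add(vm, VCPU_ID);
		vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
		vcpu_load_state(vm, VCPU_ID, state);
	}

Each GUEST_SYNC(n) in the guest code below is one such checkpoint, so a save/restore cycle is exercised before, during and after the L2 guest runs.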
Signed-off-by: Paolo Bonzini --- .../testing/selftests/kvm/x86_64/state_test.c | 69 +++++++++++++++---- 1 file changed, 57 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/state_test.c b/tools/testing/selftests/kvm/x86_64/state_test.c index 5b1a016edf55..af8b6df6a13e 100644 --- a/tools/testing/selftests/kvm/x86_64/state_test.c +++ b/tools/testing/selftests/kvm/x86_64/state_test.c @@ -18,14 +18,46 @@ #include "kvm_util.h" #include "processor.h" #include "vmx.h" +#include "svm_util.h" #define VCPU_ID 5 +#define L2_GUEST_STACK_SIZE 256 -void l2_guest_code(void) +void svm_l2_guest_code(void) +{ + GUEST_SYNC(4); + /* Exit to L1 */ + vmcall(); + GUEST_SYNC(6); + /* Done, exit to L1 and never come back. */ + vmcall(); +} + +static void svm_l1_guest_code(struct svm_test_data *svm) +{ + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; + struct vmcb *vmcb = svm->vmcb; + + GUEST_ASSERT(svm->vmcb_gpa); + /* Prepare for L2 execution. */ + generic_svm_setup(svm, svm_l2_guest_code, + &l2_guest_stack[L2_GUEST_STACK_SIZE]); + + GUEST_SYNC(3); + run_guest(vmcb, svm->vmcb_gpa); + GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL); + GUEST_SYNC(5); + vmcb->save.rip += 3; + run_guest(vmcb, svm->vmcb_gpa); + GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL); + GUEST_SYNC(7); +} + +void vmx_l2_guest_code(void) { GUEST_SYNC(6); - /* Exit to L1 */ + /* Exit to L1 */ vmcall(); /* L1 has now set up a shadow VMCS for us. */ @@ -42,10 +74,9 @@ void l2_guest_code(void) vmcall(); } -void l1_guest_code(struct vmx_pages *vmx_pages) +static void vmx_l1_guest_code(struct vmx_pages *vmx_pages) { -#define L2_GUEST_STACK_SIZE 64 - unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; GUEST_ASSERT(vmx_pages->vmcs_gpa); GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages)); @@ -56,7 +87,7 @@ void l1_guest_code(struct vmx_pages *vmx_pages) GUEST_SYNC(4); GUEST_ASSERT(vmptrstz() == vmx_pages->vmcs_gpa); - prepare_vmcs(vmx_pages, l2_guest_code, + prepare_vmcs(vmx_pages, vmx_l2_guest_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]); GUEST_SYNC(5); @@ -106,20 +137,31 @@ void l1_guest_code(struct vmx_pages *vmx_pages) GUEST_ASSERT(vmresume()); } -void guest_code(struct vmx_pages *vmx_pages) +static u32 cpuid_ecx(u32 eax) +{ + u32 ecx; + asm volatile("cpuid" : "=a" (eax), "=c" (ecx) : "0" (eax) : "ebx", "edx"); + return ecx; +} + +static void __attribute__((__flatten__)) guest_code(void *arg) { GUEST_SYNC(1); GUEST_SYNC(2); - if (vmx_pages) - l1_guest_code(vmx_pages); + if (arg) { + if (cpuid_ecx(0x80000001) & CPUID_SVM) + svm_l1_guest_code(arg); + else + vmx_l1_guest_code(arg); + } GUEST_DONE(); } int main(int argc, char *argv[]) { - vm_vaddr_t vmx_pages_gva = 0; + vm_vaddr_t nested_gva = 0; struct kvm_regs regs1, regs2; struct kvm_vm *vm; @@ -136,8 +178,11 @@ int main(int argc, char *argv[]) vcpu_regs_get(vm, VCPU_ID, ®s1); if (kvm_check_cap(KVM_CAP_NESTED_STATE)) { - vcpu_alloc_vmx(vm, &vmx_pages_gva); - vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva); + if (kvm_get_supported_cpuid_entry(0x80000001)->ecx & CPUID_SVM) + vcpu_alloc_svm(vm, &nested_gva); + else + vcpu_alloc_vmx(vm, &nested_gva); + vcpu_args_set(vm, VCPU_ID, 1, nested_gva); } else { pr_info("will skip nested state checks\n"); vcpu_args_set(vm, VCPU_ID, 1, 0); From patchwork Wed May 20 17:21:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11560883 Return-Path: Received: from 
mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B9EBF912 for ; Wed, 20 May 2020 17:22:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9D595207F9 for ; Wed, 20 May 2020 17:22:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="UoMQXL03" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728275AbgETRWc (ORCPT ); Wed, 20 May 2020 13:22:32 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:59714 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728250AbgETRWb (ORCPT ); Wed, 20 May 2020 13:22:31 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1589995349; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=R+/Wgs4EpdRHJ+M//PKLQIAi6WzuAJ7i4HLViCKSV/E=; b=UoMQXL03YeOLIUnnpMP8ZQAs21D+4s8JkzkXnVeLjs0Q4Y4TcMZ1PID/ApCHpIkMN7yKQ6 xQnW8/KDAfRhIObrYmI1CznMGD2d8eA99j8upKFMug0hW9sb2NktZH6bPrzAPLZpiGuX2I Stjw6NlQc3sPxHp/J6700OjQ8l4EbMQ= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-440-9wcHcTE1NUifyILaOAfyZA-1; Wed, 20 May 2020 13:22:25 -0400 X-MC-Unique: 9wcHcTE1NUifyILaOAfyZA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 770E788EF2B; Wed, 20 May 2020 17:22:16 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0989E60554; Wed, 20 May 2020 17:22:14 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: vkuznets@redhat.com, Joerg Roedel Subject: [PATCH 24/24] KVM: nSVM: implement KVM_GET_NESTED_STATE and KVM_SET_NESTED_STATE Date: Wed, 20 May 2020 13:21:45 -0400 Message-Id: <20200520172145.23284-25-pbonzini@redhat.com> In-Reply-To: <20200520172145.23284-1-pbonzini@redhat.com> References: <20200520172145.23284-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Similar to VMX, the state that is captured through the currently available IOCTLs is a mix of L1 and L2 state, dependent on whether the L2 guest was running at the moment when the process was interrupted to save its state. In particular, the SVM-specific state for nested virtualization includes the L1 saved state (including the interrupt flag), the cached L2 controls, and the GIF. 
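As with the VMX format, userspace reaches this state through the existing KVM_GET_NESTED_STATE and KVM_SET_NESTED_STATE vCPU ioctls; only the header union member and the data layout are new. A rough VMM-side sketch, with error handling trimmed and an 8 KiB buffer chosen simply as a comfortable upper bound for the 128-byte header plus one 4 KiB VMCB:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Save and immediately restore the nested state of one vCPU. */
static int roundtrip_nested_state(int vcpu_fd)
{
	union {
		struct kvm_nested_state state;
		char buf[8192];
	} s;

	memset(&s, 0, sizeof(s));
	s.state.size = sizeof(s);	/* buffer capacity available to KVM */
	if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, &s.state) < 0)
		return -1;

	/*
	 * On AMD, s.state.format is KVM_STATE_NESTED_FORMAT_SVM;
	 * hdr.svm.vmcb_pa and data.svm[0] are only filled in when the
	 * vCPU was in guest mode at save time.
	 */
	if (ioctl(vcpu_fd, KVM_SET_NESTED_STATE, &s.state) < 0)
		return -1;
	return 0;
}

The flags word additionally records whether the vCPU was in guest mode, whether a nested VMRUN was still pending, and (new in this patch) whether GIF was set, which is exactly what svm_set_nested_state() validates below.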
Signed-off-by: Paolo Bonzini --- arch/x86/include/uapi/asm/kvm.h | 17 +++- arch/x86/kvm/cpuid.h | 5 ++ arch/x86/kvm/svm/nested.c | 147 ++++++++++++++++++++++++++++++++ arch/x86/kvm/svm/svm.c | 1 + arch/x86/kvm/vmx/nested.c | 5 -- arch/x86/kvm/x86.c | 3 +- 6 files changed, 171 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h index 3f3f780c8c65..12075a9de1c1 100644 --- a/arch/x86/include/uapi/asm/kvm.h +++ b/arch/x86/include/uapi/asm/kvm.h @@ -385,18 +385,22 @@ struct kvm_sync_regs { #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4) #define KVM_STATE_NESTED_FORMAT_VMX 0 -#define KVM_STATE_NESTED_FORMAT_SVM 1 /* unused */ +#define KVM_STATE_NESTED_FORMAT_SVM 1 #define KVM_STATE_NESTED_GUEST_MODE 0x00000001 #define KVM_STATE_NESTED_RUN_PENDING 0x00000002 #define KVM_STATE_NESTED_EVMCS 0x00000004 #define KVM_STATE_NESTED_MTF_PENDING 0x00000008 +#define KVM_STATE_NESTED_GIF_SET 0x00000100 #define KVM_STATE_NESTED_SMM_GUEST_MODE 0x00000001 #define KVM_STATE_NESTED_SMM_VMXON 0x00000002 #define KVM_STATE_NESTED_VMX_VMCS_SIZE 0x1000 +#define KVM_STATE_NESTED_SVM_VMCB_SIZE 0x1000 + + struct kvm_vmx_nested_state_data { __u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE]; __u8 shadow_vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE]; @@ -411,6 +415,15 @@ struct kvm_vmx_nested_state_hdr { } smm; }; +struct kvm_svm_nested_state_data { + /* Save area only used if KVM_STATE_NESTED_RUN_PENDING. */ + __u8 vmcb12[KVM_STATE_NESTED_SVM_VMCB_SIZE]; +}; + +struct kvm_svm_nested_state_hdr { + __u64 vmcb_pa; +}; + /* for KVM_CAP_NESTED_STATE */ struct kvm_nested_state { __u16 flags; @@ -419,6 +432,7 @@ struct kvm_nested_state { union { struct kvm_vmx_nested_state_hdr vmx; + struct kvm_svm_nested_state_hdr svm; /* Pad the header to 128 bytes. */ __u8 pad[120]; @@ -431,6 +445,7 @@ struct kvm_nested_state { */ union { struct kvm_vmx_nested_state_data vmx[0]; + struct kvm_svm_nested_state_data svm[0]; } data; }; diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index 63a70f6a3df3..05434cd9342f 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -303,4 +303,9 @@ static __always_inline void kvm_cpu_cap_check_and_set(unsigned int x86_feature) kvm_cpu_cap_set(x86_feature); } +static inline bool page_address_valid(struct kvm_vcpu *vcpu, gpa_t gpa) +{ + return PAGE_ALIGNED(gpa) && !(gpa >> cpuid_maxphyaddr(vcpu)); +} + #endif diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 087a04ae74e4..001d1f830076 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -25,6 +25,7 @@ #include "trace.h" #include "mmu.h" #include "x86.h" +#include "cpuid.h" #include "lapic.h" #include "svm.h" @@ -243,6 +244,8 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, { copy_vmcb_control_area(&svm->nested.ctl, control); + /* Copy it here because nested_svm_check_controls will check it. 
*/ + svm->nested.ctl.asid = control->asid; svm->nested.ctl.msrpm_base_pa &= ~0x0fffULL; svm->nested.ctl.iopm_base_pa &= ~0x0fffULL; } @@ -870,6 +873,150 @@ int nested_svm_exit_special(struct vcpu_svm *svm) return NESTED_EXIT_CONTINUE; } +static int svm_get_nested_state(struct kvm_vcpu *vcpu, + struct kvm_nested_state __user *user_kvm_nested_state, + u32 user_data_size) +{ + struct vcpu_svm *svm; + struct kvm_nested_state kvm_state = { + .flags = 0, + .format = KVM_STATE_NESTED_FORMAT_SVM, + .size = sizeof(kvm_state), + }; + struct vmcb __user *user_vmcb = (struct vmcb __user *) + &user_kvm_nested_state->data.svm[0]; + + if (!vcpu) + return kvm_state.size + KVM_STATE_NESTED_SVM_VMCB_SIZE; + + svm = to_svm(vcpu); + + if (user_data_size < kvm_state.size) + goto out; + + /* First fill in the header and copy it out. */ + if (is_guest_mode(vcpu)) { + kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb; + kvm_state.size += KVM_STATE_NESTED_SVM_VMCB_SIZE; + kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE; + + if (svm->nested.nested_run_pending) + kvm_state.flags |= KVM_STATE_NESTED_RUN_PENDING; + } + + if (gif_set(svm)) + kvm_state.flags |= KVM_STATE_NESTED_GIF_SET; + + if (copy_to_user(user_kvm_nested_state, &kvm_state, sizeof(kvm_state))) + return -EFAULT; + + if (!is_guest_mode(vcpu)) + goto out; + + /* + * Copy over the full size of the VMCB rather than just the size + * of the struct. + */ + if (memzero_user(user_vmcb, KVM_STATE_NESTED_SVM_VMCB_SIZE)) + return -EFAULT; + if (copy_to_user(&user_vmcb->control, &svm->nested.ctl, + sizeof(user_vmcb->control))) + return -EFAULT; + if (copy_to_user(&user_vmcb->save, &svm->nested.hsave->save, + sizeof(user_vmcb->save))) + return -EFAULT; + +out: + return kvm_state.size; +} + +static int svm_set_nested_state(struct kvm_vcpu *vcpu, + struct kvm_nested_state __user *user_kvm_nested_state, + struct kvm_nested_state *kvm_state) +{ + struct vcpu_svm *svm = to_svm(vcpu); + struct vmcb *hsave = svm->nested.hsave; + struct vmcb __user *user_vmcb = (struct vmcb __user *) + &user_kvm_nested_state->data.svm[0]; + struct vmcb_control_area ctl; + struct vmcb_save_area save; + u32 cr0; + + if (kvm_state->format != KVM_STATE_NESTED_FORMAT_SVM) + return -EINVAL; + + if (kvm_state->flags & ~(KVM_STATE_NESTED_GUEST_MODE | + KVM_STATE_NESTED_RUN_PENDING | + KVM_STATE_NESTED_GIF_SET)) + return -EINVAL; + + /* + * If in guest mode, vcpu->arch.efer actually refers to the L2 guest's + * EFER.SVME, but EFER.SVME still has to be 1 for VMRUN to succeed. + */ + if (!(vcpu->arch.efer & EFER_SVME)) { + /* GIF=1 and no guest mode are required if SVME=0. */ + if (kvm_state->flags != KVM_STATE_NESTED_GIF_SET) + return -EINVAL; + } + + /* SMM temporarily disables SVM, so we cannot be in guest mode. */ + if (is_smm(vcpu) && (kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) + return -EINVAL; + + if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) { + svm_leave_nested(svm); + goto out_set_gif; + } + + if (!page_address_valid(vcpu, kvm_state->hdr.svm.vmcb_pa)) + return -EINVAL; + if (kvm_state->size < sizeof(*kvm_state) + KVM_STATE_NESTED_SVM_VMCB_SIZE) + return -EINVAL; + if (copy_from_user(&ctl, &user_vmcb->control, sizeof(ctl))) + return -EFAULT; + if (copy_from_user(&save, &user_vmcb->save, sizeof(save))) + return -EFAULT; + + if (!nested_vmcb_check_controls(&ctl)) + return -EINVAL; + + /* + * Processor state contains L2 state. Check that it is + * valid for guest mode (see nested_vmcb_checks). 
+ */ + cr0 = kvm_read_cr0(vcpu); + if (((cr0 & X86_CR0_CD) == 0) && (cr0 & X86_CR0_NW)) + return -EINVAL; + + /* + * Validate host state saved from before VMRUN (see + * nested_svm_check_permissions). + * TODO: validate reserved bits for all saved state. + */ + if (!(save.cr0 & X86_CR0_PG)) + return -EINVAL; + + /* + * All checks done, we can enter guest mode. L1 control fields + * come from the nested save state. Guest state is already + * in the registers, the save area of the nested state instead + * contains saved L1 state. + */ + copy_vmcb_control_area(&hsave->control, &svm->vmcb->control); + hsave->save = save; + + svm->nested.vmcb = kvm_state->hdr.svm.vmcb_pa; + load_nested_vmcb_control(svm, &ctl); + nested_prepare_vmcb_control(svm); + +out_set_gif: + svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET)); + return 0; +} + struct kvm_x86_nested_ops svm_nested_ops = { .check_events = svm_check_nested_events, + .get_state = svm_get_nested_state, + .set_state = svm_set_nested_state, }; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 56be704ffe95..f64d071715d2 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1191,6 +1191,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) svm->avic_is_running = true; svm->nested.hsave = page_address(hsave_page); + clear_page(svm->nested.hsave); svm->msrpm = page_address(msrpm_pages); svm_vcpu_init_msrpm(svm->msrpm); diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 51ebb60e1533..106fc6fceb97 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -437,11 +437,6 @@ static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu, } } -static bool page_address_valid(struct kvm_vcpu *vcpu, gpa_t gpa) -{ - return PAGE_ALIGNED(gpa) && !(gpa >> cpuid_maxphyaddr(vcpu)); -} - static int nested_vmx_check_io_bitmap_controls(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 0001b2addc66..3e9252737f9d 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4626,7 +4626,8 @@ long kvm_arch_vcpu_ioctl(struct file *filp, if (kvm_state.flags & ~(KVM_STATE_NESTED_RUN_PENDING | KVM_STATE_NESTED_GUEST_MODE - | KVM_STATE_NESTED_EVMCS | KVM_STATE_NESTED_MTF_PENDING)) + | KVM_STATE_NESTED_EVMCS | KVM_STATE_NESTED_MTF_PENDING + | KVM_STATE_NESTED_GIF_SET)) break; /* nested_run_pending implies guest_mode. */