From patchwork Fri May 15 17:41:38 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552839
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 1/7] KVM: SVM: move map argument out of enter_svm_guest_mode
Date: Fri, 15 May 2020 13:41:38 -0400
Message-Id: <20200515174144.1727-2-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

Unmapping the nested VMCB in enter_svm_guest_mode is a bit of a wart,
since the map argument is not used elsewhere in the function.  There are
just two callers, so move the unmap to them.
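For reference, the resulting caller-side pattern matches the
svm_pre_leave_smm() hunk below: the kvm_host_map is created and released
in the same function.  A condensed, hypothetical sketch
(example_vmrun_path is not a real function in this series):

static int example_vmrun_path(struct vcpu_svm *svm, u64 vmcb_gpa)
{
        struct kvm_host_map map;
        struct vmcb *nested_vmcb;

        /* Map the guest-physical VMCB page into the host. */
        if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
                return 1;

        nested_vmcb = map.hva;
        enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb);

        /* The map is now released where it was created. */
        kvm_vcpu_unmap(&svm->vcpu, &map, true);
        return 0;
}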
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 14 ++++++--------
 arch/x86/kvm/svm/svm.c    |  3 ++-
 arch/x86/kvm/svm/svm.h    |  2 +-
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a89a166d1cb8..22f75f66084f 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -226,7 +226,7 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 }
 
 void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
-                          struct vmcb *nested_vmcb, struct kvm_host_map *map)
+                          struct vmcb *nested_vmcb)
 {
         bool evaluate_pending_interrupts =
                 is_intercept(svm, INTERCEPT_VINTR) ||
@@ -305,8 +305,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
         svm->vmcb->control.pause_filter_thresh =
                 nested_vmcb->control.pause_filter_thresh;
 
-        kvm_vcpu_unmap(&svm->vcpu, map, true);
-
         /* Enter Guest-Mode */
         enter_guest_mode(&svm->vcpu);
 
@@ -369,10 +367,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
                 nested_vmcb->control.exit_code_hi = 0;
                 nested_vmcb->control.exit_info_1  = 0;
                 nested_vmcb->control.exit_info_2  = 0;
-
-                kvm_vcpu_unmap(&svm->vcpu, &map, true);
-
-                return ret;
+                goto out;
         }
 
         trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa,
@@ -415,7 +410,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
         copy_vmcb_control_area(hsave, vmcb);
 
         svm->nested.nested_run_pending = 1;
-        enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map);
+        enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb);
 
         if (!nested_svm_vmrun_msrpm(svm)) {
                 svm->vmcb->control.exit_code    = SVM_EXIT_ERR;
@@ -426,6 +421,9 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
                 nested_svm_vmexit(svm);
         }
 
+out:
+        kvm_vcpu_unmap(&svm->vcpu, &map, true);
+
         return ret;
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4e9cd2a73ad0..dc12a03d16f6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3843,7 +3843,8 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
                 if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb), &map) == -EINVAL)
                         return 1;
                 nested_vmcb = map.hva;
-                enter_svm_guest_mode(svm, vmcb, nested_vmcb, &map);
+                enter_svm_guest_mode(svm, vmcb, nested_vmcb);
+                kvm_vcpu_unmap(&svm->vcpu, &map, true);
         }
         return 0;
 }

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5cc559ab862d..730eb7242930 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -397,7 +397,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 }
 
 void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
-                          struct vmcb *nested_vmcb, struct kvm_host_map *map);
+                          struct vmcb *nested_vmcb);
 int nested_svm_vmrun(struct vcpu_svm *svm);
 void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb);
 int nested_svm_vmexit(struct vcpu_svm *svm);

From patchwork Fri May 15 17:41:39 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552833
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 2/7] KVM: SVM: extract load_nested_vmcb_control
Date: Fri, 15 May 2020 13:41:39 -0400
Message-Id: <20200515174144.1727-3-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

When restoring SVM nested state, the control state will already be
stored in svm->nested by KVM_SET_NESTED_STATE; we will not need to fish
it out of L1's VMCB.  Pull everything into a separate function so that
the fields that are needed are documented in one place.
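To see why this helps, here is a hedged sketch of how a
KVM_SET_NESTED_STATE handler could later reuse the helper that the diff
below introduces; the restore wiring itself is not part of this series,
and svm_set_nested_state_example and its ctl argument are hypothetical:

/*
 * Hypothetical restore path: control state arrives from userspace via
 * KVM_SET_NESTED_STATE rather than from L1's VMCB, so a helper that
 * loads only the control area can serve both VMRUN and state restore.
 */
static void svm_set_nested_state_example(struct vcpu_svm *svm,
                                         struct vmcb *ctl)
{
        load_nested_vmcb_control(svm, ctl);
        /* The save area and derived CPU state are handled separately. */
}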
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 45 ++++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 22f75f66084f..e79acc852000 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -225,6 +225,27 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
         return true;
 }
 
+static void load_nested_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
+{
+        if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF)
+                svm->vcpu.arch.hflags |= HF_HIF_MASK;
+        else
+                svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
+
+        svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
+
+        svm->nested.vmcb_msrpm = nested_vmcb->control.msrpm_base_pa & ~0x0fffULL;
+        svm->nested.vmcb_iopm  = nested_vmcb->control.iopm_base_pa  & ~0x0fffULL;
+
+        /* cache intercepts */
+        svm->nested.intercept_cr         = nested_vmcb->control.intercept_cr;
+        svm->nested.intercept_dr         = nested_vmcb->control.intercept_dr;
+        svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
+        svm->nested.intercept            = nested_vmcb->control.intercept;
+
+        svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset;
+}
+
 void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
                           struct vmcb *nested_vmcb)
 {
@@ -232,15 +253,11 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
                 is_intercept(svm, INTERCEPT_VINTR) ||
                 is_intercept(svm, INTERCEPT_IRET);
 
-        if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF)
-                svm->vcpu.arch.hflags |= HF_HIF_MASK;
-        else
-                svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
+        svm->nested.vmcb = vmcb_gpa;
+        load_nested_vmcb_control(svm, nested_vmcb);
 
-        if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
-                svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
+        if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
                 nested_svm_init_mmu_context(&svm->vcpu);
-        }
 
         /* Load the nested guest state */
         svm->vmcb->save.es = nested_vmcb->save.es;
@@ -275,25 +292,15 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
         svm->vcpu.arch.dr6  = nested_vmcb->save.dr6;
         svm->vmcb->save.cpl = nested_vmcb->save.cpl;
 
-        svm->nested.vmcb_msrpm = nested_vmcb->control.msrpm_base_pa & ~0x0fffULL;
-        svm->nested.vmcb_iopm  = nested_vmcb->control.iopm_base_pa  & ~0x0fffULL;
-
-        /* cache intercepts */
-        svm->nested.intercept_cr         = nested_vmcb->control.intercept_cr;
-        svm->nested.intercept_dr         = nested_vmcb->control.intercept_dr;
-        svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
-        svm->nested.intercept            = nested_vmcb->control.intercept;
-
         svm_flush_tlb(&svm->vcpu);
-        svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
         if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
                 svm->vcpu.arch.hflags |= HF_VINTR_MASK;
         else
                 svm->vcpu.arch.hflags &= ~HF_VINTR_MASK;
 
-        svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset;
         svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;
+        svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
         svm->vmcb->control.virt_ext = nested_vmcb->control.virt_ext;
         svm->vmcb->control.int_vector = nested_vmcb->control.int_vector;
         svm->vmcb->control.int_state = nested_vmcb->control.int_state;
@@ -314,8 +321,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
          */
         recalc_intercepts(svm);
 
-        svm->nested.vmcb = vmcb_gpa;
-
         /*
          * If L1 had a pending IRQ/NMI before executing VMRUN,
          * which wasn't delivered because it was disallowed (e.g.
From patchwork Fri May 15 17:41:40 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552827
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 3/7] KVM: SVM: extract preparation of VMCB for nested run
Date: Fri, 15 May 2020 13:41:40 -0400
Message-Id: <20200515174144.1727-4-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

Split out filling svm->vmcb.save and svm->vmcb.control before VMRUN
into separate helpers.  Only the latter will be useful when restoring
nested SVM state.
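In outline, VMRUN emulation becomes a three-step sequence; this is a
condensed, annotated view of what the diff below creates (the full
helper bodies are in the patch):

void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
                          struct vmcb *nested_vmcb)
{
        ...
        svm->nested.vmcb = vmcb_gpa;
        load_nested_vmcb_control(svm, nested_vmcb);     /* cache L1's control area */
        load_nested_vmcb_save(svm, nested_vmcb);        /* guest state: VMRUN only */
        nested_prepare_vmcb_control(svm, nested_vmcb);  /* reusable on state restore */
        ...
}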
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 42 +++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e79acc852000..7807f6cc01fc 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -246,19 +246,8 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_v
         svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset;
 }
 
-void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
-                          struct vmcb *nested_vmcb)
+static void load_nested_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
 {
-        bool evaluate_pending_interrupts =
-                is_intercept(svm, INTERCEPT_VINTR) ||
-                is_intercept(svm, INTERCEPT_IRET);
-
-        svm->nested.vmcb = vmcb_gpa;
-        load_nested_vmcb_control(svm, nested_vmcb);
-
-        if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
-                nested_svm_init_mmu_context(&svm->vcpu);
-
         /* Load the nested guest state */
         svm->vmcb->save.es = nested_vmcb->save.es;
         svm->vmcb->save.cs = nested_vmcb->save.cs;
@@ -276,9 +265,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
         } else
                 (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
 
-        /* Guest paging mode is active - reset mmu */
-        kvm_mmu_reset_context(&svm->vcpu);
-
         svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2;
         kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax);
         kvm_rsp_write(&svm->vcpu, nested_vmcb->save.rsp);
@@ -291,6 +277,15 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
         svm->vmcb->save.dr7 = nested_vmcb->save.dr7;
         svm->vcpu.arch.dr6  = nested_vmcb->save.dr6;
         svm->vmcb->save.cpl = nested_vmcb->save.cpl;
+}
+
+static void nested_prepare_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
+{
+        if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
+                nested_svm_init_mmu_context(&svm->vcpu);
+
+        /* Guest paging mode is active - reset mmu */
+        kvm_mmu_reset_context(&svm->vcpu);
 
         svm_flush_tlb(&svm->vcpu);
         if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
@@ -321,6 +316,21 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
          */
         recalc_intercepts(svm);
 
+        mark_all_dirty(svm->vmcb);
+}
+
+void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
+                          struct vmcb *nested_vmcb)
+{
+        bool evaluate_pending_interrupts =
+                is_intercept(svm, INTERCEPT_VINTR) ||
+                is_intercept(svm, INTERCEPT_IRET);
+
+        svm->nested.vmcb = vmcb_gpa;
+        load_nested_vmcb_control(svm, nested_vmcb);
+        load_nested_vmcb_save(svm, nested_vmcb);
+        nested_prepare_vmcb_control(svm, nested_vmcb);
+
         /*
          * If L1 had a pending IRQ/NMI before executing VMRUN,
          * which wasn't delivered because it was disallowed (e.g.
@@ -336,8 +346,6 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
         enable_gif(svm);
         if (unlikely(evaluate_pending_interrupts))
                 kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
-
-        mark_all_dirty(svm->vmcb);
 }
 
 int nested_svm_vmrun(struct vcpu_svm *svm)

From patchwork Fri May 15 17:41:41 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552837
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 4/7] KVM: SVM: save all control fields in svm->nested
Date: Fri, 15 May 2020 13:41:41 -0400
Message-Id: <20200515174144.1727-5-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

In preparation for nested SVM save/restore, store all data that matters
from the VMCB control area into svm->nested.  It will then become part
of the nested SVM state that is saved by KVM_GET_NESTED_STATE and
restored by KVM_SET_NESTED_STATE, just like the cached vmcs12 for nVMX.
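As an illustration of where this is heading (hypothetical; the actual
KVM_GET_NESTED_STATE implementation for SVM is not in this series),
once every control field that matters lives in svm->nested, saving
nested state becomes a plain copy, much like the cached vmcs12 on the
nVMX side:

/* Hypothetical serialization sketch; example_save_nested_control is not
 * real code in this series. */
static void example_save_nested_control(struct vcpu_svm *svm,
                                        struct vmcb_control_area *dst)
{
        dst->int_ctl       = svm->nested.int_ctl;
        dst->int_vector    = svm->nested.int_vector;
        dst->event_inj     = svm->nested.event_inj;
        dst->event_inj_err = svm->nested.event_inj_err;
        dst->nested_ctl    = svm->nested.nested_ctl;
        dst->nested_cr3    = svm->nested.nested_cr3;
        /* ...plus intercepts, pause filter, and msrpm/iopm bases... */
}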
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 37 ++++++++++++++++++++++---------------
 arch/x86/kvm/svm/svm.c    |  6 ++++++
 arch/x86/kvm/svm/svm.h    | 22 ++++++++++++++++------
 3 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 7807f6cc01fc..54be341322d8 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -237,7 +237,16 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_v
         svm->nested.vmcb_msrpm = nested_vmcb->control.msrpm_base_pa & ~0x0fffULL;
         svm->nested.vmcb_iopm  = nested_vmcb->control.iopm_base_pa  & ~0x0fffULL;
 
-        /* cache intercepts */
+        svm->nested.nested_ctl = nested_vmcb->control.nested_ctl;
+        svm->nested.int_ctl = nested_vmcb->control.int_ctl;
+        svm->nested.virt_ext = nested_vmcb->control.virt_ext;
+        svm->nested.int_vector = nested_vmcb->control.int_vector;
+        svm->nested.int_state = nested_vmcb->control.int_state;
+        svm->nested.event_inj = nested_vmcb->control.event_inj;
+        svm->nested.event_inj_err = nested_vmcb->control.event_inj_err;
+        svm->nested.pause_filter_count = nested_vmcb->control.pause_filter_count;
+        svm->nested.pause_filter_thresh = nested_vmcb->control.pause_filter_thresh;
+
         svm->nested.intercept_cr         = nested_vmcb->control.intercept_cr;
         svm->nested.intercept_dr         = nested_vmcb->control.intercept_dr;
         svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
@@ -279,33 +288,31 @@ static void load_nested_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb
         svm->vmcb->save.cpl = nested_vmcb->save.cpl;
 }
 
-static void nested_prepare_vmcb_control(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
+static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
 {
-        if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
+        if (svm->nested.nested_ctl & SVM_NESTED_CTL_NP_ENABLE)
                 nested_svm_init_mmu_context(&svm->vcpu);
 
         /* Guest paging mode is active - reset mmu */
         kvm_mmu_reset_context(&svm->vcpu);
 
         svm_flush_tlb(&svm->vcpu);
-        if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
+        if (svm->nested.int_ctl & V_INTR_MASKING_MASK)
                 svm->vcpu.arch.hflags |= HF_VINTR_MASK;
         else
                 svm->vcpu.arch.hflags &= ~HF_VINTR_MASK;
 
         svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;
-        svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
-        svm->vmcb->control.virt_ext = nested_vmcb->control.virt_ext;
-        svm->vmcb->control.int_vector = nested_vmcb->control.int_vector;
-        svm->vmcb->control.int_state = nested_vmcb->control.int_state;
-        svm->vmcb->control.event_inj = nested_vmcb->control.event_inj;
-        svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err;
+        svm->vmcb->control.int_ctl = svm->nested.int_ctl | V_INTR_MASKING_MASK;
+        svm->vmcb->control.virt_ext = svm->nested.virt_ext;
+        svm->vmcb->control.int_vector = svm->nested.int_vector;
+        svm->vmcb->control.int_state = svm->nested.int_state;
+        svm->vmcb->control.event_inj = svm->nested.event_inj;
+        svm->vmcb->control.event_inj_err = svm->nested.event_inj_err;
 
-        svm->vmcb->control.pause_filter_count =
-                nested_vmcb->control.pause_filter_count;
-        svm->vmcb->control.pause_filter_thresh =
-                nested_vmcb->control.pause_filter_thresh;
+        svm->vmcb->control.pause_filter_count = svm->nested.pause_filter_count;
+        svm->vmcb->control.pause_filter_thresh = svm->nested.pause_filter_thresh;
 
         /* Enter Guest-Mode */
         enter_guest_mode(&svm->vcpu);
@@ -329,7 +336,7 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
         svm->nested.vmcb = vmcb_gpa;
         load_nested_vmcb_control(svm, nested_vmcb);
         load_nested_vmcb_save(svm, nested_vmcb);
-        nested_prepare_vmcb_control(svm, nested_vmcb);
+        nested_prepare_vmcb_control(svm);
 
         /*
          * If L1 had a pending IRQ/NMI before executing VMRUN,

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dc12a03d16f6..2b63d15328ba 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3343,6 +3343,12 @@ static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
         if (unlikely(svm->nested.exit_required))
                 return EXIT_FASTPATH_NONE;
 
+        if (unlikely(svm->nested.nested_run_pending)) {
+                /* After this vmentry, these fields will be used up. */
+                svm->nested.event_inj = 0;
+                svm->nested.event_inj_err = 0;
+        }
+
         /*
          * Disable singlestep if we're injecting an interrupt/exception.
          * We don't want our modified rflags to be pushed on the stack where

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 730eb7242930..5cabed9c733a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -90,10 +90,6 @@ struct nested_state {
         /* These are the merged vectors */
         u32 *msrpm;
 
-        /* gpa pointers to the real vectors */
-        u64 vmcb_msrpm;
-        u64 vmcb_iopm;
-
         /* A VMEXIT is required but not yet emulated */
         bool exit_required;
 
@@ -101,13 +97,27 @@ struct nested_state {
          * we cannot inject a nested vmexit yet.  */
         bool nested_run_pending;
 
-        /* cache for intercepts of the guest */
+        /* cache for control fields of the guest */
+        u64 vmcb_msrpm;
+        u64 vmcb_iopm;
+
         u32 intercept_cr;
         u32 intercept_dr;
         u32 intercept_exceptions;
         u64 intercept;
 
-        /* Nested Paging related state */
+        u32 event_inj;
+        u32 event_inj_err;
+
+        u64 virt_ext;
+        u32 int_ctl;
+        u32 int_vector;
+        u32 int_state;
+
+        u16 pause_filter_thresh;
+        u16 pause_filter_count;
+
+        u64 nested_ctl;
         u64 nested_cr3;
 };

From patchwork Fri May 15 17:41:42 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552825
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 5/7] KVM: nSVM: remove HF_VINTR_MASK
Date: Fri, 15 May 2020 13:41:42 -0400
Message-Id: <20200515174144.1727-6-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

Now that the int_ctl field is stored in svm->nested.int_ctl, we can use
it instead of vcpu->arch.hflags.

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/svm/nested.c       | 12 ++++--------
 arch/x86/kvm/svm/svm.c          |  2 +-
 arch/x86/kvm/svm/svm.h          |  4 +++-
 4 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fd78bd44b2d6..6c8417d01bf9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1594,7 +1594,6 @@ enum {
 
 #define HF_GIF_MASK             (1 << 0)
 #define HF_HIF_MASK             (1 << 1)
-#define HF_VINTR_MASK           (1 << 2)
 #define HF_NMI_MASK             (1 << 3)
 #define HF_IRET_MASK            (1 << 4)
 #define HF_GUEST_MASK           (1 << 5) /* VCPU is in guest-mode */

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 54be341322d8..e3338aa8b0a3 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -116,13 +116,13 @@ void recalc_intercepts(struct vcpu_svm *svm)
         c->intercept_exceptions = h->intercept_exceptions;
         c->intercept = h->intercept;
 
-        if (svm->vcpu.arch.hflags & HF_VINTR_MASK) {
+        if (svm->nested.int_ctl & V_INTR_MASKING_MASK) {
                 /* We only want the cr8 intercept bits of L1 */
                 c->intercept_cr &= ~(1U << INTERCEPT_CR8_READ);
                 c->intercept_cr &= ~(1U << INTERCEPT_CR8_WRITE);
 
                 /*
-                 * Once running L2 with HF_VINTR_MASK, EFLAGS.IF does not
+                 * Once running L2 with V_INTR_MASKING set, EFLAGS.IF does not
                  * affect any interrupt we may want to inject; therefore,
                  * interrupt window vmexits are irrelevant to L0.
                  */
@@ -297,10 +297,6 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
         kvm_mmu_reset_context(&svm->vcpu);
 
         svm_flush_tlb(&svm->vcpu);
-        if (svm->nested.int_ctl & V_INTR_MASKING_MASK)
-                svm->vcpu.arch.hflags |= HF_VINTR_MASK;
-        else
-                svm->vcpu.arch.hflags &= ~HF_VINTR_MASK;
 
         svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;
@@ -553,8 +549,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
         nested_vmcb->control.pause_filter_thresh =
                 svm->vmcb->control.pause_filter_thresh;
 
-        /* We always set V_INTR_MASKING and remember the old value in hflags */
-        if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK))
+        /* We always set V_INTR_MASKING and remember the old value in svm->nested */
+        if (!(svm->nested.int_ctl & V_INTR_MASKING_MASK))
                 nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 
         /* Restore the original control entries */

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2b63d15328ba..95d16aa76ebb 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3096,7 +3096,7 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 
         if (is_guest_mode(vcpu)) {
                 /* As long as interrupts are being delivered...  */
-                if ((svm->vcpu.arch.hflags & HF_VINTR_MASK)
+                if ((svm->nested.int_ctl & V_INTR_MASKING_MASK)
                     ? !(svm->vcpu.arch.hflags & HF_HIF_MASK)
                     : !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
                         return true;

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5cabed9c733a..39706aa845f2 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -388,7 +388,9 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 
 static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
 {
-        return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
+        struct vcpu_svm *svm = to_svm(vcpu);
+
+        return is_guest_mode(vcpu) && (svm->nested.int_ctl & V_INTR_MASKING_MASK);
 }
 
 static inline bool nested_exit_on_smi(struct vcpu_svm *svm)

From patchwork Fri May 15 17:41:43 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552835
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 6/7] KVM: nSVM: do not reload pause filter fields from VMCB
Date: Fri, 15 May 2020 13:41:43 -0400
Message-Id: <20200515174144.1727-7-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

These fields do not change from VMRUN to VMEXIT; there is no need to
reload them on nested VMEXIT.

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e3338aa8b0a3..ba7dedbcc985 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -544,11 +544,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
         nested_vmcb->control.event_inj = 0;
         nested_vmcb->control.event_inj_err = 0;
 
-        nested_vmcb->control.pause_filter_count =
-                svm->vmcb->control.pause_filter_count;
-        nested_vmcb->control.pause_filter_thresh =
-                svm->vmcb->control.pause_filter_thresh;
-
         /* We always set V_INTR_MASKING and remember the old value in svm->nested */
         if (!(svm->nested.int_ctl & V_INTR_MASKING_MASK))
                 nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;

From patchwork Fri May 15 17:41:44 2020
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 11552829
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cathy Avery, Liran Alon, Jim Mattson
Subject: [PATCH 7/7] KVM: SVM: introduce data structures for nested virt state
Date: Fri, 15 May 2020 13:41:44 -0400
Message-Id: <20200515174144.1727-8-pbonzini@redhat.com>
In-Reply-To: <20200515174144.1727-1-pbonzini@redhat.com>
References: <20200515174144.1727-1-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini
---
 arch/x86/include/uapi/asm/kvm.h | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 3f3f780c8c65..cdca0fd1b107 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -385,7 +385,7 @@ struct kvm_sync_regs {
 #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4)
 
 #define KVM_STATE_NESTED_FORMAT_VMX     0
-#define KVM_STATE_NESTED_FORMAT_SVM     1       /* unused */
+#define KVM_STATE_NESTED_FORMAT_SVM     1
 
 #define KVM_STATE_NESTED_GUEST_MODE     0x00000001
 #define KVM_STATE_NESTED_RUN_PENDING    0x00000002
@@ -395,8 +395,14 @@ struct kvm_sync_regs {
 #define KVM_STATE_NESTED_SMM_GUEST_MODE 0x00000001
 #define KVM_STATE_NESTED_SMM_VMXON      0x00000002
 
+#define KVM_STATE_NESTED_SVM_VMENTRY_IF 0x00000001
+#define KVM_STATE_NESTED_SVM_GIF        0x00000002
+
 #define KVM_STATE_NESTED_VMX_VMCS_SIZE  0x1000
 
+#define KVM_STATE_NESTED_SVM_VMCB_SIZE  0x1000
+
+
 struct kvm_vmx_nested_state_data {
         __u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
         __u8 shadow_vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
@@ -411,6 +417,17 @@ struct kvm_vmx_nested_state_hdr {
         } smm;
 };
 
+struct kvm_svm_nested_state_data {
+        /* Save area only used if KVM_STATE_NESTED_RUN_PENDING.  */
+        __u8 vmcb12[KVM_STATE_NESTED_SVM_VMCB_SIZE];
+};
+
+struct kvm_svm_nested_state_hdr {
+        __u64 vmcb_pa;
+
+        __u16 interrupt_flags;
+};
+
 /* for KVM_CAP_NESTED_STATE */
 struct kvm_nested_state {
         __u16 flags;
@@ -419,6 +441,7 @@ struct kvm_nested_state {
 
         union {
                 struct kvm_vmx_nested_state_hdr vmx;
+                struct kvm_svm_nested_state_hdr svm;
 
                 /* Pad the header to 128 bytes. */
                 __u8 pad[120];
@@ -431,6 +454,7 @@ struct kvm_nested_state {
          */
         union {
                 struct kvm_vmx_nested_state_data vmx[0];
+                struct kvm_svm_nested_state_data svm[0];
         } data;
 };
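For context, a hedged userspace sketch of how these structures would be
consumed once SVM gains KVM_GET_NESTED_STATE/KVM_SET_NESTED_STATE
support (the ioctls already exist for VMX; save_restore_nested_state is
a hypothetical example and error handling is minimal):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical save/restore round trip for nested SVM state. */
static int save_restore_nested_state(int vcpu_fd)
{
        struct {
                struct kvm_nested_state ns;
                struct kvm_svm_nested_state_data data;
        } state;

        memset(&state, 0, sizeof(state));
        state.ns.size = sizeof(state);  /* tell KVM how big the buffer is */

        if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, &state) < 0)
                return -1;

        if (state.ns.format == KVM_STATE_NESTED_FORMAT_SVM) {
                /* state.ns.hdr.svm.vmcb_pa identifies L1's VMCB; the
                 * vmcb12 save area in state.data is only meaningful if
                 * KVM_STATE_NESTED_RUN_PENDING is set in state.ns.flags. */
        }

        /* ...transfer the blob, then on the destination vCPU: */
        return ioctl(vcpu_fd, KVM_SET_NESTED_STATE, &state);
}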