From patchwork Mon Sep 25 08:09:04 2017
From: Ladi Prosek <lprosek@redhat.com>
To: kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, pbonzini@redhat.com
Subject: [PATCH v3 6/6] KVM: nSVM: fix SMI injection in guest mode
Date: Mon, 25 Sep 2017 10:09:04 +0200
Message-Id: <20170925080904.24850-7-lprosek@redhat.com>
In-Reply-To: <20170925080904.24850-1-lprosek@redhat.com>
References: <20170925080904.24850-1-lprosek@redhat.com>

Entering SMM while running in guest mode wasn't working very well
because several pieces of the vcpu state were left set up for nested
operation. Some of the issues observed:

* L1 was getting unexpected VM exits (using L1 interception controls
  but running in the SMM execution environment)
* the MMU was confused (walk_mmu was still set to nested_mmu)
* INTERCEPT_SMI was not emulated for L1 (KVM never injected
  SVM_EXIT_SMI)

The Intel SDM actually prescribes that the logical processor "leave VMX
operation" upon entering SMM, in 34.14.1 "Default Treatment of SMI
Delivery". AMD doesn't seem to document this, but they provide fields
in the SMM state-save area to stash the current state of SVM.
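For reference, a rough sketch of where those fields live (not part of
this patch; the macro names below are made up here, but the offsets
match the state-save map and the code in this patch, relative to
SMBASE):

	/* The AMD SMM state-save area spans SMBASE + 0xfe00 .. 0xffff. */
	#define SMM_SVM_GUEST		0xfed8	/* u64: nonzero if the SMI arrived in guest mode */
	#define SMM_SVM_GUEST_VMCB_PA	0xfee0	/* u64: guest-physical address of the nested VMCB */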
What we need to do is basically get out of guest mode for the duration
of SMM. All of this is completely transparent to L1, i.e. L1 is not
given control and no L1-observable state changes.

To avoid code duplication, this commit takes advantage of the existing
nested vmexit and run functionality, perhaps at the cost of efficiency.
To get out of guest mode, nested_svm_vmexit is called, unchanged.
Re-entering is performed using enter_svm_guest_mode.

This commit fixes running Windows Server 2016 with Hyper-V enabled in a
VM with OVMF firmware (OVMF_CODE-need-smm.fd).

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/svm.c              | 56 +++++++++++++++++++++++++++++++++++++++--
 arch/x86/kvm/x86.c              |  3 ---
 3 files changed, 57 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2445b2ba26f9..e582b8c9579b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1430,4 +1430,7 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 #endif
 }
 
+#define put_smstate(type, buf, offset, val) \
+	*(type *)((buf) + (offset) - 0x7e00) = val
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b9f19d715fc3..416ec56d6715 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5401,18 +5401,70 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
 
 static int svm_smi_allowed(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	/* Per APM Vol.2 15.22.2 "Response to SMI" */
+	if (!gif_set(svm))
+		return 0;
+
+	if (is_guest_mode(&svm->vcpu) &&
+	    svm->nested.intercept & (1ULL << INTERCEPT_SMI)) {
+		/* TODO: Might need to set exit_info_1 and exit_info_2 here */
+		svm->vmcb->control.exit_code = SVM_EXIT_SMI;
+		svm->nested.exit_required = true;
+		return 0;
+	}
+
 	return 1;
 }
 
 static int svm_prep_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
-	/* TODO: Implement */
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int ret;
+
+	if (is_guest_mode(vcpu)) {
+		/* FED8h - SVM Guest */
+		put_smstate(u64, smstate, 0x7ed8, 1);
+		/* FEE0h - SVM Guest VMCB Physical Address */
+		put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb);
+
+		svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
+		svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
+		svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+
+		ret = nested_svm_vmexit(svm);
+		if (ret)
+			return ret;
+	}
 	return 0;
 }
 
 static int svm_post_leave_smm(struct kvm_vcpu *vcpu, u64 smbase)
 {
-	/* TODO: Implement */
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb *nested_vmcb;
+	struct page *page;
+	struct {
+		u64 guest;
+		u64 vmcb;
+	} svm_state_save;
+	int r;
+
+	/* Temporarily set the SMM flag to access the SMM state-save area */
+	vcpu->arch.hflags |= HF_SMM_MASK;
+	r = kvm_vcpu_read_guest(vcpu, smbase + 0xfed8, &svm_state_save,
+				sizeof(svm_state_save));
+	vcpu->arch.hflags &= ~HF_SMM_MASK;
+	if (r)
+		return r;
+
+	if (svm_state_save.guest) {
+		nested_vmcb = nested_svm_map(svm, svm_state_save.vmcb, &page);
+		if (!nested_vmcb)
+			return 1;
+		enter_svm_guest_mode(svm, svm_state_save.vmcb, nested_vmcb, page);
+	}
 	return 0;
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5c4c49e8e660..41aa1da599bc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6481,9 +6481,6 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 
-#define put_smstate(type, buf, offset, val) \
-	*(type *)((buf) + (offset) - 0x7e00) = val
-
 static u32 enter_smm_get_segment_flags(struct kvm_segment *seg)
 {
 	u32 flags = 0;
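A note on the put_smstate() macro that moves into kvm_host.h above: the
offsets KVM passes to it appear to follow the 0x7exx convention of the
64-bit state-save map (offsets from SMBASE + 0x8000), while the staging
buffer holds only the 512-byte state-save area that is later written to
smbase + 0xfe00, hence the "- 0x7e00" rebase. A hedged sketch of the
arithmetic (the buffer name is illustrative):

	char buf[512];				/* staged copy of SMBASE + 0xfe00..0xffff */
	put_smstate(u64, buf, 0x7ed8, 1);	/* expands to *(u64 *)(buf + 0xd8) = 1 */
	/* Once buf is written to guest memory at smbase + 0xfe00, this
	 * u64 lands at smbase + 0xfed8, matching the address read back
	 * in svm_post_leave_smm() above. */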