From patchwork Wed Sep 13 14:06:28 2017
From: Ladi Prosek <lprosek@redhat.com>
To: kvm@vger.kernel.org
Cc: rkrcmar@redhat.com
Subject: [PATCH 5/5] KVM: nSVM: fix SMI injection in guest mode
Date: Wed, 13 Sep 2017 16:06:28 +0200
Message-Id: <20170913140628.7787-6-lprosek@redhat.com>
In-Reply-To: <20170913140628.7787-1-lprosek@redhat.com>
References: <20170913140628.7787-1-lprosek@redhat.com>

Entering SMM while running in guest mode wasn't working very well
because several pieces of the vcpu state were left set up for nested
operation. Some of the issues observed:

* L1 was getting unexpected VM exits (using L1 interception controls
  but running in the SMM execution environment)
* the MMU was confused (walk_mmu was still set to nested_mmu)

The Intel SDM actually prescribes that the logical processor "leave
VMX operation" upon entering SMM; see 34.14.1 "Default Treatment of
SMI Delivery". AMD doesn't seem to document this, but handling SMM in
the same fashion is a safe bet.

What we need to do is basically get out of guest mode for the duration
of SMM. All of this is completely transparent to L1, i.e.
L1 is not given control and no L1-observable state changes.

To avoid code duplication this commit takes advantage of the existing
nested vmexit and run functionality, perhaps at the cost of efficiency.
To get out of guest mode, nested_svm_vmexit is called, unchanged.
Re-entering is performed using enter_svm_guest_mode.

This commit fixes running Windows Server 2016 with Hyper-V enabled in
a VM with OVMF firmware (OVMF_CODE-need-smm.fd).

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
---
 arch/x86/kvm/svm.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0f85062..b9ae1a2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -152,6 +152,14 @@ struct nested_state {
 
 	/* Nested Paging related state */
 	u64 nested_cr3;
+
+	/* SMM related state */
+	struct {
+		/* in guest mode on SMM entry? */
+		bool guest_mode;
+		/* current nested VMCB on SMM entry */
+		u64 vmcb;
+	} smm;
 };
 
 #define MSRPM_OFFSETS 16
@@ -5365,13 +5373,38 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
 
 static int svm_prep_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
-	/* TODO: Implement */
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int ret;
+
+	svm->nested.smm.guest_mode = is_guest_mode(vcpu);
+	if (svm->nested.smm.guest_mode) {
+		svm->nested.smm.vmcb = svm->nested.vmcb;
+
+		svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
+		svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
+		svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+
+		ret = nested_svm_vmexit(svm);
+		if (ret)
+			return ret;
+	}
 	return 0;
 }
 
 static int svm_post_leave_smm(struct kvm_vcpu *vcpu)
 {
-	/* TODO: Implement */
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb *nested_vmcb;
+	struct page *page;
+
+	if (svm->nested.smm.guest_mode) {
+		nested_vmcb = nested_svm_map(svm, svm->nested.smm.vmcb, &page);
+		if (!nested_vmcb)
+			return 1;
+		enter_svm_guest_mode(svm, svm->nested.smm.vmcb, nested_vmcb, page);
+
+		svm->nested.smm.guest_mode = false;
+	}
 	return 0;
 }
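
A note for readers following the series: the two callbacks above are
only the SVM half of the fix; they are driven from the generic x86 SMM
entry/exit paths introduced by the earlier patches in this series. The
sketch below approximates that call flow. The function bodies, and the
assumption that the kvm_x86_ops members are named prep_enter_smm and
post_leave_smm to match the svm_* handlers, are illustrative guesses
rather than the actual patched code.

/*
 * Illustrative sketch only -- approximates how the generic x86 code
 * is expected to drive the two vendor callbacks added by this patch.
 * Names and surrounding details are assumptions, not the real wiring.
 */

/* On SMI delivery (roughly the enter_smm path in arch/x86/kvm/x86.c): */
static void enter_smm_sketch(struct kvm_vcpu *vcpu)
{
	char buf[512] = { 0 };

	/* ... build the SMM state-save image in buf ... */

	/*
	 * Let the vendor module drop out of guest mode first.  For SVM
	 * this performs a full nested VM exit (nested_svm_vmexit) so
	 * that SMM runs in the L1 execution environment, without L1
	 * being given control or observing any state change.
	 */
	kvm_x86_ops->prep_enter_smm(vcpu, buf);

	/* ... switch the vcpu into the SMM execution environment ... */
}

/* On RSM (emulator calling back into arch/x86/kvm/x86.c): */
static int leave_smm_sketch(struct kvm_vcpu *vcpu)
{
	/* ... restore the pre-SMM register state from SMRAM ... */

	/*
	 * If an SMI interrupted L2, re-enter guest mode: SVM maps the
	 * VMCB remembered in svm->nested.smm.vmcb and re-runs it via
	 * enter_svm_guest_mode.
	 */
	return kvm_x86_ops->post_leave_smm(vcpu);
}

Even in this sketch the key ordering is visible: the nested VM exit
happens before the SMM state is committed, and guest mode is re-entered
only after RSM has restored the pre-SMM state, so from L1's point of
view no VM exit ever occurred.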