From patchwork Wed Sep 13 14:06:24 2017
X-Patchwork-Submitter: Ladi Prosek <lprosek@redhat.com>
X-Patchwork-Id: 9951457
From: Ladi Prosek <lprosek@redhat.com>
To: kvm@vger.kernel.org
Cc: rkrcmar@redhat.com
Subject: [PATCH 1/5] KVM: x86: introduce ISA specific SMM entry/exit callbacks
Date: Wed, 13 Sep 2017 16:06:24 +0200
Message-Id: <20170913140628.7787-2-lprosek@redhat.com>
In-Reply-To: <20170913140628.7787-1-lprosek@redhat.com>
References: <20170913140628.7787-1-lprosek@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Entering and exiting SMM may require ISA specific handling under certain
circumstances. This commit adds two new callbacks with empty implementations.
Actual functionality will be added in following commits.
* prep_enter_smm() is to be called when injecting an SMI, before any
  SMM-related vcpu state has been changed
* post_leave_smm() is to be called when emulating the RSM instruction,
  after all SMM-related vcpu state has been restored

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
---
 arch/x86/include/asm/kvm_emulate.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  3 +++
 arch/x86/kvm/emulate.c             |  2 ++
 arch/x86/kvm/svm.c                 | 15 +++++++++++++++
 arch/x86/kvm/vmx.c                 | 15 +++++++++++++++
 arch/x86/kvm/x86.c                 |  6 ++++++
 6 files changed, 42 insertions(+)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index fde36f1..8d56703 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -298,6 +298,7 @@ struct x86_emulate_ctxt {
 	bool perm_ok; /* do not check permissions if true */
 	bool ud;	/* inject an #UD if host doesn't support insn */
 	bool tf;	/* TF value before instruction (after for syscall/sysret) */
+	bool left_smm;	/* post_leave_smm() needs to be called after emulation */
 
 	bool have_exception;
 	struct x86_exception exception;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 92c9032..26acdb3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1058,6 +1058,9 @@ struct kvm_x86_ops {
 	void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
 
 	void (*setup_mce)(struct kvm_vcpu *vcpu);
+
+	int (*prep_enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
+	int (*post_leave_smm)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index fb00559..5faaf85 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2601,6 +2601,8 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) &
 		~(X86EMUL_SMM_INSIDE_NMI_MASK | X86EMUL_SMM_MASK));
 
+	ctxt->left_smm = true;
+
 	return X86EMUL_CONTINUE;
 }
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index af256b7..0c5a599 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5357,6 +5357,18 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
 	vcpu->arch.mcg_cap &= 0x1ff;
 }
 
+static int svm_prep_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+	/* TODO: Implement */
+	return 0;
+}
+
+static int svm_post_leave_smm(struct kvm_vcpu *vcpu)
+{
+	/* TODO: Implement */
+	return 0;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -5467,6 +5479,9 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.deliver_posted_interrupt = svm_deliver_avic_intr,
 	.update_pi_irte = svm_update_pi_irte,
 	.setup_mce = svm_setup_mce,
+
+	.prep_enter_smm = svm_prep_enter_smm,
+	.post_leave_smm = svm_post_leave_smm,
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c6ef294..d56a528 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11629,6 +11629,18 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
 			~FEATURE_CONTROL_LMCE;
 }
 
+static int vmx_prep_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+	/* TODO: Implement */
+	return 0;
+}
+
+static int vmx_post_leave_smm(struct kvm_vcpu *vcpu)
+{
+	/* TODO: Implement */
+	return 0;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
@@ -11754,6 +11766,9 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 #endif
 
 	.setup_mce = vmx_setup_mce,
+
+	.prep_enter_smm = vmx_prep_enter_smm,
+	.post_leave_smm = vmx_post_leave_smm,
 };
 
 static int __init vmx_init(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 272320e..21ade70 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5674,6 +5674,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		ctxt->have_exception = false;
 		ctxt->exception.vector = -1;
 		ctxt->perm_ok = false;
+		ctxt->left_smm = false;
 
 		ctxt->ud = emulation_type & EMULTYPE_TRAP_UD;
 
@@ -5755,6 +5756,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		toggle_interruptibility(vcpu, ctxt->interruptibility);
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 		kvm_rip_write(vcpu, ctxt->eip);
+		if (r == EMULATE_DONE && ctxt->left_smm)
+			kvm_x86_ops->post_leave_smm(vcpu);
 		if (r == EMULATE_DONE &&
 		    (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
 			kvm_vcpu_do_singlestep(vcpu, &r);
@@ -6614,6 +6617,9 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	trace_kvm_enter_smm(vcpu->vcpu_id, vcpu->arch.smbase, true);
 	vcpu->arch.hflags |= HF_SMM_MASK;
 	memset(buf, 0, 512);
+
+	kvm_x86_ops->prep_enter_smm(vcpu, buf);
+
 	if (guest_cpuid_has_longmode(vcpu))
 		enter_smm_save_state_64(vcpu, buf);
 	else
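
For readers following the series: the hooks introduced above are intentionally
empty stubs, with the real work left to the following commits. As a rough,
purely illustrative sketch (not part of this patch; the 0x1f8 offset into the
512-byte SMM state-save buffer and the nested-guest bookkeeping are
hypothetical placeholders), a vendor backend could eventually use the two
callbacks along these lines:

	/*
	 * Hypothetical sketch only -- not part of this patch.  It shows the
	 * general shape an SVM implementation might take; the chosen offset
	 * and the nested-guest handling are made-up examples.
	 */
	static int svm_prep_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
	{
		/* Record (hypothetically) whether the SMI interrupted a nested guest. */
		smstate[0x1f8] = is_guest_mode(vcpu) ? 1 : 0;
		return 0;
	}

	static int svm_post_leave_smm(struct kvm_vcpu *vcpu)
	{
		/*
		 * After RSM has restored the architectural state, a real
		 * implementation could re-enter the interrupted nested guest
		 * here, based on what prep_enter_smm() stashed in the
		 * state-save area.
		 */
		return 0;
	}

Because prep_enter_smm() runs before enter_smm_save_state_64()/32() fills the
buffer, and post_leave_smm() runs only after em_rsm() has restored all
SMM-related vcpu state, such a backend sees the state-save area at
well-defined points on both the entry and the exit path.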