From patchwork Fri Mar 25 09:29:16 2011
X-Patchwork-Submitter: Joerg Roedel <joerg.roedel@amd.com>
X-Patchwork-Id: 661471
From: Joerg Roedel <joerg.roedel@amd.com>
To: Avi Kivity, Marcelo Tosatti
CC: kvm@vger.kernel.org, Joerg Roedel <joerg.roedel@amd.com>
Subject: [PATCH 13/13] KVM: SVM: Remove nested sel_cr0_write handling code
Date: Fri, 25 Mar 2011 10:29:16 +0100
Message-ID: <1301045356-25257-14-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1301045356-25257-1-git-send-email-joerg.roedel@amd.com>
References: <1301045356-25257-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1672e3c..37c0060 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -93,14 +93,6 @@ struct nested_state {
 	/* A VMEXIT is required but not yet emulated */
 	bool exit_required;
 
-	/*
-	 * If we vmexit during an instruction emulation we need this to restore
-	 * the l1 guest rip after the emulation
-	 */
-	unsigned long vmexit_rip;
-	unsigned long vmexit_rsp;
-	unsigned long vmexit_rax;
-
 	/* cache for intercepts of the guest */
 	u32 intercept_cr;
 	u32 intercept_dr;
@@ -1365,31 +1357,6 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (is_guest_mode(vcpu)) {
-		/*
-		 * We are here because we run in nested mode, the host kvm
-		 * intercepts cr0 writes but the l1 hypervisor does not.
-		 * But the L1 hypervisor may intercept selective cr0 writes.
-		 * This needs to be checked here.
-		 */
-		unsigned long old, new;
-
-		/* Remove bits that would trigger a real cr0 write intercept */
-		old = vcpu->arch.cr0 & SVM_CR0_SELECTIVE_MASK;
-		new = cr0 & SVM_CR0_SELECTIVE_MASK;
-
-		if (old == new) {
-			/* cr0 write with ts and mp unchanged */
-			svm->vmcb->control.exit_code = SVM_EXIT_CR0_SEL_WRITE;
-			if (nested_svm_exit_handled(svm) == NESTED_EXIT_DONE) {
-				svm->nested.vmexit_rip = kvm_rip_read(vcpu);
-				svm->nested.vmexit_rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
-				svm->nested.vmexit_rax = kvm_register_read(vcpu, VCPU_REGS_RAX);
-				return;
-			}
-		}
-	}
-
 #ifdef CONFIG_X86_64
 	if (vcpu->arch.efer & EFER_LME) {
 		if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
@@ -2676,6 +2643,29 @@ static int emulate_on_interception(struct vcpu_svm *svm)
 	return emulate_instruction(&svm->vcpu, 0) == EMULATE_DONE;
 }
 
+bool check_selective_cr0_intercepted(struct vcpu_svm *svm, unsigned long val)
+{
+	unsigned long cr0 = svm->vcpu.arch.cr0;
+	bool ret = false;
+	u64 intercept;
+
+	intercept = svm->nested.intercept;
+
+	if (!is_guest_mode(&svm->vcpu) ||
+	    (!(intercept & (1ULL << INTERCEPT_SELECTIVE_CR0))))
+		return false;
+
+	cr0 &= ~SVM_CR0_SELECTIVE_MASK;
+	val &= ~SVM_CR0_SELECTIVE_MASK;
+
+	if (cr0 ^ val) {
+		svm->vmcb->control.exit_code = SVM_EXIT_CR0_SEL_WRITE;
+		ret = (nested_svm_exit_handled(svm) == NESTED_EXIT_DONE);
+	}
+
+	return ret;
+}
+
 #define CR_VALID (1ULL << 63)
 
 static int cr_interception(struct vcpu_svm *svm)
@@ -2699,7 +2689,8 @@ static int cr_interception(struct vcpu_svm *svm)
 		val = kvm_register_read(&svm->vcpu, reg);
 		switch (cr) {
 		case 0:
-			err = kvm_set_cr0(&svm->vcpu, val);
+			if (!check_selective_cr0_intercepted(svm, val))
+				err = kvm_set_cr0(&svm->vcpu, val);
 			break;
 		case 3:
 			err = kvm_set_cr3(&svm->vcpu, val);
@@ -2744,23 +2735,6 @@ static int cr_interception(struct vcpu_svm *svm)
 	return 1;
 }
 
-static int cr0_write_interception(struct vcpu_svm *svm)
-{
-	struct kvm_vcpu *vcpu = &svm->vcpu;
-	int r;
-
-	r = cr_interception(svm);
-
-	if (svm->nested.vmexit_rip) {
-		kvm_register_write(vcpu, VCPU_REGS_RIP, svm->nested.vmexit_rip);
-		kvm_register_write(vcpu, VCPU_REGS_RSP, svm->nested.vmexit_rsp);
-		kvm_register_write(vcpu, VCPU_REGS_RAX, svm->nested.vmexit_rax);
-		svm->nested.vmexit_rip = 0;
-	}
-
-	return r;
-}
-
 static int dr_interception(struct vcpu_svm *svm)
 {
 	int reg, dr;
@@ -3048,7 +3022,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm) = {
 	[SVM_EXIT_READ_CR4]			= cr_interception,
 	[SVM_EXIT_READ_CR8]			= cr_interception,
 	[SVM_EXIT_CR0_SEL_WRITE]		= emulate_on_interception,
-	[SVM_EXIT_WRITE_CR0]			= cr0_write_interception,
+	[SVM_EXIT_WRITE_CR0]			= cr_interception,
 	[SVM_EXIT_WRITE_CR3]			= cr_interception,
 	[SVM_EXIT_WRITE_CR4]			= cr_interception,
 	[SVM_EXIT_WRITE_CR8]			= cr8_write_interception,
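
A note for readers tracing the bit arithmetic in the new
check_selective_cr0_intercepted() helper: on AMD SVM the selective CR0
write intercept is meant to fire only when a write changes some bit
other than CR0.TS or CR0.MP, and that is exactly what masking both the
old and new values with ~SVM_CR0_SELECTIVE_MASK and XOR-ing the results
tests. The standalone sketch below is not part of the patch; the CR0
bit constants mirror the kernel's arch/x86 definitions, and
is_selective_cr0_write() is a hypothetical name used only to
demonstrate the check in isolation:

#include <stdbool.h>
#include <stdio.h>

/* CR0 bit positions as architecturally defined (MP = bit 1, TS = bit 3). */
#define X86_CR0_PE (1UL << 0)
#define X86_CR0_MP (1UL << 1)
#define X86_CR0_TS (1UL << 3)

/* Bits the selective CR0 write intercept ignores. */
#define SVM_CR0_SELECTIVE_MASK (X86_CR0_TS | X86_CR0_MP)

/*
 * Hypothetical stand-in for the patch's check: true when a CR0 write
 * changes any bit other than TS or MP, i.e. when the write would be
 * reflected to the L1 hypervisor as SVM_EXIT_CR0_SEL_WRITE.
 */
static bool is_selective_cr0_write(unsigned long old_cr0, unsigned long new_cr0)
{
	old_cr0 &= ~SVM_CR0_SELECTIVE_MASK;	/* drop TS and MP */
	new_cr0 &= ~SVM_CR0_SELECTIVE_MASK;
	return (old_cr0 ^ new_cr0) != 0;	/* any other bit changed? */
}

int main(void)
{
	/* Setting only TS: prints 0, the selective intercept stays quiet. */
	printf("%d\n", is_selective_cr0_write(X86_CR0_PE,
					      X86_CR0_PE | X86_CR0_TS));
	/* Clearing PE: prints 1, a non-TS/MP bit changed. */
	printf("%d\n", is_selective_cr0_write(X86_CR0_PE | X86_CR0_TS,
					      X86_CR0_TS));
	return 0;
}

This also illustrates why cr_interception() performs the check before
calling kvm_set_cr0(): when the write is one the L1 hypervisor
intercepts, nested_svm_exit_handled() forwards the exit to L1, and the
write must not be applied to the guest's CR0 at all.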