From patchwork Mon Nov 29 15:38:09 2010
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 364422
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
CC: Joerg Roedel
Subject: [PATCH 2/3] KVM: SVM: Make Use of the generic guest-mode functions
Date: Mon, 29 Nov 2010 16:38:09 +0100
Message-ID: <1291045090-2714-3-git-send-email-joerg.roedel@amd.com>
In-Reply-To: <1291045090-2714-1-git-send-email-joerg.roedel@amd.com>
References: <1291045090-2714-1-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.1
X-Mailing-List: kvm@vger.kernel.org
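This patch replaces the SVM-private is_nested() helper with the generic
guest-mode functions, so that "vcpu is running an L2 guest" is tracked by
common code instead of being inferred from the SVM-only nested.vmcb pointer.

For reference, a minimal sketch of what the generic helpers used below
(introduced in patch 1/3 of this series, not shown in this mail) presumably
look like. They would piggyback on vcpu->arch.hflags, which SVM already uses
for HF_GIF_MASK; the HF_GUEST_MASK flag name is an assumption here:

static inline void kvm_vcpu_enter_gm(struct kvm_vcpu *vcpu)
{
	/* Mark this vcpu as executing nested (L2) guest code. */
	vcpu->arch.hflags |= HF_GUEST_MASK;
}

static inline void kvm_vcpu_leave_gm(struct kvm_vcpu *vcpu)
{
	/* The vcpu is back to running the L1 hypervisor. */
	vcpu->arch.hflags &= ~HF_GUEST_MASK;
}

static inline bool kvm_vcpu_is_gm(struct kvm_vcpu *vcpu)
{
	return !!(vcpu->arch.hflags & HF_GUEST_MASK);
}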
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2fd2f4d..3376fca 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -192,11 +192,6 @@ static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 	return container_of(vcpu, struct vcpu_svm, vcpu);
 }
 
-static inline bool is_nested(struct vcpu_svm *svm)
-{
-	return svm->nested.vmcb;
-}
-
 static inline void enable_gif(struct vcpu_svm *svm)
 {
 	svm->vcpu.arch.hflags |= HF_GIF_MASK;
@@ -727,7 +722,7 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 g_tsc_offset = 0;
 
-	if (is_nested(svm)) {
+	if (kvm_vcpu_is_gm(vcpu)) {
 		g_tsc_offset = svm->vmcb->control.tsc_offset -
 			       svm->nested.hsave->control.tsc_offset;
 		svm->nested.hsave->control.tsc_offset = offset;
@@ -741,7 +736,7 @@ static void svm_adjust_tsc_offset(struct kvm_vcpu *vcpu, s64 adjustment)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	svm->vmcb->control.tsc_offset += adjustment;
-	if (is_nested(svm))
+	if (kvm_vcpu_is_gm(vcpu))
 		svm->nested.hsave->control.tsc_offset += adjustment;
 }
 
@@ -1209,7 +1204,7 @@ static void update_cr0_intercept(struct vcpu_svm *svm)
 	if (gcr0 == *hcr0 && svm->vcpu.fpu_active) {
 		vmcb->control.intercept_cr_read &= ~INTERCEPT_CR0_MASK;
 		vmcb->control.intercept_cr_write &= ~INTERCEPT_CR0_MASK;
-		if (is_nested(svm)) {
+		if (kvm_vcpu_is_gm(&svm->vcpu)) {
 			struct vmcb *hsave = svm->nested.hsave;
 
 			hsave->control.intercept_cr_read &= ~INTERCEPT_CR0_MASK;
@@ -1220,7 +1215,7 @@ static void update_cr0_intercept(struct vcpu_svm *svm)
 	} else {
 		svm->vmcb->control.intercept_cr_read |= INTERCEPT_CR0_MASK;
 		svm->vmcb->control.intercept_cr_write |= INTERCEPT_CR0_MASK;
-		if (is_nested(svm)) {
+		if (kvm_vcpu_is_gm(&svm->vcpu)) {
 			struct vmcb *hsave = svm->nested.hsave;
 
 			hsave->control.intercept_cr_read |= INTERCEPT_CR0_MASK;
@@ -1233,7 +1228,7 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (is_nested(svm)) {
+	if (kvm_vcpu_is_gm(vcpu)) {
 		/*
 		 * We are here because we run in nested mode, the host kvm
 		 * intercepts cr0 writes but the l1 hypervisor does not.
@@ -1471,7 +1466,7 @@ static void svm_fpu_activate(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u32 excp;
 
-	if (is_nested(svm)) {
+	if (kvm_vcpu_is_gm(vcpu)) {
 		u32 h_excp, n_excp;
 
 		h_excp = svm->nested.hsave->control.intercept_exceptions;
@@ -1700,7 +1695,7 @@ static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 {
 	int vmexit;
 
-	if (!is_nested(svm))
+	if (!kvm_vcpu_is_gm(&svm->vcpu))
 		return 0;
 
 	svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
@@ -1718,7 +1713,7 @@ static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 /* This function returns true if it is save to enable the irq window */
 static inline bool nested_svm_intr(struct vcpu_svm *svm)
 {
-	if (!is_nested(svm))
+	if (!kvm_vcpu_is_gm(&svm->vcpu))
 		return true;
 
 	if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK))
@@ -1757,7 +1752,7 @@ static inline bool nested_svm_intr(struct vcpu_svm *svm)
 /* This function returns true if it is save to enable the nmi window */
 static inline bool nested_svm_nmi(struct vcpu_svm *svm)
 {
-	if (!is_nested(svm))
+	if (!kvm_vcpu_is_gm(&svm->vcpu))
 		return true;
 
 	if (!(svm->nested.intercept & (1ULL << INTERCEPT_NMI)))
@@ -1994,7 +1989,8 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 	if (!nested_vmcb)
 		return 1;
 
-	/* Exit nested SVM mode */
+	/* Exit Guest-Mode */
+	kvm_vcpu_leave_gm(&svm->vcpu);
 	svm->nested.vmcb = 0;
 
 	/* Give the current vmcb to the guest */
@@ -2302,7 +2298,9 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 
 	nested_svm_unmap(page);
 
-	/* nested_vmcb is our indicator if nested SVM is activated */
+	/* Enter Guest-Mode */
+	kvm_vcpu_enter_gm(&svm->vcpu);
+
 	svm->nested.vmcb = vmcb_gpa;
 
 	enable_gif(svm);
@@ -2588,7 +2586,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
 	case MSR_IA32_TSC: {
 		u64 tsc_offset;
 
-		if (is_nested(svm))
+		if (kvm_vcpu_is_gm(vcpu))
 			tsc_offset = svm->nested.hsave->control.tsc_offset;
 		else
 			tsc_offset = svm->vmcb->control.tsc_offset;
@@ -3002,7 +3000,7 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
-	if (is_nested(svm)) {
+	if (kvm_vcpu_is_gm(vcpu)) {
 		int vmexit;
 
 		trace_kvm_nested_vmexit(svm->vmcb->save.rip, exit_code,
@@ -3109,7 +3107,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (is_nested(svm) && (vcpu->arch.hflags & HF_VINTR_MASK))
+	if (kvm_vcpu_is_gm(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
 		return;
 
 	if (irr == -1)
@@ -3163,7 +3161,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
 
 	ret = !!(vmcb->save.rflags & X86_EFLAGS_IF);
 
-	if (is_nested(svm))
+	if (kvm_vcpu_is_gm(vcpu))
 		return ret && !(svm->vcpu.arch.hflags & HF_VINTR_MASK);
 
 	return ret;
@@ -3220,7 +3218,7 @@ static inline void sync_cr8_to_lapic(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (is_nested(svm) && (vcpu->arch.hflags & HF_VINTR_MASK))
+	if (kvm_vcpu_is_gm(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
 		return;
 
 	if (!(svm->vmcb->control.intercept_cr_write & INTERCEPT_CR8_MASK)) {
@@ -3234,7 +3232,7 @@ static inline void sync_lapic_to_cr8(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 cr8;
 
-	if (is_nested(svm) && (vcpu->arch.hflags & HF_VINTR_MASK))
+	if (kvm_vcpu_is_gm(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
 		return;
 
 	cr8 = kvm_get_cr8(vcpu);
@@ -3614,7 +3612,7 @@ static void svm_fpu_deactivate(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	svm->vmcb->control.intercept_exceptions |= 1 << NM_VECTOR;
-	if (is_nested(svm))
+	if (kvm_vcpu_is_gm(vcpu))
 		svm->nested.hsave->control.intercept_exceptions |= 1 << NM_VECTOR;
 	update_cr0_intercept(svm);
 }
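The subtlest hunk above is the one in svm_write_tsc_offset(): while the vcpu
is in guest-mode, vmcb holds L2's combined TSC offset and nested.hsave holds
L1's, so writing a new host-side offset must preserve the delta that the L1
hypervisor programmed for L2 (g_tsc_offset). A standalone sketch of that
arithmetic, with made-up numbers purely for illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Stand-ins for hsave->control.tsc_offset and vmcb->control.tsc_offset. */
	uint64_t l1_offset  = 1000;
	uint64_t l2_offset  = 1500;
	uint64_t new_offset = 4000;	/* offset requested by common KVM code */

	/* The delta the L1 hypervisor programmed for its L2 guest. */
	uint64_t g_tsc_offset = l2_offset - l1_offset;	/* 500 */

	l1_offset = new_offset;			/* hsave gets the new base offset */
	l2_offset = new_offset + g_tsc_offset;	/* vmcb keeps L1's delta on top */

	printf("L1 offset: %llu, L2 offset: %llu\n",
	       (unsigned long long)l1_offset,
	       (unsigned long long)l2_offset);
	return 0;
}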