From patchwork Fri Aug 7 09:49:38 2009
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 39836
From: Joerg Roedel
To: Avi Kivity
CC: Alexander Graf, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Joerg Roedel
Subject: [PATCH 11/21] kvm/svm: get rid of nested_svm_vmexit_real
Date: Fri, 7 Aug 2009 11:49:38 +0200
Message-ID: <1249638588-10982-12-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.6.3.3
In-Reply-To: <1249638588-10982-1-git-send-email-joerg.roedel@amd.com>
References: <1249638588-10982-1-git-send-email-joerg.roedel@amd.com>

This patch is the starting point of removing nested_svm_do from the
nested svm code.
The nested_svm_do function basically maps two guest physical pages to
host virtual addresses and calls a passed function on them. This
function-pointer based code flow is hard to read and not the best
technical solution here.
As a side effect, this patch introduces the nested_svm_[un]map helper
functions.

Signed-off-by: Joerg Roedel
---
 arch/x86/kvm/svm.c |   52 ++++++++++++++++++++++++++++++++++++++++------------
 1 files changed, 40 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a85b0a2..1753a64 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1393,6 +1393,39 @@ static inline int nested_svm_intr(struct vcpu_svm *svm)
 	return 0;
 }
 
+static void *nested_svm_map(struct vcpu_svm *svm, u64 gpa, enum km_type idx)
+{
+	struct page *page;
+
+	down_read(&current->mm->mmap_sem);
+	page = gfn_to_page(svm->vcpu.kvm, gpa >> PAGE_SHIFT);
+	up_read(&current->mm->mmap_sem);
+
+	if (is_error_page(page))
+		goto error;
+
+	return kmap_atomic(page, idx);
+
+error:
+	kvm_release_page_clean(page);
+	kvm_inject_gp(&svm->vcpu, 0);
+
+	return NULL;
+}
+
+static void nested_svm_unmap(void *addr, enum km_type idx)
+{
+	struct page *page;
+
+	if (!addr)
+		return;
+
+	page = kmap_atomic_to_page(addr);
+
+	kunmap_atomic(addr, idx);
+	kvm_release_page_dirty(page);
+}
+
 static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 gpa)
 {
 	struct page *page;
@@ -1600,13 +1633,16 @@ static inline void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *fr
 	dst->lbr_ctl = from->lbr_ctl;
 }
 
-static int nested_svm_vmexit_real(struct vcpu_svm *svm, void *arg1,
-				  void *arg2, void *opaque)
+static int nested_svm_vmexit(struct vcpu_svm *svm)
 {
-	struct vmcb *nested_vmcb = (struct vmcb *)arg1;
+	struct vmcb *nested_vmcb;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
 
+	nested_vmcb = nested_svm_map(svm, svm->nested.vmcb, KM_USER0);
+	if (!nested_vmcb)
+		return 1;
+
 	/* Give the current vmcb to the guest */
 	disable_gif(svm);
 
@@ -1681,15 +1717,7 @@ static int nested_svm_vmexit_real(struct vcpu_svm *svm, void *arg1,
 	/* Exit nested SVM mode */
 	svm->nested.vmcb = 0;
 
-	return 0;
-}
-
-static int nested_svm_vmexit(struct vcpu_svm *svm)
-{
-	nsvm_printk("VMexit\n");
-	if (nested_svm_do(svm, svm->nested.vmcb, 0,
-			  NULL, nested_svm_vmexit_real))
-		return 1;
+	nested_svm_unmap(nested_vmcb, KM_USER0);
 
 	kvm_mmu_reset_context(&svm->vcpu);
 	kvm_mmu_load(&svm->vcpu);
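
For readers following the series without the surrounding file: below is a
small stand-alone C sketch (not KVM code; guest_page, demo_map, demo_unmap
and handle_vmexit are made-up stand-ins) contrasting the callback-driven
pattern that nested_svm_do represents with the explicit map/work/unmap
sequence the new helpers make possible.

/*
 * Illustrative sketch only. The names here are invented for the example
 * and do not exist in the kernel; they just mirror the shape of the
 * old callback flow versus the new direct flow.
 */
#include <stdio.h>
#include <stdlib.h>

struct guest_page { char data[4096]; };

/* Stand-in for "map a guest physical page to a host virtual address". */
static void *demo_map(unsigned long gpa)
{
	(void)gpa;
	return calloc(1, sizeof(struct guest_page));
}

static void demo_unmap(void *addr)
{
	free(addr);
}

/* Old style: a generic helper maps the page and drives a callback,
 * similar in spirit to nested_svm_do(). */
static int do_with_mapped_page(unsigned long gpa,
			       int (*fn)(void *page, void *opaque),
			       void *opaque)
{
	void *page = demo_map(gpa);
	int ret;

	if (!page)
		return 1;
	ret = fn(page, opaque);
	demo_unmap(page);
	return ret;
}

static int handle_vmexit(void *page, void *opaque)
{
	(void)page;
	(void)opaque;
	printf("vmexit handled via callback\n");
	return 0;
}

int main(void)
{
	void *page;

	/* Old flow: the real work hides behind a function pointer. */
	do_with_mapped_page(0x1000, handle_vmexit, NULL);

	/* New flow: map, do the work inline, unmap. The control flow is
	 * visible at the call site, which is what the patch is after. */
	page = demo_map(0x1000);
	if (!page)
		return 1;
	printf("vmexit handled inline\n");
	demo_unmap(page);
	return 0;
}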