From patchwork Fri Sep 10 15:31:00 2010
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 169742
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
Cc: Joerg Roedel
Subject: [PATCH 23/29] KVM: MMU: Allow long mode shadows for legacy page tables
Date: Fri, 10 Sep 2010 17:31:00 +0200
Message-ID: <1284132667-18620-24-git-send-email-joerg.roedel@amd.com>
In-Reply-To: <1284132667-18620-1-git-send-email-joerg.roedel@amd.com>
References: <1284132667-18620-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 33946ed..8647578 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -256,6 +256,7 @@ struct kvm_mmu {
 	bool direct_map;
 
 	u64 *pae_root;
+	u64 *lm_root;
 	u64 rsvd_bits_mask[2][4];
 
 	u64 pdptrs[4]; /* pae */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9cd5a71..dd76765 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1504,6 +1504,12 @@ static void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator,
 	iterator->addr = addr;
 	iterator->shadow_addr = vcpu->arch.mmu.root_hpa;
 	iterator->level = vcpu->arch.mmu.shadow_root_level;
+
+	if (iterator->level == PT64_ROOT_LEVEL &&
+	    vcpu->arch.mmu.root_level < PT64_ROOT_LEVEL &&
+	    !vcpu->arch.mmu.direct_map)
+		--iterator->level;
+
 	if (iterator->level == PT32E_ROOT_LEVEL) {
 		iterator->shadow_addr
 			= vcpu->arch.mmu.pae_root[(addr >> 30) & 3];
@@ -2314,7 +2320,9 @@ static void mmu_free_roots(struct kvm_vcpu *vcpu)
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
 	spin_lock(&vcpu->kvm->mmu_lock);
-	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
+	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL &&
+	    (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL ||
+	     vcpu->arch.mmu.direct_map)) {
 		hpa_t root = vcpu->arch.mmu.root_hpa;
 
 		sp = page_header(root);
@@ -2394,10 +2402,10 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 
 static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 {
-	int i;
-	gfn_t root_gfn;
 	struct kvm_mmu_page *sp;
-	u64 pdptr;
+	u64 pdptr, pm_mask;
+	gfn_t root_gfn;
+	int i;
 
 	root_gfn = vcpu->arch.mmu.get_cr3(vcpu) >> PAGE_SHIFT;
 
@@ -2426,8 +2434,13 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 
 	/*
 	 * We shadow a 32 bit page table. This may be a legacy 2-level
-	 * or a PAE 3-level page table.
+	 * or a PAE 3-level page table. In either case we need to be aware that
+	 * the shadow page table may be a PAE or a long mode page table.
 	 */
+	pm_mask = PT_PRESENT_MASK;
+	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL)
+		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
+
 	for (i = 0; i < 4; ++i) {
 		hpa_t root = vcpu->arch.mmu.pae_root[i];
 
@@ -2451,9 +2464,35 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		++sp->root_count;
 		spin_unlock(&vcpu->kvm->mmu_lock);
 
-		vcpu->arch.mmu.pae_root[i] = root | PT_PRESENT_MASK;
+		vcpu->arch.mmu.pae_root[i] = root | pm_mask;
+		vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.pae_root);
 	}
-	vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.pae_root);
+
+	/*
+	 * If we shadow a 32 bit page table with a long mode page
+	 * table we enter this path.
+	 */
+	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
+		if (vcpu->arch.mmu.lm_root == NULL) {
+			/*
+			 * The additional page necessary for this is only
+			 * allocated on demand.
+			 */
+
+			u64 *lm_root;
+
+			lm_root = (void*)get_zeroed_page(GFP_KERNEL);
+			if (lm_root == NULL)
+				return 1;
+
+			lm_root[0] = __pa(vcpu->arch.mmu.pae_root) | pm_mask;
+
+			vcpu->arch.mmu.lm_root = lm_root;
+		}
+
+		vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.lm_root);
+	}
+
 	return 0;
 }
 
@@ -2470,9 +2509,12 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu)
 	int i;
 	struct kvm_mmu_page *sp;
 
+	if (vcpu->arch.mmu.direct_map)
+		return;
+
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
-	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
+	if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
 		hpa_t root = vcpu->arch.mmu.root_hpa;
 		sp = page_header(root);
 		mmu_sync_children(vcpu, sp);
@@ -3253,6 +3295,8 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
 static void free_mmu_pages(struct kvm_vcpu *vcpu)
 {
 	free_page((unsigned long)vcpu->arch.mmu.pae_root);
+	if (vcpu->arch.mmu.lm_root != NULL)
+		free_page((unsigned long)vcpu->arch.mmu.lm_root);
 }
 
 static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
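
For reference (not part of the patch): below is a minimal user-space sketch of the root chaining that the mmu_alloc_shadow_roots() hunk sets up when a legacy 2-level or PAE guest is shadowed by a long mode page table. A single on-demand lm_root page serves as the extra top level, its first entry points at the existing pae_root page, and the four pae_root entries point at the per-guest-root shadow pages. The PT_*_MASK values mirror the kernel's PTE flag bits; fake_pa(), the array sizes and the main() walk are invented for illustration and are not kernel code.

/*
 * Illustration only -- not kernel code.  Shows the shadow root chain
 * built for a 32 bit/PAE guest under a 4-level (long mode) shadow:
 * root_hpa -> lm_root -> pae_root -> four shadow roots.
 */
#include <stdint.h>
#include <stdio.h>

#define PT_PRESENT_MASK   (1ULL << 0)
#define PT_WRITABLE_MASK  (1ULL << 1)
#define PT_USER_MASK      (1ULL << 2)
#define PT_ACCESSED_MASK  (1ULL << 5)

static uint64_t fake_pa(void *p)           /* stand-in for __pa() */
{
	return (uint64_t)(uintptr_t)p;
}

int main(void)
{
	static uint64_t shadow_root[4][512];   /* four PT32_ROOT_LEVEL shadow roots */
	static uint64_t pae_root[512];         /* vcpu->arch.mmu.pae_root */
	static uint64_t lm_root[512];          /* vcpu->arch.mmu.lm_root (on demand) */
	uint64_t pm_mask, root_hpa;
	int i;

	/* Lower-level entries need A/W/U set when the shadow is long mode. */
	pm_mask = PT_PRESENT_MASK |
		  PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;

	for (i = 0; i < 4; i++)
		pae_root[i] = fake_pa(shadow_root[i]) | pm_mask;

	/* The extra level: lm_root[0] points back at pae_root. */
	lm_root[0] = fake_pa(pae_root) | pm_mask;
	root_hpa   = fake_pa(lm_root);

	printf("root_hpa       -> lm_root  @ %#llx\n",
	       (unsigned long long)root_hpa);
	printf("lm_root[0]     -> pae_root @ %#llx\n",
	       (unsigned long long)(lm_root[0] & ~0xfffULL));
	printf("pae_root[0..3] -> shadow roots, e.g. %#llx\n",
	       (unsigned long long)(pae_root[0] & ~0xfffULL));
	return 0;
}

As the subject line says, this extra level is what allows a long mode shadow to sit on top of a legacy guest page table; without it, the 3-level pae_root could not be used as the root of a 4-level walk.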