From patchwork Sat Sep 15 15:35:21 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1462181
From: Christoffer Dall
Subject: [PATCH 08/15] KVM: ARM: Memory virtualization setup
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Date: Sat, 15 Sep 2012 11:35:21 -0400
Message-ID: <20120915153521.21241.56054.stgit@ubuntu>
In-Reply-To: <20120915153359.21241.86002.stgit@ubuntu>
References: <20120915153359.21241.86002.stgit@ubuntu>

This commit introduces the framework for guest memory management
through the use of 2nd stage translation. Each VM has a pointer to a
level-1 table (the pgd field in struct kvm_arch) which is used for the
2nd stage translations.
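
For orientation, the per-VM stage-2 state touched by this patch can be
pictured as follows (an illustrative sketch only, based on the fields
used in the diff below; the exact struct kvm_arch layout is defined
elsewhere in the series and the types here are approximate):

  struct kvm_arch {
          pgd_t *pgd;           /* level-1 stage-2 translation table */
          spinlock_t pgd_lock;  /* serializes stage-2 table updates */
          u64 vmid_gen;         /* VMID generation; 0 marks it invalid */
  };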
Entries are added when handling guest faults (later patch) and the
table itself can be allocated and freed through the following
functions implemented in arch/arm/kvm/mmu.c:
 - kvm_alloc_stage2_pgd(struct kvm *kvm);
 - kvm_free_stage2_pgd(struct kvm *kvm);

Each entry in the TLBs and caches is tagged with a VMID identifier in
addition to ASIDs. VMIDs are assigned consecutively to VMs in the
order that the VMs are executed, and the caches and TLBs are
invalidated when the VMID space is exhausted; this allows more than
255 simultaneously running guests.

The 2nd stage pgd is allocated in kvm_arch_init_vm(). The table is
freed in kvm_arch_destroy_vm(). Both functions are called from the
main KVM code.

We pre-allocate page table memory to be able to synchronize using a
spinlock and be called under rcu_read_lock from the MMU notifiers. We
steal the mmu_memory_cache implementation from x86 and adapt it for
our specific usage.

We support MMU notifiers (thanks to Marc Zyngier) through kvm_unmap_hva
and kvm_set_spte_hva.

Finally, define kvm_phys_addr_ioremap() to map a device at a guest
IPA, which is used by VGIC support to map the virtual CPU interface
registers to the guest. This support is added by Marc Zyngier.

Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_asm.h  |    2 
 arch/arm/include/asm/kvm_host.h |   18 ++
 arch/arm/include/asm/kvm_mmu.h  |    9 +
 arch/arm/kvm/Kconfig            |    1 
 arch/arm/kvm/arm.c              |   38 ++++
 arch/arm/kvm/exports.c          |    1 
 arch/arm/kvm/interrupts.S       |    8 +
 arch/arm/kvm/mmu.c              |  377 +++++++++++++++++++++++++++++++++++++++
 8 files changed, 453 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 6c40e55..201ec1f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -29,6 +29,7 @@
 #define ARM_EXCEPTION_HVC	7
 
 #ifndef __ASSEMBLY__
+struct kvm;
 struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
@@ -43,6 +44,7 @@ extern char __kvm_hyp_code_start[];
 extern char __kvm_hyp_code_end[];
 
 extern void __kvm_flush_vm_context(void);
+extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 24959f4..f0c72b9 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -169,4 +169,22 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
 struct kvm_one_reg;
 int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+
+#define KVM_ARCH_WANT_MMU_NOTIFIER
+struct kvm;
+int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
+int kvm_unmap_hva_range(struct kvm *kvm,
+			unsigned long start, unsigned long end);
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+
+/* We do not have shadow page tables, hence the empty hooks */
+static inline int kvm_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	return 0;
+}
+
+static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	return 0;
+}
 #endif /* __ARM_KVM_HOST_H__ */
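
For context, a rough sketch of how the generic MMU-notifier glue in
virt/kvm/kvm_main.c reaches the kvm_unmap_hva() hook declared above
(simplified from memory of the upstream code of this era, not the
verbatim implementation; locking, SRCU and flush details elided):

  static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
                                               struct mm_struct *mm,
                                               unsigned long address)
  {
          struct kvm *kvm = mmu_notifier_to_kvm(mn);

          /*
           * The host is about to reclaim or change this page; drop the
           * guest's stage-2 mapping so it faults instead of using a
           * stale translation.
           */
          kvm_unmap_hva(kvm, address);
  }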
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 8252921..11f4c3a 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -33,4 +33,13 @@ int create_hyp_mappings(void *from, void *to);
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_hyp_pmds(void);
 
+int kvm_alloc_stage2_pgd(struct kvm *kvm);
+void kvm_free_stage2_pgd(struct kvm *kvm);
+int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
+			  phys_addr_t pa, unsigned long size);
+
+int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
+
+void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
+
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index a07ddcc..47c5500 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -36,6 +36,7 @@ config KVM_ARM_HOST
	depends on KVM
	depends on MMU
	depends on CPU_V7 && ARM_VIRT_EXT
+	select MMU_NOTIFIER
	---help---
	  Provides host support for ARM processors.
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 6f35aec..b97ebd0 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -82,12 +82,34 @@ void kvm_arch_sync_events(struct kvm *kvm)
 {
 }
 
+/**
+ * kvm_arch_init_vm - initializes a VM data structure
+ * @kvm:	pointer to the KVM struct
+ */
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
+	int ret = 0;
+
 	if (type)
 		return -EINVAL;
 
-	return 0;
+	ret = kvm_alloc_stage2_pgd(kvm);
+	if (ret)
+		goto out_fail_alloc;
+	spin_lock_init(&kvm->arch.pgd_lock);
+
+	ret = create_hyp_mappings(kvm, kvm + 1);
+	if (ret)
+		goto out_free_stage2_pgd;
+
+	/* Mark the initial VMID generation invalid */
+	kvm->arch.vmid_gen = 0;
+
+	return ret;
+out_free_stage2_pgd:
+	kvm_free_stage2_pgd(kvm);
+out_fail_alloc:
+	return ret;
 }
 
 int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
@@ -105,10 +127,16 @@ int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
 	return 0;
 }
 
+/**
+ * kvm_arch_destroy_vm - destroy the VM data structure
+ * @kvm:	pointer to the KVM struct
+ */
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	int i;
 
+	kvm_free_stage2_pgd(kvm);
+
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
 			kvm_arch_vcpu_free(kvm->vcpus[i]);
@@ -190,7 +218,13 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	if (err)
 		goto free_vcpu;
 
+	err = create_hyp_mappings(vcpu, vcpu + 1);
+	if (err)
+		goto vcpu_uninit;
+
 	return vcpu;
+vcpu_uninit:
+	kvm_vcpu_uninit(vcpu);
 free_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 out:
@@ -199,6 +233,8 @@ out:
 
 void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 {
+	kvm_mmu_free_memory_caches(vcpu);
+	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
diff --git a/arch/arm/kvm/exports.c b/arch/arm/kvm/exports.c
index 8ebdf07..f39f823 100644
--- a/arch/arm/kvm/exports.c
+++ b/arch/arm/kvm/exports.c
@@ -33,5 +33,6 @@ EXPORT_SYMBOL_GPL(__kvm_hyp_code_end);
 
 EXPORT_SYMBOL_GPL(__kvm_vcpu_run);
 
 EXPORT_SYMBOL_GPL(__kvm_flush_vm_context);
+EXPORT_SYMBOL_GPL(__kvm_tlb_flush_vmid);
 
 EXPORT_SYMBOL_GPL(smp_send_reschedule);
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index bf09801..edf9ed5 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -31,6 +31,14 @@ __kvm_hyp_code_start:
	.globl __kvm_hyp_code_start
 
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+@ Flush per-VMID TLBs
+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+
+ENTRY(__kvm_tlb_flush_vmid)
+	bx	lr
+ENDPROC(__kvm_tlb_flush_vmid)
+
+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 @ Flush TLBs and instruction caches of current CPU for all VMIDs
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
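
The create_hyp_mappings(kvm, kvm + 1) and create_hyp_mappings(vcpu,
vcpu + 1) calls above follow a general rule: any data structure that
the Hyp-mode code dereferences must also be mapped into the Hyp
address space. A minimal sketch of the pattern, using a hypothetical
object of type struct my_data:

  struct my_data *d = kzalloc(sizeof(*d), GFP_KERNEL);
  if (!d)
          return -ENOMEM;

  /* Make [d, d + 1) visible to code running in Hyp mode */
  err = create_hyp_mappings(d, d + 1);
  if (err) {
          kfree(d);
          return err;
  }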
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 6a7dfd4..ea17a97 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -23,10 +23,43 @@
 #include
 #include
 #include
+#include
 #include
 
 static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
 
+static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
+				  int min, int max)
+{
+	void *page;
+
+	BUG_ON(max > KVM_NR_MEM_OBJS);
+	if (cache->nobjs >= min)
+		return 0;
+	while (cache->nobjs < max) {
+		page = (void *)__get_free_page(PGALLOC_GFP);
+		if (!page)
+			return -ENOMEM;
+		cache->objects[cache->nobjs++] = page;
+	}
+	return 0;
+}
+
+static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
+{
+	while (mc->nobjs)
+		free_page((unsigned long)mc->objects[--mc->nobjs]);
+}
+
+static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
+{
+	void *p;
+
+	BUG_ON(!mc || !mc->nobjs);
+	p = mc->objects[--mc->nobjs];
+	return p;
+}
+
 static void free_ptes(pmd_t *pmd, unsigned long addr)
 {
 	pte_t *pte;
@@ -200,7 +233,351 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t addr)
 	return __create_hyp_mappings(from, to, &pfn);
 }
 
+/**
+ * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
+ * @kvm:	The KVM struct pointer for the VM.
+ *
+ * Allocates the 1st level table only of size defined by PGD2_ORDER (can
+ * support either full 40-bit input addresses or limited to 32-bit input
+ * addresses). Clears the allocated pages.
+ *
+ * Note we don't need locking here as this is only called when the VM is
+ * created, which can only be done once.
+ */
+int kvm_alloc_stage2_pgd(struct kvm *kvm)
+{
+	pgd_t *pgd;
+
+	if (kvm->arch.pgd != NULL) {
+		kvm_err("kvm_arch already initialized?\n");
+		return -EINVAL;
+	}
+
+	pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, PGD2_ORDER);
+	if (!pgd)
+		return -ENOMEM;
+
+	memset(pgd, 0, PTRS_PER_PGD2 * sizeof(pgd_t));
+	clean_dcache_area(pgd, PTRS_PER_PGD2 * sizeof(pgd_t));
+	kvm->arch.pgd = pgd;
+
+	return 0;
+}
+
+static void free_guest_pages(pte_t *pte, unsigned long addr)
+{
+	unsigned int i;
+	struct page *pte_page;
+
+	pte_page = virt_to_page(pte);
+
+	for (i = 0; i < PTRS_PER_PTE; i++) {
+		if (pte_present(*pte))
+			put_page(pte_page);
+		pte++;
+	}
+
+	WARN_ON(page_count(pte_page) != 1);
+}
+
+static void free_stage2_ptes(pmd_t *pmd, unsigned long addr)
+{
+	unsigned int i;
+	pte_t *pte;
+	struct page *pmd_page;
+
+	pmd_page = virt_to_page(pmd);
+
+	for (i = 0; i < PTRS_PER_PMD; i++, addr += PMD_SIZE) {
+		BUG_ON(pmd_sect(*pmd));
+		if (!pmd_none(*pmd) && pmd_table(*pmd)) {
+			pte = pte_offset_kernel(pmd, addr);
+			free_guest_pages(pte, addr);
+			pte_free_kernel(NULL, pte);
+
+			put_page(pmd_page);
+		}
+		pmd++;
+	}
+
+	WARN_ON(page_count(pmd_page) != 1);
+}
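
The cache helpers above reproduce the x86 pattern: top up the cache in
a sleeping context, then consume pages from it in atomic context where
allocation would be illegal. The calling convention looks like this
(illustrative; kvm_phys_addr_ioremap() below is the first in-tree
user, and the stage-2 fault path follows in a later patch):

  struct kvm_mmu_memory_cache cache = { 0, };

  /* May sleep: pre-allocate enough pages for one stage-2 table walk */
  ret = mmu_topup_memory_cache(&cache, 2, 2);
  if (ret)
          return ret;

  /* May not sleep: mmu_memory_cache_alloc() just pops a page */
  spin_lock(&kvm->arch.pgd_lock);
  stage2_set_pte(kvm, &cache, addr, &pte);
  spin_unlock(&kvm->arch.pgd_lock);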
+/**
+ * kvm_free_stage2_pgd - free all stage-2 tables
+ * @kvm:	The KVM struct pointer for the VM.
+ *
+ * Walks the level-1 page table pointed to by kvm->arch.pgd and frees all
+ * underlying level-2 and level-3 tables before freeing the actual
+ * level-1 table and setting the struct pointer to NULL.
+ *
+ * Note we don't need locking here as this is only called when the VM is
+ * destroyed, which can only be done once.
+ */
+void kvm_free_stage2_pgd(struct kvm *kvm)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	unsigned long long i, addr;
+	struct page *pud_page;
+
+	if (kvm->arch.pgd == NULL)
+		return;
+
+	/*
+	 * We do this slightly different than other places, since we need
+	 * more than 32 bits and for instance pgd_addr_end converts to
+	 * unsigned long.
+	 */
+	addr = 0;
+	for (i = 0; i < PTRS_PER_PGD2; i++) {
+		addr = i * (unsigned long long)PGDIR_SIZE;
+		pgd = kvm->arch.pgd + i;
+		pud = pud_offset(pgd, addr);
+		pud_page = virt_to_page(pud);
+
+		if (pud_none(*pud))
+			continue;
+
+		BUG_ON(pud_bad(*pud));
+
+		pmd = pmd_offset(pud, addr);
+		free_stage2_ptes(pmd, addr);
+		pmd_free(NULL, pmd);
+		put_page(pud_page);
+	}
+
+	WARN_ON(page_count(pud_page) != 1);
+	free_pages((unsigned long)kvm->arch.pgd, PGD2_ORDER);
+	kvm->arch.pgd = NULL;
+}
+
+/**
+ * stage2_clear_pte -- Clear a stage-2 PTE.
+ * @kvm:	The VM pointer
+ * @addr:	The physical address of the PTE
+ *
+ * Clear a stage-2 PTE, lowering the various ref-counts. Also takes
+ * care of invalidating the TLBs. Must be called while holding
+ * pgd_lock, otherwise another faulting VCPU may come in and mess
+ * things behind our back.
+ */
+static void stage2_clear_pte(struct kvm *kvm, phys_addr_t addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	struct page *page;
+
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return;
+
+	pte = pte_offset_kernel(pmd, addr);
+	set_pte_ext(pte, __pte(0), 0);
+
+	page = virt_to_page(pte);
+	put_page(page);
+	if (page_count(page) != 1) {
+		__kvm_tlb_flush_vmid(kvm);
+		return;
+	}
+
+	/* Need to remove pte page */
+	pmd_clear(pmd);
+	pte_free_kernel(NULL, (pte_t *)((unsigned long)pte & PAGE_MASK));
+
+	page = virt_to_page(pmd);
+	put_page(page);
+	if (page_count(page) != 1) {
+		__kvm_tlb_flush_vmid(kvm);
+		return;
+	}
+
+	pud_clear(pud);
+	pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
+
+	page = virt_to_page(pud);
+	put_page(page);
+	__kvm_tlb_flush_vmid(kvm);
+}
+
+static void stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+			   phys_addr_t addr, const pte_t *new_pte)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte, old_pte;
+
+	/* Create 2nd stage page table mapping - Level 1 */
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud)) {
+		if (!cache)
+			return; /* ignore calls from kvm_set_spte_hva */
+		pmd = mmu_memory_cache_alloc(cache);
+		pud_populate(NULL, pud, pmd);
+		pmd += pmd_index(addr);
+		get_page(virt_to_page(pud));
+	} else
+		pmd = pmd_offset(pud, addr);
+
+	/* Create 2nd stage page table mapping - Level 2 */
+	if (pmd_none(*pmd)) {
+		if (!cache)
+			return; /* ignore calls from kvm_set_spte_hva */
+		pte = mmu_memory_cache_alloc(cache);
+		clean_pte_table(pte);
+		pmd_populate_kernel(NULL, pmd, pte);
+		pte += pte_index(addr);
+		get_page(virt_to_page(pmd));
+	} else
+		pte = pte_offset_kernel(pmd, addr);
+
+	/* Create 2nd stage page table mapping - Level 3 */
+	old_pte = *pte;
+	set_pte_ext(pte, *new_pte, 0);
+	if (pte_present(old_pte))
+		__kvm_tlb_flush_vmid(kvm);
+	else
+		get_page(virt_to_page(pte));
+}
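
The get_page()/put_page() calls in stage2_set_pte() and
stage2_clear_pte() keep each table page's refcount equal to one plus
its number of live entries, so an unreferenced table is detected when
the count drops back to one. A worked example for a single pte table
page (counts are page_count() values):

  allocate + pmd_populate_kernel  -> count = 1  (empty table)
  map IPA A (new pte)             -> get_page: count = 2
  map IPA B (new pte)             -> get_page: count = 3
  clear IPA A                     -> put_page: count = 2, table kept
  clear IPA B                     -> put_page: count = 1, last entry
                                     gone: pmd_clear + pte_free_kernel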
+/**
+ * kvm_phys_addr_ioremap - map a device range to guest IPA
+ *
+ * @kvm:	The KVM pointer
+ * @guest_ipa:	The IPA at which to insert the mapping
+ * @pa:		The physical address of the device
+ * @size:	The size of the mapping
+ */
+int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
+			  phys_addr_t pa, unsigned long size)
+{
+	phys_addr_t addr, end;
+	pgprot_t prot;
+	int ret = 0;
+	unsigned long pfn;
+	struct kvm_mmu_memory_cache cache = { 0, };
+
+	end = (guest_ipa + size + PAGE_SIZE - 1) & PAGE_MASK;
+	prot = __pgprot(get_mem_type_prot_pte(MT_DEVICE) | L_PTE_USER |
+			L_PTE2_READ | L_PTE2_WRITE);
+	pfn = __phys_to_pfn(pa);
+
+	for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
+		pte_t pte = pfn_pte(pfn, prot);
+
+		ret = mmu_topup_memory_cache(&cache, 2, 2);
+		if (ret)
+			goto out;
+		spin_lock(&kvm->arch.pgd_lock);
+		stage2_set_pte(kvm, &cache, addr, &pte);
+		spin_unlock(&kvm->arch.pgd_lock);
+
+		pfn++;
+	}
+
+out:
+	mmu_free_memory_cache(&cache);
+	return ret;
+}
+
 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	return -EINVAL;
 }
+
+static void handle_hva_to_gpa(struct kvm *kvm, unsigned long hva,
+			      void (*handler)(struct kvm *kvm,
+					      unsigned long hva, gpa_t gpa,
+					      void *data),
+			      void *data)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+
+	slots = kvm_memslots(kvm);
+
+	/* we only care about the pages that the guest sees */
+	kvm_for_each_memslot(memslot, slots) {
+		unsigned long start = memslot->userspace_addr;
+		unsigned long end;
+
+		end = start + (memslot->npages << PAGE_SHIFT);
+		if (hva >= start && hva < end) {
+			gpa_t gpa;
+			gpa_t gpa_offset = hva - start;
+			gpa = (memslot->base_gfn << PAGE_SHIFT) + gpa_offset;
+			handler(kvm, hva, gpa, data);
+		}
+	}
+}
+
+static void kvm_unmap_hva_handler(struct kvm *kvm, unsigned long hva,
+				  gpa_t gpa, void *data)
+{
+	spin_lock(&kvm->arch.pgd_lock);
+	stage2_clear_pte(kvm, gpa);
+	spin_unlock(&kvm->arch.pgd_lock);
+}
+
+int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
+{
+	if (!kvm->arch.pgd)
+		return 0;
+
+	handle_hva_to_gpa(kvm, hva, &kvm_unmap_hva_handler, NULL);
+
+	return 0;
+}
+
+int kvm_unmap_hva_range(struct kvm *kvm,
+			unsigned long start, unsigned long end)
+{
+	unsigned long addr;
+	int ret;
+
+	BUG_ON((start | end) & (~PAGE_MASK));
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		ret = kvm_unmap_hva(kvm, addr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void kvm_set_spte_handler(struct kvm *kvm, unsigned long hva,
+				 gpa_t gpa, void *data)
+{
+	pte_t *pte = (pte_t *)data;
+
+	spin_lock(&kvm->arch.pgd_lock);
+	stage2_set_pte(kvm, NULL, gpa, pte);
+	spin_unlock(&kvm->arch.pgd_lock);
+}
+
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+{
+	pte_t stage2_pte;
+
+	if (!kvm->arch.pgd)
+		return;
+
+	stage2_pte = pfn_pte(pte_pfn(pte), PAGE_KVM_GUEST);
+	handle_hva_to_gpa(kvm, hva, &kvm_set_spte_handler, &stage2_pte);
+}
+
+void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu)
+{
+	mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+}
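
As a usage sketch, the VGIC support mentioned in the commit message
would map the GIC virtual CPU interface into the guest roughly like
this (hypothetical variable names; the real call appears in the VGIC
patches):

  /*
   * Map one page of the virtual CPU interface at the IPA where the
   * guest expects its GIC CPU interface to live.
   */
  ret = kvm_phys_addr_ioremap(kvm, vgic_cpu_base /* guest IPA */,
                              vcpu_if_pa /* host PA */, PAGE_SIZE);
  if (ret)
          return ret;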