From patchwork Thu Apr 14 13:21:04 2016
X-Patchwork-Submitter: Suzuki K Poulose
X-Patchwork-Id: 8836271
From: Suzuki K Poulose
To: linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, marc.zyngier@arm.com, christoffer.dall@linaro.org,
    mark.rutland@arm.com, will.deacon@arm.com, catalin.marinas@arm.com,
    linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
    Suzuki K Poulose
Subject: [PATCH v2 16/17] kvm-arm: Cleanup stage2 pgd handling
Date: Thu, 14 Apr 2016 14:21:04 +0100
Message-Id: <1460640065-27658-17-git-send-email-suzuki.poulose@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1460640065-27658-1-git-send-email-suzuki.poulose@arm.com>
References: <1460640065-27658-1-git-send-email-suzuki.poulose@arm.com>

Now that we don't have any fake page table levels for arm64, clean up
the common code to get rid of the dead code.
Cc: Marc Zyngier
Acked-by: Christoffer Dall
Signed-off-by: Suzuki K Poulose
---
Changes since V1:
 - Un-inline kvm_alloc_hwpgd(), kvm_get_hwpgd_size()
---
 arch/arm/include/asm/kvm_mmu.h   | 19 -------------------
 arch/arm/kvm/arm.c               |  2 +-
 arch/arm/kvm/mmu.c               | 37 ++++++-------------------------------
 arch/arm64/include/asm/kvm_mmu.h | 18 ------------------
 4 files changed, 7 insertions(+), 69 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 50aa901..f8b7920 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -146,25 +146,6 @@ static inline bool kvm_page_empty(void *ptr)
 #define hyp_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
 #define hyp_pud_table_empty(pudp) (0)
 
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
-{
-	return kvm->arch.pgd;
-}
-
-static inline unsigned int kvm_get_hwpgd_size(void)
-{
-	return PTRS_PER_S2_PGD * sizeof(pgd_t);
-}
-
-static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
-{
-	return hwpgd;
-}
-
-static inline void kvm_free_fake_pgd(pgd_t *pgd)
-{
-}
-
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	__cpuc_flush_dcache_area((a), (l))
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dded1b7..be4b639 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -448,7 +448,7 @@ static void update_vttbr(struct kvm *kvm)
 	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
 
 	/* update vttbr to be used with the new vmid */
-	pgd_phys = virt_to_phys(kvm_get_hwpgd(kvm));
+	pgd_phys = virt_to_phys(kvm->arch.pgd);
 	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
 	kvm->arch.vttbr = pgd_phys | vmid;
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index d3fa96e..42eefab 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -43,6 +43,7 @@ static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
 
+#define S2_PGD_SIZE	(PTRS_PER_S2_PGD * sizeof(pgd_t))
 #define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
 
 #define KVM_S2PTE_FLAG_IS_IOMAP	(1UL << 0)
@@ -736,20 +737,6 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
 				     __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 }
 
-/* Free the HW pgd, one page at a time */
-static void kvm_free_hwpgd(void *hwpgd)
-{
-	free_pages_exact(hwpgd, kvm_get_hwpgd_size());
-}
-
-/* Allocate the HW PGD, making sure that each page gets its own refcount */
-static void *kvm_alloc_hwpgd(void)
-{
-	unsigned int size = kvm_get_hwpgd_size();
-
-	return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-}
-
 /**
  * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
  * @kvm:	The KVM struct pointer for the VM.
@@ -764,29 +751,17 @@ static void *kvm_alloc_hwpgd(void)
 int kvm_alloc_stage2_pgd(struct kvm *kvm)
 {
 	pgd_t *pgd;
-	void *hwpgd;
 
 	if (kvm->arch.pgd != NULL) {
 		kvm_err("kvm_arch already initialized?\n");
 		return -EINVAL;
 	}
 
-	hwpgd = kvm_alloc_hwpgd();
-	if (!hwpgd)
+	/* Allocate the HW PGD, making sure that each page gets its own refcount */
+	pgd = alloc_pages_exact(S2_PGD_SIZE, GFP_KERNEL | __GFP_ZERO);
+	if (!pgd)
 		return -ENOMEM;
 
-	/*
-	 * When the kernel uses more levels of page tables than the
-	 * guest, we allocate a fake PGD and pre-populate it to point
-	 * to the next-level page table, which will be the real
-	 * initial page table pointed to by the VTTBR.
-	 */
-	pgd = kvm_setup_fake_pgd(hwpgd);
-	if (IS_ERR(pgd)) {
-		kvm_free_hwpgd(hwpgd);
-		return PTR_ERR(pgd);
-	}
-
 	kvm_clean_pgd(pgd);
 	kvm->arch.pgd = pgd;
 	return 0;
@@ -874,8 +849,8 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 		return;
 
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
-	kvm_free_hwpgd(kvm_get_hwpgd(kvm));
-	kvm_free_fake_pgd(kvm->arch.pgd);
+	/* Free the HW pgd, one page at a time */
+	free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE);
 	kvm->arch.pgd = NULL;
 }
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index e3fee0a..249c4fc 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -141,24 +141,6 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
 }
 
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
-{
-	return kvm->arch.pgd;
-}
-
-static inline unsigned int kvm_get_hwpgd_size(void)
-{
-	return PTRS_PER_S2_PGD * sizeof(pgd_t);
-}
-
-static inline pgd_t *kvm_setup_fake_pgd(pgd_t *hwpgd)
-{
-	return hwpgd;
-}
-
-static inline void kvm_free_fake_pgd(pgd_t *pgd)
-{
-}
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
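
For illustration only, here is a standalone userspace sketch (not kernel code) of what
the new S2_PGD_SIZE constant computes and of the single alloc/free pairing the patch
switches to. The 40-bit IPA space and the 1GB-per-entry coverage used below are assumed
example values, not figures taken from this patch.

/*
 * Standalone sketch: mimics the stage-2 PGD sizing used by S2_PGD_SIZE.
 * KVM_PHYS_SHIFT (assumed 40-bit IPA) and S2_PGD_ENTRY_SHIFT (assumed
 * 1GB per top-level entry) are example values, not kernel constants.
 */
#include <stdio.h>
#include <stdlib.h>

#define KVM_PHYS_SHIFT		40
#define S2_PGD_ENTRY_SHIFT	30
#define PTRS_PER_S2_PGD		(1UL << (KVM_PHYS_SHIFT - S2_PGD_ENTRY_SHIFT))
#define S2_PGD_SIZE		(PTRS_PER_S2_PGD * sizeof(unsigned long))

int main(void)
{
	/* stands in for alloc_pages_exact(S2_PGD_SIZE, GFP_KERNEL | __GFP_ZERO) */
	void *pgd = calloc(1, S2_PGD_SIZE);

	if (!pgd)
		return 1;

	printf("PTRS_PER_S2_PGD = %lu, S2_PGD_SIZE = %zu bytes\n",
	       PTRS_PER_S2_PGD, S2_PGD_SIZE);

	/* stands in for free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE) */
	free(pgd);
	return 0;
}

With these example values the stage-2 PGD is 1024 entries (8KB), i.e. more than one page,
which is why the exact-size allocator is used so that each backing page keeps its own
refcount, matching the comment in kvm_alloc_stage2_pgd() above.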