From patchwork Mon Mar 14 16:53:05 2016
From: Suzuki K Poulose <suzuki.poulose@arm.com>
To: christoffer.dall@linaro.org, marc.zyngier@arm.com
Cc: mark.rutland@arm.com, kvm@vger.kernel.org,
    Suzuki K Poulose <suzuki.poulose@arm.com>, catalin.marinas@arm.com,
    will.deacon@arm.com, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 06/12] kvm-arm: Pass kvm parameter for pagetable helpers
Date: Mon, 14 Mar 2016 16:53:05 +0000
Message-Id: <1457974391-28456-7-git-send-email-suzuki.poulose@arm.com>
In-Reply-To: <1457974391-28456-1-git-send-email-suzuki.poulose@arm.com>
References: <1457974391-28456-1-git-send-email-suzuki.poulose@arm.com>

Pass 'kvm' to the existing kvm_p.d_* page table wrappers to prepare them
for choosing between the hyp and the stage2 page tables. There are no
functional changes yet. While at it, convert the wrappers from macros to
static inline functions.
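The conversion follows a single pattern throughout: each macro wrapper
becomes a static inline function that additionally takes the (for now
unused) 'kvm' pointer, so that a later patch can dispatch on it. A minimal
sketch of the pattern, mirroring the kvm_pud_huge() hunk below:

  /* before: macro wrapper, no handle on which page table is being walked */
  #define kvm_pud_huge(_x)	pud_huge(_x)

  /*
   * after: static inline that carries 'kvm'; the argument is deliberately
   * unused until a later change picks hyp vs. stage2 based on it
   */
  static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
  {
  	return pud_huge(pud);
  }

As a side effect, the static inlines get proper argument type checking and
avoid the multiple-evaluation hazards of macro arguments, at no runtime cost.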
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 38 +++++++++++++++++++++++++++-----------
 arch/arm/kvm/mmu.c               | 34 +++++++++++++++++-----------------
 arch/arm64/include/asm/kvm_mmu.h | 31 ++++++++++++++++++++++++++-----
 3 files changed, 70 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 4448e77..17c6781 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -45,6 +45,7 @@
 #ifndef __ASSEMBLY__

 #include
+#include
 #include
 #include
@@ -135,22 +136,37 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
 }

-#define kvm_pud_huge(_x)	pud_huge(_x)
+static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
+{
+	return pud_huge(pud);
+}
+

 /* Open coded p*d_addr_end that can deal with 64bit addresses */
-#define kvm_pgd_addr_end(addr, end)					\
-({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
-	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
-})
+static inline phys_addr_t
+kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;
+	return (boundary - 1 < end - 1) ? boundary : end;
+}

-#define kvm_pud_addr_end(addr,end)		(end)
+static inline phys_addr_t
+kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return end;
+}

-#define kvm_pmd_addr_end(addr, end)					\
-({	u64 __boundary = ((addr) + PMD_SIZE) & PMD_MASK;		\
-	(__boundary - 1 < (end) - 1)? __boundary: (end);		\
-})
+static inline phys_addr_t
+kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t boundary = (addr + PMD_SIZE) & PMD_MASK;
+	return (boundary - 1 < end - 1) ? boundary : end;
+}

-#define kvm_pgd_index(addr)		pgd_index(addr)
+static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
+{
+	return pgd_index(addr);
+}

 static inline bool kvm_page_empty(void *ptr)
 {
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index d1e9a71..22b4c99 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -165,7 +165,7 @@ static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
 static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
 {
 	pmd_t *pmd_table = pmd_offset(pud, 0);
-	VM_BUG_ON(pud_huge(*pud));
+	VM_BUG_ON(kvm_pud_huge(kvm, *pud));
 	pud_clear(pud);
 	kvm_tlb_flush_vmid_ipa(kvm, addr);
 	pmd_free(NULL, pmd_table);
@@ -236,7 +236,7 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,

 	start_pmd = pmd = pmd_offset(pud, addr);
 	do {
-		next = kvm_pmd_addr_end(addr, end);
+		next = kvm_pmd_addr_end(kvm, addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd)) {
 				pmd_t old_pmd = *pmd;
@@ -265,9 +265,9 @@ static void unmap_puds(struct kvm *kvm, pgd_t *pgd,

 	start_pud = pud = pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(addr, end);
+		next = kvm_pud_addr_end(kvm, addr, end);
 		if (!pud_none(*pud)) {
-			if (pud_huge(*pud)) {
+			if (kvm_pud_huge(kvm, *pud)) {
 				pud_t old_pud = *pud;

 				pud_clear(pud);
@@ -294,9 +294,9 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	phys_addr_t addr = start, end = start + size;
 	phys_addr_t next;

-	pgd = pgdp + kvm_pgd_index(addr);
+	pgd = pgdp + kvm_pgd_index(kvm, addr);
 	do {
-		next = kvm_pgd_addr_end(addr, end);
+		next = kvm_pgd_addr_end(kvm, addr, end);
 		if (!pgd_none(*pgd))
 			unmap_puds(kvm, pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
@@ -322,7 +322,7 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,

 	pmd = pmd_offset(pud, addr);
 	do {
-		next = kvm_pmd_addr_end(addr, end);
+		next = kvm_pmd_addr_end(kvm, addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd))
 				kvm_flush_dcache_pmd(*pmd);
@@ -340,9 +340,9 @@ static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,

 	pud = pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(addr, end);
+		next = kvm_pud_addr_end(kvm, addr, end);
 		if (!pud_none(*pud)) {
-			if (pud_huge(*pud))
+			if (kvm_pud_huge(kvm, *pud))
 				kvm_flush_dcache_pud(*pud);
 			else
 				stage2_flush_pmds(kvm, pud, addr, next);
@@ -358,9 +358,9 @@ static void stage2_flush_memslot(struct kvm *kvm,
 	phys_addr_t next;
 	pgd_t *pgd;

-	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
 	do {
-		next = kvm_pgd_addr_end(addr, end);
+		next = kvm_pgd_addr_end(kvm, addr, end);
 		stage2_flush_puds(kvm, pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
 }
@@ -802,7 +802,7 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	pgd_t *pgd;
 	pud_t *pud;

-	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
 	if (WARN_ON(pgd_none(*pgd))) {
 		if (!cache)
 			return NULL;
@@ -1040,7 +1040,7 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
 	pmd = pmd_offset(pud, addr);

 	do {
-		next = kvm_pmd_addr_end(addr, end);
+		next = kvm_pmd_addr_end(NULL, addr, end);
 		if (!pmd_none(*pmd)) {
 			if (huge_pmd(*pmd)) {
 				if (!kvm_s2pmd_readonly(pmd))
@@ -1067,10 +1067,10 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)

 	pud = pud_offset(pgd, addr);
 	do {
-		next = kvm_pud_addr_end(addr, end);
+		next = kvm_pud_addr_end(NULL, addr, end);
 		if (!pud_none(*pud)) {
 			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(kvm_pud_huge(*pud));
+			BUG_ON(kvm_pud_huge(NULL, *pud));
 			stage2_wp_pmds(pud, addr, next);
 		}
 	} while (pud++, addr = next, addr != end);
@@ -1087,7 +1087,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 	pgd_t *pgd;
 	phys_addr_t next;

-	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(kvm, addr);
 	do {
 		/*
 		 * Release kvm_mmu_lock periodically if the memory region is
@@ -1099,7 +1099,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
 			cond_resched_lock(&kvm->mmu_lock);

-		next = kvm_pgd_addr_end(addr, end);
+		next = kvm_pgd_addr_end(kvm, addr, end);
 		if (pgd_present(*pgd))
 			stage2_wp_puds(pgd, addr, next);
 	} while (pgd++, addr = next, addr != end);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a01d87d..416ca23 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -71,6 +71,7 @@
 #include
 #include
 #include
+#include

 #define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
@@ -141,11 +142,28 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
 }

-#define kvm_pud_huge(_x)	pud_huge(_x)
+static inline int kvm_pud_huge(struct kvm *kvm, pud_t pud)
+{
+	return pud_huge(pud);
+}
+
+static inline phys_addr_t
+kvm_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return pgd_addr_end(addr, end);
+}
+
+static inline phys_addr_t
+kvm_pud_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return pud_addr_end(addr, end);
+}

-#define kvm_pgd_addr_end(addr, end)	pgd_addr_end(addr, end)
-#define kvm_pud_addr_end(addr, end)	pud_addr_end(addr, end)
-#define kvm_pmd_addr_end(addr, end)	pmd_addr_end(addr, end)
+static inline phys_addr_t
+kvm_pmd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	return pmd_addr_end(addr, end);
+}

 /*
  * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
@@ -161,7 +179,10 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 #endif
 #define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)

-#define kvm_pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+static inline phys_addr_t kvm_pgd_index(struct kvm *kvm, phys_addr_t addr)
+{
+	return (addr >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1);
+}

 /*
  * If we are concatenating first level stage-2 page tables, we would have less