From patchwork Wed Jan 17 10:39:56 2018
From: Andrew Jones
X-Patchwork-Id: 10169001
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, cdall@linaro.org,
    david@redhat.com, lvivier@redhat.com, thuth@redhat.com
Subject: [PATCH kvm-unit-tests v2 03/12] arm/arm64: fix virt_to_phys
Date: Wed, 17 Jan 2018 11:39:56 +0100
Message-Id: <20180117104005.29211-4-drjones@redhat.com>
In-Reply-To: <20180117104005.29211-1-drjones@redhat.com>
References: <20180117104005.29211-1-drjones@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Since switching to the vm_memalign() allocator, virt_to_phys() hasn't
been returning the correct address, as it was assuming an identity map.

Signed-off-by: Andrew Jones
---
 lib/arm/asm/page.h      |  8 +++-----
 lib/arm/asm/pgtable.h   | 16 ++++++++++++----
 lib/arm/mmu.c           | 20 ++++++++++++++++++++
 lib/arm64/asm/page.h    |  8 +++-----
 lib/arm64/asm/pgtable.h | 12 ++++++++++--
 5 files changed, 48 insertions(+), 16 deletions(-)

diff --git a/lib/arm/asm/page.h b/lib/arm/asm/page.h
index fc1b30e95567..039c9f7b3d49 100644
--- a/lib/arm/asm/page.h
+++ b/lib/arm/asm/page.h
@@ -34,16 +34,14 @@ typedef struct { pteval_t pgprot; } pgprot_t;
 #define __pgd(x)		((pgd_t) { (x) } )
 #define __pgprot(x)		((pgprot_t) { (x) } )
 
-#ifndef __virt_to_phys
-#define __phys_to_virt(x)	((unsigned long) (x))
-#define __virt_to_phys(x)	(x)
-#endif
-
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 
 #define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
 #define pfn_to_virt(pfn)	__va((pfn) << PAGE_SHIFT)
 
+extern phys_addr_t __virt_to_phys(unsigned long addr);
+extern unsigned long __phys_to_virt(phys_addr_t addr);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASMARM_PAGE_H_ */
diff --git a/lib/arm/asm/pgtable.h b/lib/arm/asm/pgtable.h
index a95e63002ef3..b614bce9528a 100644
--- a/lib/arm/asm/pgtable.h
+++ b/lib/arm/asm/pgtable.h
@@ -14,6 +14,14 @@
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
 
+/*
+ * We can convert va <=> pa page table addresses with simple casts
+ * because we always allocate their pages with alloc_page(), and
+ * alloc_page() always returns identity mapped pages.
+ */
+#define pgtable_va(x)		((void *)(unsigned long)(x))
+#define pgtable_pa(x)		((unsigned long)(x))
+
 #define pgd_none(pgd)		(!pgd_val(pgd))
 #define pmd_none(pmd)		(!pmd_val(pmd))
 #define pte_none(pte)		(!pte_val(pte))
@@ -32,7 +40,7 @@ static inline pgd_t *pgd_alloc(void)
 
 static inline pmd_t *pgd_page_vaddr(pgd_t pgd)
 {
-	return __va(pgd_val(pgd) & PHYS_MASK & (s32)PAGE_MASK);
+	return pgtable_va(pgd_val(pgd) & PHYS_MASK & (s32)PAGE_MASK);
 }
 
 #define pmd_index(addr) \
@@ -52,14 +60,14 @@ static inline pmd_t *pmd_alloc(pgd_t *pgd, unsigned long addr)
 {
 	if (pgd_none(*pgd)) {
 		pmd_t *pmd = pmd_alloc_one();
-		pgd_val(*pgd) = __pa(pmd) | PMD_TYPE_TABLE;
+		pgd_val(*pgd) = pgtable_pa(pmd) | PMD_TYPE_TABLE;
 	}
 	return pmd_offset(pgd, addr);
 }
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
-	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
+	return pgtable_va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
 }
 
 #define pte_index(addr) \
@@ -79,7 +87,7 @@ static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
 {
 	if (pmd_none(*pmd)) {
 		pte_t *pte = pte_alloc_one();
-		pmd_val(*pmd) = __pa(pte) | PMD_TYPE_TABLE;
+		pmd_val(*pmd) = pgtable_pa(pte) | PMD_TYPE_TABLE;
 	}
 	return pte_offset(pmd, addr);
 }
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index b9387efe0065..9da3be38b339 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -171,3 +171,23 @@ void *setup_mmu(phys_addr_t phys_end)
 	mmu_enable(mmu_idmap);
 	return mmu_idmap;
 }
+
+phys_addr_t __virt_to_phys(unsigned long addr)
+{
+	if (mmu_enabled()) {
+		pgd_t *pgtable = current_thread_info()->pgtable;
+		return virt_to_pte_phys(pgtable, (void *)addr);
+	}
+	return addr;
+}
+
+unsigned long __phys_to_virt(phys_addr_t addr)
+{
+	/*
+	 * We don't guarantee that phys_to_virt(virt_to_phys(vaddr)) == vaddr, but
+	 * the default page tables do identity map all physical addresses, which
+	 * means phys_to_virt(virt_to_phys((void *)paddr)) == paddr.
+	 */
+	assert(!mmu_enabled() || __virt_to_phys(addr) == addr);
+	return addr;
+}
diff --git a/lib/arm64/asm/page.h b/lib/arm64/asm/page.h
index f06a6941971c..46af552b91c7 100644
--- a/lib/arm64/asm/page.h
+++ b/lib/arm64/asm/page.h
@@ -42,16 +42,14 @@ typedef struct { pgd_t pgd; } pmd_t;
 #define pmd_val(x)		(pgd_val((x).pgd))
 #define __pmd(x)		((pmd_t) { __pgd(x) } )
 
-#ifndef __virt_to_phys
-#define __phys_to_virt(x)	((unsigned long) (x))
-#define __virt_to_phys(x)	(x)
-#endif
-
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 
 #define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
 #define pfn_to_virt(pfn)	__va((pfn) << PAGE_SHIFT)
 
+extern phys_addr_t __virt_to_phys(unsigned long addr);
+extern unsigned long __phys_to_virt(phys_addr_t addr);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASMARM64_PAGE_H_ */
diff --git a/lib/arm64/asm/pgtable.h b/lib/arm64/asm/pgtable.h
index 941a850c3f30..5860abe5b08b 100644
--- a/lib/arm64/asm/pgtable.h
+++ b/lib/arm64/asm/pgtable.h
@@ -18,6 +18,14 @@
 #include
 #include
 
+/*
+ * We can convert va <=> pa page table addresses with simple casts
+ * because we always allocate their pages with alloc_page(), and
+ * alloc_page() always returns identity mapped pages.
+ */
+#define pgtable_va(x)		((void *)(unsigned long)(x))
+#define pgtable_pa(x)		((unsigned long)(x))
+
 #define pgd_none(pgd)		(!pgd_val(pgd))
 #define pmd_none(pmd)		(!pmd_val(pmd))
 #define pte_none(pte)		(!pte_val(pte))
@@ -40,7 +48,7 @@ static inline pgd_t *pgd_alloc(void)
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
-	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
+	return pgtable_va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
 }
 
 #define pte_index(addr) \
@@ -60,7 +68,7 @@ static inline pte_t *pte_alloc(pmd_t *pmd, unsigned long addr)
 {
 	if (pmd_none(*pmd)) {
 		pte_t *pte = pte_alloc_one();
-		pmd_val(*pmd) = __pa(pte) | PMD_TYPE_TABLE;
+		pmd_val(*pmd) = pgtable_pa(pte) | PMD_TYPE_TABLE;
 	}
 	return pte_offset(pmd, addr);
 }